I am experimentally running a Postgres Service on a local Kubernetes cluster consisting of two Ubuntu 18.04 machines.
My postgres pod is stuck in ContainerCreating, and kubectl describe pod postgres gave me this message:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14s default-scheduler Successfully assigned default/postgres-57b4695bc9-8wklp to cumulusg2
Warning FailedCreatePodSandBox 11s kubelet, cumulusg2 Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "832ee34a687d8a1aabb92b57ec6b6b5b8d5f55889c996c2bd4bc4ddcb106fdd2" network for pod "postgres-57b4695bc9-8wklp": networkPlugin cni failed to set up pod "postgres-57b4695bc9-8wklp_default" network: error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), failed to clean up sandbox container "832ee34a687d8a1aabb92b57ec6b6b5b8d5f55889c996c2bd4bc4ddcb106fdd2" network for pod "postgres-57b4695bc9-8wklp": networkPlugin cni failed to teardown pod "postgres-57b4695bc9-8wklp_default" network: error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
Normal SandboxChanged 8s (x2 over 9s) kubelet, cumulusg2 Pod sandbox changed, it will be killed and re-created.
The error message confuses me (it references crd.projectcalico.org even though Weave Net is the only network plugin I installed, shown below) and I am not sure where to start, so I'll lay out my process up to this point. To initialize the cluster, I used:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
then the kubeadm join command, and after that:
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"
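(For clarity on that URL: the k8s-version query parameter is just the kubectl version output, base64-encoded with newlines stripped. The same pipeline on a fixed string, with an arbitrary input, looks like this:)

```shell
# base64-encode a string and strip the trailing newline,
# as the k8s-version query parameter above does
printf 'example' | base64 | tr -d '\n'
# → ZXhhbXBsZQ==
```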
To create the Postgres Database, I used 3 yaml files:
postgres-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123
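(I'm aware the password would normally live in a Secret rather than a ConfigMap; that's unrelated to the error, but for reference a sketch of the equivalent Secret, with postgres-secret being a name of my own choosing, would be:)

```yaml
# hypothetical Secret holding only the password entry from above;
# stringData lets the value stay plain text in the manifest
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  POSTGRES_PASSWORD: admin123
```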
postgres-volumes.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
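(One thing I know about this PV: hostPath resolves /mnt/data on whichever node the pod lands on, so on a two-node cluster the data only exists on that node. If it matters later, a sketch of an alternative using a local volume pinned to one node would look like the following; cumulusg2 is the node name from the events above and may not be the right node to pin to:)

```yaml
# hypothetical alternative: a local PV pinned to a single node via
# nodeAffinity, so the same host directory is always used
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - cumulusg2
```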
and postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
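(Since I don't set a nodePort explicitly, Kubernetes assigns one from the default 30000-32767 range. When I eventually want a stable port, I'd write the ports entry like this; 30432 is an arbitrary value in that range:)

```yaml
# sketch: pinning the NodePort instead of letting it be auto-assigned
ports:
  - port: 5432
    targetPort: 5432
    nodePort: 30432
```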