I am using the YAML file below to deploy a container on Kubernetes, with a replication factor of 3, on a hosted machine.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mojo-deployment
  labels:
    app: mojo
spec:
  selector:
    matchLabels:
      app: mojo
  replicas: 3
  template:
    metadata:
      labels:
        app: mojo
    spec:
      containers:
        - name: mojo
          image: mojo:1.0.1
          ports:
            - containerPort: 9000
---
# Services Info
apiVersion: v1
kind: Service
metadata:
  name: mojo-services
spec:
  selector:
    app: mojo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
# Ingress Configuration
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mojo-ingress
  annotations:
    kubernetes.io/ingress.class: mojo
spec:
  backend:
    serviceName: mojo-services
    servicePort: 80
Steps:
1. Built the Docker image using `docker build -t mojo:1.0 .`
2. `docker image ls` shows me an image id.
3. Do I also need a docker command to deploy the image in a container, or will kubectl take care of that?
4. Ran `kubectl apply -f Prod.yaml`. It shows:
   deployment.apps/mojo-deployment created
   service/mojo-services created
   ingress.networking.k8s.io/mojo-ingress created
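For reference, a minimal set of standard kubectl commands to inspect what the apply created (the pod name used here is the one from the describe output further down):

# List the objects created from Prod.yaml and where the pods were (or were not) scheduled
kubectl get deployments,replicasets,pods,services -o wide

# Show scheduling and image-pull events for a single pod
kubectl describe pod mojo-deployment-6665bdc557-s57m7

# Show recent events for the whole namespace, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp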
Questions:
1. Do I need to build the container before deploying the YAML file? I tried it, but Kubernetes is still not running it.
2. Why are all the pods showing Pending status?
3. The Deployment is also showing a pending status.
4. I am also trying to access the Ingress on :80 and cannot reach it.
Pod description:
Name:           mojo-deployment-6665bdc557-s57m7
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=mojo
                pod-template-hash=6665bdc557
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mojo-deployment-6665bdc557
Containers:
  mojo:
    Image:        mojo:1.0
    Port:         9000/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tjx6p (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-tjx6p:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tjx6p
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  70s (x45 over 67m)  default-scheduler  0/1 nodes are available: 1 node(s) were unschedulable.
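The FailedScheduling warning above says the single node is unschedulable. A minimal sketch of how that can be inspected (standard kubectl commands; <node-name> is a placeholder for whatever `kubectl get nodes` prints):

# List nodes and check the STATUS column for SchedulingDisabled / NotReady
kubectl get nodes

# Look for taints on the node
kubectl describe node <node-name> | grep -i -A2 taints

# If the node was cordoned, make it schedulable again
kubectl uncordon <node-name>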
After removing the taint from the master node:
1. kubectl get node returns
2. kubectl get pod returns
3. kubectl describe node: https://gist.github.com/amixpal/333bffd6ab91def749267f30d4ffb079

containers:
  - name: mojo
    image: mojo:1.0.1
    ports:
      - containerPort: 9000
Please answer: how does your mojo:1.0.1 docker image appear on the Kubernetes nodes?
All pods wait for the image to be available.
The Deployment waits for all of its pods to reach Running status.
The Service and Ingress become available after the Deployment is ready.
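A minimal sketch of how a locally built image can be made visible to the cluster, assuming a single-node local setup such as minikube or kind (neither is stated in the question); with a tag other than :latest the default imagePullPolicy is IfNotPresent, so a pre-loaded image will be used:

# Build the image locally; the tag must match the Deployment spec exactly
docker build -t mojo:1.0.1 .

# minikube: copy the local image into the cluster's container runtime
minikube image load mojo:1.0.1

# kind: copy the local image onto the kind node(s)
kind load docker-image mojo:1.0.1

# Or push to a registry the nodes can pull from (<registry> is a placeholder)
docker tag mojo:1.0.1 <registry>/mojo:1.0.1
docker push <registry>/mojo:1.0.1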
If you have only one node (the master), then usually a taint is added to it which makes the master node unschedulable. Remove the taint from the master (and from all other nodes, if there is more than one) using the command below.
kubectl taint nodes --all node-role.kubernetes.io/master-
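A quick way to confirm the taint is gone and the pods can now be scheduled (<node-name> is a placeholder):

# Should print "Taints: <none>" for the master node
kubectl describe node <node-name> | grep Taints

# Watch the pending pods get scheduled and, once the image is available, reach Running
kubectl get pods -w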
Edit: Based on the node describe output, the CNI is not ready. Please make sure all CNI-related Pods are running and healthy.
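A minimal sketch of how to check the CNI and node readiness (the exact pod names depend on which CNI plugin, e.g. flannel, calico or weave, was installed):

# CNI pods normally run in kube-system; all of them should be Running and Ready
kubectl get pods -n kube-system -o wide

# While the CNI is not initialised, the node's Ready condition usually stays False
# with a "network plugin is not ready" message in the describe output
kubectl describe node <node-name>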