I'm new to Kubernetes - I've worked with docker-compose until now (on one machine). Now I want to expand my work to a cluster of nodes and get Kubernetes capabilities (service discovery, load balancing, health checks, etc.).
I'm working on local servers (RHEL7) and trying to run my first Kubernetes environment (following this doc), with no luck.
I run:
hack/local-up-cluster.sh
then (in another terminal):
cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
And:
cluster/kubectl.sh create -f run-aii.yaml
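To see whether the objects were created and whether the pod got scheduled anywhere, something along these lines should work (plain kubectl commands, nothing specific to my setup):
# Check that the Deployment exists and where (if anywhere) its pod landed.
cluster/kubectl.sh get deployments
cluster/kubectl.sh get pods -o wide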
My run-aii.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
      - name: aii
        image: localhost:5000/dev/aii
        ports:
        - containerPort: 5144
        env:
        - name: KAFKA_IP
          value: kafka
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /home/aii/core
          name: core-aii
          readOnly: true
        - mountPath: /home/aii/genome
          name: genome-aii
          readOnly: true
        - mountPath: /home/aii/main
          name: main-aii
          readOnly: true
      - name: kafka
        image: localhost:5000/dev/kafkazoo
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /root/config
          name: config-data
          readOnly: true
      - name: ws
        image: localhost:5000/dev/ws
        ports:
        - containerPort: 3000
      volumes:
      - name: scripts-data
        hostPath:
          path: /home/aii/general/infra/script
      - name: config-data
        hostPath:
          path: /home/aii/general/infra/config
      - name: core-aii
        hostPath:
          path: /home/aii/general/core
      - name: genome-aii
        hostPath:
          path: /home/aii/general/genome
      - name: main-aii
        hostPath:
          path: /home/aii/general/main
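One thing worth noting about this manifest: the hostPath volumes refer to directories on the node itself, so it's worth making sure they exist there and contain what the containers expect. A quick check (paths copied from the manifest above) would be:
# Run this on the node: the hostPath directories should exist there
# and hold the scripts/config the containers need.
ls -ld /home/aii/general/infra/script /home/aii/general/infra/config \
       /home/aii/general/core /home/aii/general/genome /home/aii/general/main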
Additional info:
[aii@localhost kubernetes]$ cluster/kubectl.sh describe pod aii-4073165096-nkdq6
Name:           aii-4073165096-nkdq6
Namespace:      default
Node:           /
Labels:         pod-template-hash=4073165096,run=aii
Status:         Pending
IP:
Controllers:    ReplicaSet/aii-4073165096
Containers:
  aii:
    Image:      localhost:5000/dev/aii
    Port:       5144/TCP
    QoS Tier:
      cpu:      BestEffort
      memory:   BestEffort
    Environment Variables:
      KAFKA_IP: kafka
  kafka:
    Image:      localhost:5000/dev/kafkazoo
    Port:
    QoS Tier:
      cpu:      BestEffort
      memory:   BestEffort
    Environment Variables:
  ws:
    Image:      localhost:5000/dev/ws
    Port:       3000/TCP
    QoS Tier:
      cpu:      BestEffort
      memory:   BestEffort
    Environment Variables:
Volumes:
  scripts-data:
    Type:       HostPath (bare host directory volume)
    Path:       /home/aii/general/infra/script
  config-data:
    Type:       HostPath (bare host directory volume)
    Path:       /home/aii/general/infra/config
  core-aii:
    Type:       HostPath (bare host directory volume)
    Path:       /home/aii/general/core
  genome-aii:
    Type:       HostPath (bare host directory volume)
    Path:       /home/aii/general/genome
  main-aii:
    Type:       HostPath (bare host directory volume)
    Path:       /home/aii/general/main
  default-token-hiwwo:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-hiwwo
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath  Type     Reason            Message
  ---------  --------  -----  ----                  -------------  ----     ------            -------
  37s        6s        6      {default-scheduler }                 Warning  FailedScheduling  no nodes available to schedule pods
docker images:
[aii@localhost kubernetes]$ docker images
REPOSITORY                                       TAG                IMAGE ID       CREATED        SIZE
kube-build                                       build-47381c8eab   f221edba30ed   25 hours ago   1.628 GB
aii                                              latest             1026cd920723   4 days ago     1.427 GB
localhost:5000/dev/aii                           latest             1026cd920723   4 days ago     1.427 GB
registry                                         2                  34bccec54793   4 days ago     171.2 MB
localhost:5000/dev/ws                            latest             fa7c5f6ef83a   12 days ago    706.8 MB
ws                                               latest             fa7c5f6ef83a   12 days ago    706.8 MB
kafkazoo                                         latest             84c687b0bd74   2 weeks ago    697.7 MB
localhost:5000/dev/kafkazoo                      latest             84c687b0bd74   2 weeks ago    697.7 MB
node                                             4.4                1a93433cee73   2 weeks ago    647 MB
gcr.io/google_containers/hyperkube-amd64         v1.2.4             3c4f38def75b   2 weeks ago    316.7 MB
nginx                                            latest             3edcc5de5a79   2 weeks ago    182.7 MB
gcr.io/google_containers/debian-iptables-arm     v3                 aca727a3023c   5 weeks ago    120.5 MB
gcr.io/google_containers/debian-iptables-amd64   v3                 49b5e076215b   6 weeks ago    129.4 MB
spotify/kafka                                    latest             30d3cef1fe8e   3 months ago   421.6 MB
gcr.io/google_containers/kube-cross              v1.4.2-1           8d2874b4f7e9   3 months ago   1.551 GB
wurstmeister/zookeeper                           latest             dc00f1198a44   4 months ago   468.7 MB
centos                                           latest             61b442687d68   5 months ago   196.6 MB
centos                                           centos7.2.1511     38ea04e19303   5 months ago   194.6 MB
hypriot/armhf-busybox                            latest             d7ae69033898   6 months ago   1.267 MB
gcr.io/google_containers/etcd                    2.2.1              a6cd91debed1   6 months ago   28.19 MB
gcr.io/google_containers/pause                   2.0                2b58359142b0   7 months ago   350.2 kB
gcr.io/google_containers/kube-registry-proxy     0.3                b86ac3f11a0c   9 months ago   151.2 MB
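Since the images come from the local registry at localhost:5000, it might also be worth confirming the registry actually serves them; the Docker Registry v2 API exposes a catalog and a per-repository tag list (assuming the registry container from the list above is the one listening on port 5000):
# Assumes the 'registry:2' container is reachable on localhost:5000.
curl http://localhost:5000/v2/_catalog
curl http://localhost:5000/v2/dev/aii/tags/list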
What does 'no nodes available to schedule pods' mean? Where should I configure/define the nodes? Where and how should I specify the IPs of the physical machines?
EDIT:
[aii@localhost kubernetes]$ kubectl get nodes
NAME        STATUS    AGE
127.0.0.1   Ready     1m
and:
[aii@localhost kubernetes]$ kubectl describe nodes
Name:                   127.0.0.1
Labels:                 kubernetes.io/hostname=127.0.0.1
CreationTimestamp:      Tue, 24 May 2016 09:58:00 +0300
Phase:
Conditions:
  Type       Status  LastHeartbeatTime                LastTransitionTime               Reason            Message
  ----       ------  -----------------                ------------------               ------            -------
  OutOfDisk  True    Tue, 24 May 2016 09:59:50 +0300  Tue, 24 May 2016 09:58:10 +0300  KubeletOutOfDisk  out of disk space
  Ready      True    Tue, 24 May 2016 09:59:50 +0300  Tue, 24 May 2016 09:58:10 +0300  KubeletReady      kubelet is posting ready status
Addresses:              127.0.0.1,127.0.0.1
Capacity:
  pods:      110
  cpu:       4
  memory:    8010896Ki
System Info:
  Machine ID:                  b939b024448040469dfdbd3dd3c3e314
  System UUID:                 59FF2897-234D-4069-A5D4-B68648FC7D38
  Boot ID:                     0153b84d-90e1-4fd1-9afa-f4312e89613e
  Kernel Version:              3.10.0-327.4.5.el7.x86_64
  OS Image:                    Red Hat Enterprise Linux
  Container Runtime Version:   docker://1.10.3
  Kubelet Version:             v1.2.4
  Kube-Proxy Version:          v1.2.4
ExternalID:             127.0.0.1
Non-terminated Pods:    (0 in total)
  Namespace  Name  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------  ----  ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Events:
  FirstSeen  LastSeen  Count  From                     SubobjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                     -------------  ------  ------                 -------
  1m         1m        1      {kube-proxy 127.0.0.1}                  Normal  Starting               Starting kube-proxy.
  1m         1m        1      {kubelet 127.0.0.1}                     Normal  Starting               Starting kubelet.
  1m         1m        1      {kubelet 127.0.0.1}                     Normal  NodeHasSufficientDisk  Node 127.0.0.1 status is now: NodeHasSufficientDisk
  1m         1m        1      {controllermanager }                    Normal  RegisteredNode         Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController
  1m         1m        1      {kubelet 127.0.0.1}                     Normal  NodeOutOfDisk          Node 127.0.0.1 status is now: NodeOutOfDisk
  1m         1m        1      {kubelet 127.0.0.1}                     Normal  NodeReady              Node 127.0.0.1 status is now: NodeReady
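The OutOfDisk=True condition is the part that stands out here; if it helps, the full condition (with its reason and message) can also be read from the node object itself:
# Not from the original post: dump the node object and look at
# status.conditions for the OutOfDisk entry.
kubectl get node 127.0.0.1 -o yaml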
But I do have some free space:
[aii@localhost kubernetes]$ df -h
Filesystem             Size  Used  Avail  Use%  Mounted on
/dev/mapper/rhel-root   47G   42G   3.2G   93%  /
devtmpfs               3.9G     0   3.9G    0%  /dev
tmpfs                  3.9G  3.7M   3.9G    1%  /dev/shm
tmpfs                  3.9G   17M   3.9G    1%  /run
tmpfs                  3.9G     0   3.9G    0%  /sys/fs/cgroup
/dev/mapper/rhel-var   485M  288M   198M   60%  /var
/dev/sda1              509M  265M   245M   52%  /boot
tmpfs                  783M   44K   783M    1%  /run/user/1000
/dev/sr0                56M   56M      0  100%  /run/media/aii/VBOXADDITIONS_5.0.18_106667
How much disk space does it need? (I'm working in a VM, so I don't have much.)
It means there are no nodes available in the system for the pods to be scheduled on. Can you provide the output of kubectl get nodes and kubectl describe nodes?
Following the steps described in the local cluster doc should give you a single node. If your node is there (it should be) but just not ready, you can look at the log in /tmp/kubelet.log (in the future, if you're not using a local cluster, look at /var/log/kubelet.log instead) to figure out possible causes.
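For the out-of-disk case specifically, the kubelet log usually says which filesystem it is unhappy about; a rough way to find it (assuming the /tmp/kubelet.log location mentioned above, and that the relevant messages contain the word "disk"):
# The exact wording varies between kubelet versions; this is just a starting point.
grep -i disk /tmp/kubelet.log | tail -20
# If Docker keeps its data under /var/lib/docker, the nearly-full /var above is
# a likely suspect; check where the Docker root actually lives:
docker info | grep -i "root dir"
As far as I know, the kubelet at this version marks the node out of disk when free space on its root or Docker filesystem drops below its --low-diskspace-threshold-mb setting (256 MB by default), so the 198M free on /var would explain it if the Docker root sits under /var/lib/docker.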