Error setting up tools on Kubernetes on a local system

1/29/2017

I am just beginning with Kubernetes. I am following the wiki provided by PipelineIO and setting everything up. I have successfully set up Kubernetes itself, but installing the other tools that I need to run on the cluster hasn't gone so smoothly. Here is the bash script that deploys them:

#!/bin/sh

echo '...MySql...'
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/sql.ml/mysql-rc.yaml
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/sql.ml/mysql-svc.yaml

echo '...HDFS...'
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/hdfs.ml/hdfs-rc.yaml
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/hdfs.ml/hdfs-svc.yaml

echo '...Hive Metastore...'
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/metastore.ml/metastore-rc.yaml
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/metastore.ml/metastore-svc.yaml

echo '...Spark - Master...'
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/apachespark.ml/spark-master-rc.yaml
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/apachespark.ml/spark-master-svc.yaml

echo '...Spark - Worker...'
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/apachespark.ml/spark-worker-rc.yaml
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/apachespark.ml/spark-worker-svc.yaml

echo '...JupyterHub...'
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/jupyterhub.ml/jupyterhub-rc.yaml
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/jupyterhub.ml/jupyterhub-svc.yaml

echo '...Dashboard - Weavescope...'
kubectl create -f https://raw.githubusercontent.com/fluxcapacitor/pipeline/master/dashboard.ml/weavescope/weavescope.yaml
kubectl describe svc weavescope-app

Out of these, only Weave Scope was set up successfully; the rest fail with the error shown in the screenshot below.

[screenshot of the kubectl error output]

I don't understand what went wrong or how to fix it, and searching online hasn't helped. I don't think Kubernetes actually downloaded any Docker images, but I don't know whether that is the problem. Please help.

P.S. If it is in any way relevant:
OS: Ubuntu 16.04
CPU: Intel i5 6200
GPU: Nvidia GeForce 940MX

EDIT 1:
I didn't find anything about a nodeSelector after running kubectl describe node minikube; here's the output:

Name:           minikube
Role:           
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=minikube
Taints:         <none>
CreationTimestamp:  Mon, 30 Jan 2017 02:16:46 +0530
Phase:          
Conditions:
  Type          Status  LastHeartbeatTime           LastTransitionTime          Reason              Message
  ----          ------  -----------------           ------------------          ------              -------
  OutOfDisk         False   Tue, 31 Jan 2017 02:55:10 +0530     Mon, 30 Jan 2017 02:16:46 +0530     KubeletHasSufficientDisk    kubelet has sufficient disk space available
  MemoryPressure    False   Tue, 31 Jan 2017 02:55:10 +0530     Mon, 30 Jan 2017 02:16:46 +0530     KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure      False   Tue, 31 Jan 2017 02:55:10 +0530     Mon, 30 Jan 2017 02:16:46 +0530     KubeletHasNoDiskPressure    kubelet has no disk pressure
  Ready         True    Tue, 31 Jan 2017 02:55:10 +0530     Mon, 30 Jan 2017 02:16:47 +0530     KubeletReady            kubelet is posting ready status
Addresses:      192.168.99.100,192.168.99.100,minikube
Capacity:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                   2
 memory:                2049008Ki
 pods:                  110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                   2
 memory:                2049008Ki
 pods:                  110
System Info:
 Machine ID:            112c60c791a944ff93bbc446e8c28598
 System UUID:           E0D8970E-F2E2-4503-A282-756ADA83592A
 Boot ID:           9305d5d2-11e9-411a-b335-b5aa3d59432e
 Kernel Version:        4.7.2
 OS Image:          Buildroot 2016.08
 Operating System:      linux
 Architecture:          amd64
 Container Runtime Version: docker://1.11.1
 Kubelet Version:       v1.5.1
 Kube-Proxy Version:        v1.5.1
ExternalID:         minikube
Non-terminated Pods:        (4 in total)
  Namespace         Name                    CPU Requests    CPU Limits  Memory Requests Memory Limits
  ---------         ----                    ------------    ----------  --------------- -------------
  default           weavescope-probe-zk779          50m (2%)    50m (2%)    0 (0%)      0 (0%)
  kube-system           kube-addon-manager-minikube     5m (0%)     0 (0%)      50Mi (2%)   0 (0%)
  kube-system           kube-dns-v20-75pq6          110m (5%)   0 (0%)      120Mi (5%)  220Mi (10%)
  kube-system           kubernetes-dashboard-5q16v      0 (0%)      0 (0%)      0 (0%)      0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests  CPU Limits  Memory Requests Memory Limits
  ------------  ----------  --------------- -------------
  165m (8%) 50m (2%)    170Mi (8%)  220Mi (10%)
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  17m       17m     1   {kube-proxy minikube}           Normal      Starting        Starting kube-proxy.
  17m       17m     1   {kubelet minikube}          Normal      Starting        Starting kubelet.
  17m       17m     1   {kubelet minikube}          Warning     ImageGCFailed       unable to find data for container /
  17m       17m     1   {kubelet minikube}          Normal      NodeHasSufficientDisk   Node minikube status is now: NodeHasSufficientDisk
  17m       17m     1   {kubelet minikube}          Normal      NodeHasSufficientMemory Node minikube status is now: NodeHasSufficientMemory
  17m       17m     1   {kubelet minikube}          Normal      NodeHasNoDiskPressure   Node minikube status is now: NodeHasNoDiskPressure
  17m       17m     1   {kubelet minikube}          Warning     Rebooted        Node minikube has been rebooted, boot id: 9305d5d2-11e9-411a-b335-b5aa3d59432e

EDIT 2:

cortana@cortana:~$ kubectl get pods
NAME                       READY     STATUS              RESTARTS   AGE
hdfs-gv0ss                 0/1       Completed           0          8h
jupyterhub-master-tt81t    0/1       ContainerCreating   0          8h
metastore-1-2-1-4qdnz      0/1       ContainerCreating   0          8h
mysql-master-8ksch         0/1       Completed           0          8h
spark-master-2-0-1-r8g9j   1/1       Running             1          8h
spark-worker-2-0-1-9v45w   0/1       Completed           0          8h
weavescope-app-lkl15       0/1       ContainerCreating   0          8h
weavescope-probe-s5zsd     0/1       ContainerCreating   0          6h

All the pods are either Completed or stuck in ContainerCreating; only the Spark master is actually running. How can I get the rest running?
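For reference, these are the commands I understand can be used to inspect a stuck pod (assuming a standard kubectl setup; the pod names are taken from the output above):

```shell
# Show scheduling/creation events for a pod stuck in ContainerCreating
kubectl describe pod jupyterhub-master-tt81t

# Show the container logs of a pod that exited as Completed
kubectl logs mysql-master-8ksch

# Watch pod status changes live
kubectl get pods --watch
```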

-- lee huang
apache-spark
hdfs
kubernetes
ubuntu

1 Answer

1/30/2017

The problem is that the ReplicationControllers you are deploying have a node selector:

  nodeSelector:
    training: "true"

A node selector will affect the scheduling of pods. In this case, the scheduler is looking for nodes that have a label called training, with a value of true.
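As an illustrative sketch (this is not one of the actual PipelineIO manifests; the name and image are placeholders), the nodeSelector sits inside the pod template's spec of a ReplicationController:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rc          # placeholder name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
    spec:
      nodeSelector:
        training: "true"    # pod only schedules onto nodes labeled training=true
      containers:
      - name: example
        image: nginx        # placeholder image
```

A pod created from this controller stays Pending until some node carries every key/value pair listed under nodeSelector.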

My suspicion is that you do not have a node with this label. To look at the labels of a given node, you can use kubectl describe node $NODE_NAME. More information on node selectors is available in the Kubernetes documentation.

Edit: As you can see from kubectl's output, your node does not have the label training=true. The node has only these labels:

Name:           minikube
Role:           
Labels: beta.kubernetes.io/arch=amd64
        beta.kubernetes.io/os=linux
        kubernetes.io/hostname=minikube

For this reason, there is no suitable node on which to deploy your workloads. To give the node the required label, run kubectl label node minikube training=true.
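For example, against the single-node minikube setup from the question:

```shell
# Add the label that the nodeSelector requires
kubectl label node minikube training=true

# Verify that the label is now present
kubectl get nodes --show-labels
```

Once the node is labeled, the scheduler should place the pending pods on it; the ReplicationControllers do not need to be recreated.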

-- AlexBrand
Source: StackOverflow