Kubernetes: ./hack/local-up-cluster.sh requires authentication

2/27/2017

I've updated my local Kubernetes checkout from master (Dec 2016) to v1.5.3 and used hack/local-up-cluster.sh to start my local cluster:

sudo KUBE_ENABLE_CLUSTER_DNS=true \
    SERVICE_CLUSTER_IP_RANGE="10.100.0.0/16" \
    API_HOST_IP=0.0.0.0 \
    hack/local-up-cluster.sh

After the update I have this error:

Creating kube-system namespace
Cluster "local" set.
Context "local" set.
Switched to context "local".
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
Error from server (AlreadyExists): error when creating "namespace.yaml": namespaces "kube-system" already exists
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
deployment "kube-dns" created
Please enter Username: 

How can I fix it?

Full log:

sudo KUBE_ENABLE_CLUSTER_DNS=true \
    SERVICE_CLUSTER_IP_RANGE="10.100.0.0/16" \
    API_HOST_IP=0.0.0.0 \
    hack/local-up-cluster.sh
[sudo] password for dmitry: 
make: Entering directory '/opt/kubernetes'
make[1]: Entering directory '/opt/kubernetes'
can't load package: package .: no buildable Go source files in /opt/kubernetes
can't load package: package .: no buildable Go source files in /opt/kubernetes
can't load package: package .: no buildable Go source files in /opt/kubernetes
can't load package: package .: no buildable Go source files in /opt/kubernetes
make[1]: Leaving directory '/opt/kubernetes'
+++ [0227 19:34:34] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0227 19:34:34] Generating bindata:
    test/e2e/framework/gobindata_util.go
+++ [0227 19:34:35] Building go targets for linux/amd64:
    cmd/kubectl
    cmd/hyperkube
make: Leaving directory '/opt/kubernetes'
API SERVER insecure port is free, proceeding...
API SERVER secure port is free, proceeding...
Detected host and ready to start services.  Doing some housekeeping first...
Using GO_OUT /opt/kubernetes/_output/local/bin/linux/amd64
Starting services now!
Starting etcd
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.FhAud4KuG4 --listen-client-urls http://127.0.0.1:2379 --debug > "/dev/null" 2>/dev/null
Waiting for etcd to come up.
+++ [0227 19:34:38] On try 2, etcd: : http://127.0.0.1:2379
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
Waiting for apiserver to come up
+++ [0227 19:34:39] On try 2, apiserver: : {
  "major": "1",
  "minor": "5",
  "gitVersion": "v1.5.3",
  "gitCommit": "029c3a408176b55c30846f0faedf56aae5992e9b",
  "gitTreeState": "clean",
  "buildDate": "2017-02-27T11:05:22Z",
  "goVersion": "go1.7.4",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Creating kube-system namespace
Cluster "local" set.
Context "local" set.
Switched to context "local".
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
Error from server (AlreadyExists): error when creating "namespace.yaml": namespaces "kube-system" already exists
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
Please enter Username: admin
Please enter Password: ********
deployment "kube-dns" created
Please enter Username:

kubectl config view:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/run/kubernetes/apiserver.crt
    server: https://localhost:6443
  name: local
contexts:
- context:
    cluster: local
    user: ""
  name: local
current-context: local
kind: Config
preferences: {}
users: []

kubectl config get-contexts local:

CURRENT   NAME      CLUSTER   AUTHINFO   NAMESPACE
*         local     local 
-- Dmitry
Tags: cluster-computing, kubernetes

1 Answer

2/28/2017

Your local context is configured without a user, but your cluster is configured with a CA certificate, which means kubectl is talking to the secure port. To communicate over TLS you need either a user with a valid client certificate signed by that CA or a valid user token. That is also why kubectl keeps prompting you: with no credentials in the kubeconfig, it falls back to asking interactively for a username and password on every request.

The script you mentioned prints hints about how to configure your client after it completes. Try following these steps:

cluster/kubectl.sh config set-credentials myself --username=admin --password=admin
cluster/kubectl.sh config set-context local --cluster=local --user=myself
cluster/kubectl.sh config use-context local
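
If the steps succeed, `cluster/kubectl.sh config view` should show a named user and a context that references it, roughly like the fragment below (admin/admin are the default credentials printed by the script; yours may differ):

```yaml
users:
- name: myself
  user:
    username: admin
    password: admin
contexts:
- context:
    cluster: local
    user: myself
  name: local
```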

Another option is to change your local configuration so that kubectl communicates over the insecure port, which defaults to 8080. You can achieve that with the following command:

kubectl config set-cluster local --server=http://localhost:8080
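
After this command the cluster entry in your kubeconfig should point at the insecure endpoint, roughly like this (the certificate-authority line remains but is ignored over plain HTTP; 8080 assumes the API server's default insecure port):

```yaml
clusters:
- cluster:
    certificate-authority: /var/run/kubernetes/apiserver.crt
    server: http://localhost:8080
  name: local
```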

-- Antoine Cotten
Source: StackOverflow