How to start (restart) the Kubernetes apiserver and add username/password authentication

3/17/2018

I am really new to Kubernetes. I created a Kubernetes cluster with this guide using kubeadm. The cluster consists of one master node and two worker nodes. Since I want to access the Kubernetes web UI via the master apiserver (from a browser on my laptop), I modified /etc/kubernetes/manifests/kube-apiserver.yaml following the K8s Web UI and Access control docs. What I did was add the following args and mounts in /etc/kubernetes/manifests/kube-apiserver.yaml:

- --authentication-mode=basic
- --basic-auth-file=/etc/kubernetes/auth.csv
- hostPath:
  path: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
- mountPath: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
  readOnly: true
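
For reference, my understanding from the docs is that the file passed to --basic-auth-file is a CSV with at least three columns (password, user name, user id) and an optional fourth column listing groups; the single entry below is just a made-up placeholder, not my real credentials:

password123,admin,1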

I have a password and user name in the auth.csv file. However, after I modified the .yaml file, my kube-apiserver process crashed. I ran ps -aux | grep kube to see which processes were running: kube-scheduler, kube-controller-manager and /usr/bin/kubelet were all running, but the kube-apiserver process was not found. What is a graceful way to restart the apiserver and bring my cluster back to the state it was in immediately before I changed the .yaml?
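
In case it helps, this is roughly how I poked around after the crash (the container ID is a placeholder for whatever Docker assigned on my master):

ps aux | grep kube                         # which control-plane processes survive
sudo docker ps -a | grep kube-apiserver    # the apiserver runs as a container started by kubelet
sudo docker logs <apiserver-container-id>  # exit reason / flag-parsing errors show up here
sudo journalctl -u kubelet -f              # kubelet logs why the static pod will not start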

In addition, I would appreciate it if someone could show me the exact steps for adding username/password authentication so that I can access the Kubernetes Dashboard UI from a browser on my laptop, or any other way to view the K8s Web UI on my Mac. I found a similar question, but I still cannot make it work.

The environment:

  • Three Ubuntu 16 servers: one master and two worker nodes
  • Kubernetes version 1.9
  • I can SSH to all three machines and have root privileges.

Update: the kube-apiserver.yaml file is attached below.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --insecure-port=0
    - --advertise-address=172.16.28.125
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --secure-port=6443
    - --enable-bootstrap-token-auth=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-allowed-names=front-proxy-client
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --allow-privileged=true
    - --requestheader-username-headers=X-Remote-User
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --service-cluster-ip-range=10.96.0.0/12
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379
    - --authentication-mode=basic
    - --basic-auth-file=/etc/kubernetes/auth.csv
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.9.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.16.28.125
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/auth.csv
      name: kubernetes-dashboard
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: ca-certs-etc-pki
  - hostPath:
      path: /etc/kubernetes/auth.csv
    name: kubernetes-dashboard
status: {}
-- kz28
docker
kubectl
kubernetes
kubernetes-health-check

1 Answer

3/17/2018

Based on the official documentation, --authentication-mode=basic is not a valid option for the Kubernetes 1.9 API server.

Try removing it; hopefully that will help.
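
As a minimal sketch, assuming the default kubeadm layout where kubelet watches /etc/kubernetes/manifests and recreates the static pod whenever the file changes (paths below are the usual kubeadm defaults, adjust if yours differ):

# on the master, edit the manifest: keep --basic-auth-file, delete the invalid flag
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
#   keep:   - --basic-auth-file=/etc/kubernetes/auth.csv
#   delete: - --authentication-mode=basic

# kubelet should restart the static pod on its own; if not, restart kubelet
sudo systemctl restart kubelet

# verify the apiserver is back
sudo docker ps | grep kube-apiserver
kubectl get componentstatuses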

As for exposing your Dashboard for external access, the better way is to use kube-proxy, but if you want to access the Dashboard directly, the only more or less secure way is to use an Ingress.
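
For completeness, a minimal sketch of the proxy approach, assuming you have copied the master's /etc/kubernetes/admin.conf to your laptop and the Dashboard runs as the kubernetes-dashboard service in kube-system (adjust the names and namespace if yours differ):

# start an authenticated local proxy to the apiserver (listens on 127.0.0.1:8001 by default)
kubectl --kubeconfig ~/admin.conf proxy

# then open the Dashboard through the proxy, e.g.
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/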

I recommend using the Helm package manager for all installations in your cluster; it is much easier and more useful than writing all the configs manually.

So, to get the Dashboard behind an Ingress on bare metal, you need to:

  1. Make sure your cluster works again :)
  2. Install Helm on your PC using the official documentation.
  3. Initialize Helm by calling helm init. It will deploy its server part (Tiller) to your Kubernetes cluster. All the details about initialization are in the documentation, but usually it just works.
  4. Now you need to install an Ingress controller. In two words, an Ingress in Kubernetes is a special proxy-like service which gives you a static entry point for applications inside the cluster. We will use the Ingress chart based on Nginx. You can check the available options in its repo. To install it with a basic configuration, call the command below (a quick sanity check follows it):

helm install stable/nginx-ingress --set=controller.service.type=NodePort
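
Then confirm the controller is running and note which NodePorts it was given; the exact resource names depend on the release name Helm generated, so the grep below is just a loose filter:

kubectl get pods | grep nginx-ingress   # controller pod should be Running
kubectl get svc  | grep nginx-ingress   # service should show the assigned NodePorts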

  5. So, now we have an Ingress controller and it is time to install the Dashboard using its chart. I highly recommend using an HTTPS connection instead of HTTP, but for now we will use HTTP to make the deployment faster. You can read about how to enable HTTPS connections on an Ingress here (you will need to add a Secret containing your TLS key and cert and set the TLS configuration in the chart). So, let's install the Dashboard:

helm install stable/kubernetes-dashboard \
  --set=ingress.enabled=True,ingress.hosts=my-dashboard.local
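
If the chart picked those values up, an Ingress resource for my-dashboard.local should appear; a quick way to check (release names are Helm-generated):

helm ls                                       # both releases should be DEPLOYED
kubectl get ingress                           # should list a rule for my-dashboard.local
kubectl get pods | grep kubernetes-dashboard  # Dashboard pod should be Running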

  6. Now check which node your Ingress pod is running on with kubectl describe pod $pod-with-ingress and add the IP address of that node to your hosts file with the FQDN my-dashboard.local.
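
As a concrete sketch of that step, with 172.16.28.200 standing in as a made-up example of the node's address:

# find the node the Ingress controller pod was scheduled to
kubectl describe pod $pod-with-ingress | grep -i node

# on your laptop, map the FQDN to that address (replace the example IP with yours)
echo "172.16.28.200 my-dashboard.local" | sudo tee -a /etc/hosts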

Finally, the Dashboard should be available in your browser at http://my-dashboard.local.

P.S. I also highly recommend setting up RBAC on your cluster to manage the privileges of each user and application in it, including the Dashboard.
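
For example, a quick (and deliberately coarse) way to let the Dashboard's service account see cluster resources while you experiment is a binding like the one below; the account name and namespace depend on how the chart was installed, so check them first, and for real use bind a narrower role instead of cluster-admin:

# find the service account the Dashboard release created
kubectl get serviceaccounts --all-namespaces | grep dashboard

# bind it to cluster-admin for testing only (assumed: kube-system/kubernetes-dashboard)
kubectl create clusterrolebinding dashboard-admin-test \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard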

-- Anton Kostenko
Source: StackOverflow