I have been trying, unsuccessfully, to access the Kubernetes API via an HTTPS ingress, and I am starting to wonder whether that is possible at all.
Any detailed, working guide for direct remote access (without using ssh -> kubectl proxy, to avoid user management on the Kubernetes node) would be appreciated. :)
UPDATE:
Just to make this clearer: this is a bare-metal, on-premises deployment (no GCE, AWS, Azure or any other cloud), and the intention is that some environments will be completely offline (which will add further issues with getting the install packages).
The intention is to be able to use kubectl on a client host with authentication via Keycloak (which also fails when the step-by-step instructions are followed). Administrative access using SSH and then kubectl is not suitable for client access. So it looks like I will have to update the firewall to expose the API port and create a NodePort service.
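A rough sketch of what I mean by that NodePort service, assuming a kubeadm-style control plane where the static apiserver pod carries the component: kube-apiserver label (the service name and nodePort below are just examples, not something from my cluster):

apiVersion: v1
kind: Service
metadata:
  name: apiserver-external        # example name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    component: kube-apiserver     # label kubeadm puts on the static apiserver pod
  ports:
  - name: https
    port: 6443
    targetPort: 6443
    nodePort: 30443               # example port; must be in the node-port range and opened on the firewall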
Setup:
[kubernetes - env] - [FW/SNAT] - [me]
The FW/NAT allows access only on ports 22, 80 and 443.
I have set up an ingress on Kubernetes, but I cannot create a firewall rule to redirect 443 to 6443. The only option seems to be creating an HTTPS ingress that points "api-kubernetes.node.lan" to the kubernetes service on port 6443. The ingress itself is working fine; I have already created a working ingress for the Keycloak auth application.
I have copied the .kube/config from the master node to my machine and placed it in ~/.kube/config (Cygwin environment).
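For the copied config to go through the ingress, the server: field has to point at the external name rather than the internal apiserver address. A sketch only (the certificate-authority-data would also have to match whatever certificate the ingress presents):

clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: https://api-kubernetes.node.lan   # instead of the in-cluster https://<master-ip>:6443
  name: kubernetes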
What was attempted:
I followed guide [1] to create a kube-ca-signed certificate. Now the browser does not show anything at all. Using curl to access https://api-kubernetes.node.lan/api returns an empty body (with -v I can see an HTTP OK). kubectl now gets the following error:
$ kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"windows/amd64"}
Error from server: the server responded with the status code 0 but did not return more information
Comparing apiserver.pem with my generated certificate, the only difference I see is:
apiserver.pem
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment
generated.crt
X509v3 Extended Key Usage:
TLS Web Server Authentication
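For reference, the extension blocks above can be compared with openssl (a standard command, nothing cluster-specific):

$ openssl x509 -in apiserver.pem -noout -text | grep -A1 'Key Usage'
$ openssl x509 -in generated.crt -noout -text | grep -A1 'Key Usage'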
Ingress configuration:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: kubernetes-api
  namespace: default
  labels:
    app: kubernetes
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - secretName: kubernetes-api-cert
    hosts:
    - api-kubernetes.node.lan
  rules:
  - host: api-kubernetes.node.lan
    http:
      paths:
      - path: "/"
        backend:
          serviceName: kubernetes
          servicePort: 6443
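(Note: the apiserver only speaks TLS on 6443, while the NGINX ingress controller proxies to backends over plain HTTP by default, which may be related to the empty responses. A variant of the annotations above that might be needed, assuming a reasonably recent ingress-nginx; older releases used nginx.ingress.kubernetes.io/secure-backends: "true" instead, and ssl-passthrough additionally requires the controller flag --enable-ssl-passthrough:)

  annotations:
    kubernetes.io/ingress.class: nginx
    # re-encrypt from the ingress controller to the apiserver over HTTPS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # or, alternatively, hand the TLS connection through to the apiserver untouched:
    # nginx.ingress.kubernetes.io/ssl-passthrough: "true"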
For those coming here who just want to reach their Kubernetes API from another network and with another hostname, but don't need to change the API to a port other than the default 6443, an ingress isn't necessary.
If this describes you, all you have to do is add additional SAN entries to your API server's certificate for the DNS name you're connecting with. This article describes the process in detail.
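A sketch of what that looks like on a kubeadm-built cluster, assuming a recent kubeadm (the config format has changed across versions, so treat this as illustrative only):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "api-kubernetes.node.lan"   # the external DNS name you connect with

After updating the config, the apiserver certificate has to be regenerated (on recent kubeadm: move the old apiserver.crt/apiserver.key out of /etc/kubernetes/pki and run kubeadm init phase certs apiserver --config <your-config-file>), then restart the apiserver.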
Partially answering my own question.
For the moment I am satisfied with token-based auth: it allows separate access levels and avoids creating shell users. Keycloak-based dashboard auth worked, but after logging in I was not able to log out. There is no logout option. :D
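A minimal sketch of the token-based setup I mean, assuming a ServiceAccount is acceptable as the identity (the names are examples, and the cluster role binding here grants read-only "view" access):

# create an identity and give it read-only access (example names)
$ kubectl -n kube-system create serviceaccount remote-viewer
$ kubectl create clusterrolebinding remote-viewer --clusterrole=view --serviceaccount=kube-system:remote-viewer
# extract its token and wire it into the local kubeconfig
$ TOKEN=$(kubectl -n kube-system get secret $(kubectl -n kube-system get sa remote-viewer -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
$ kubectl config set-credentials remote-viewer --token=$TOKEN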
And to access the dashboard itself via Ingress, I found a working rewrite rule somewhere: nginx.ingress.kubernetes.io/configuration-snippet: "rewrite ^(/ui)$ $1/ui/ permanent;"
One note: the UI must be accessed with a trailing slash: https://server_address/ui/
You should be able to do it as long as you expose the kube-apiserver pod in the kube-system namespace. I tried it like this:
$ kubectl -n kube-system expose pod kube-apiserver-xxxx --name=apiserver --port 6443
service/apiserver exposed
$ kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apiserver ClusterIP 10.x.x.x <none> 6443/TCP 1m
...
Then, from a machine in the cluster network, I point the cluster entry in my ~/.kube/config at 10.x.x.x:6443:
clusters:
- cluster:
    certificate-authority-data: [REDACTED]
    server: https://10.x.x.x:6443
  name: kubernetes
...
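Equivalently, the same change can be made without editing the file by hand (standard kubectl command; cluster name taken from the snippet above):

$ kubectl config set-cluster kubernetes --server=https://10.x.x.x:6443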
Then:
$ kubectl version --insecure-skip-tls-verify
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
I used --insecure-skip-tls-verify because 10.x.x.x would need to be valid on the server certificate (by default it is not). You can actually fix that as described here: Configure AWS publicIP for a Master in Kubernetes
So maybe a couple of things to check in your case: the certificates under /etc/kubernetes/pki/ on your master.