How should Tyk and Kubernetes be configured for internal K8s TLS?

10/30/2018

The problem:

I am configuring a Tyk Gateway and Dashboard based on the TykTechnologies/tyk-kubernetes repo.

I now wish to secure both the Gateway and Dashboard K8s services with TLS certificates.

I've purchased a certificate to secure the external URLs (https://api.example.com and https://dashboard.example.com) as below, but the cert is not valid for the internal K8s service endpoints, so the Dashboard and Gateway can no longer talk to each other inside the cluster:

$ kubectl logs deployment/tyk-gateway
...
time="Jan 01 00:00:00" level=error msg="Request failed with error Get https://tyk-dashboard.tyk.svc.cluster.local:443/register/node: x509: certificate is valid for *.example.com, not tyk-dashboard.tyk.svc.cluster.local; retrying in 5s"

What I've done so far:

Modified the tyk.conf and tyk_analytics.conf to change the listen port and reference the certificates (min_version 771 is 0x0303, i.e. a TLS 1.2 minimum):

{
    "listen_port": 443,
    "notifications_listen_port": 5000,
    "tyk_api_config": {
        "Host": "https://tyk-gateway.tyk.svc.cluster.local",
        "Port": "443",
        ...
    },
    "http_server_options": {
        "use_ssl": true,
        "server_name": "api.example.com",
        "min_version": 771,
        "certificates": [
            {
                "domain_name": "*.example.com",
                "cert_file": "/etc/ssl/gateway-tls/tls.crt",
                "key_file": "/etc/ssl/gateway-tls/tls.key"
            }
        ]
    },
...
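
The dashboard's tyk_analytics.conf takes an http_server_options block of the same shape, something like this (the /etc/ssl/dashboard-tls path is just where I mount that pod's secret):

{
    "listen_port": 443,
    "http_server_options": {
        "use_ssl": true,
        "certificates": [
            {
                "domain_name": "*.example.com",
                "cert_file": "/etc/ssl/dashboard-tls/tls.crt",
                "key_file": "/etc/ssl/dashboard-tls/tls.key"
            }
        ]
    },
...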

I am mounting the certificates into the Tyk pods via K8s TLS secrets (and similarly for the dashboard):

kubectl create secret tls tyk-gateway --cert=example.com.crt --key=example.com.key
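
And the equivalent for the dashboard, reusing the same wildcard cert:

kubectl create secret tls tyk-dashboard --cert=example.com.crt --key=example.com.key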

And the corresponding K8s deployment update:

...
ports:
- containerPort: 443
- containerPort: 5000
volumeMounts:
...
- name: tyk-gateway-tls
  readOnly: true
  mountPath: "/etc/ssl/gateway-tls"
volumes:
...
- name: tyk-gateway-tls
  secret:
    secretName: tyk-gateway
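
A quick sanity check that the secret is mounted where the config expects (replace <tyk-gateway-pod> with an actual pod name):

$ kubectl exec <tyk-gateway-pod> -- ls /etc/ssl/gateway-tls
tls.crt
tls.key

kubectl create secret tls always stores the pair under the keys tls.crt and tls.key, which is why the cert_file/key_file paths in the configs end that way.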
-- Ieuan
kubernetes
ssl
tyk

1 Answer

10/30/2018

A possible solution I'm considering is to use the certificates.k8s.io API to generate a valid certificate for the service's internal DNS name (tyk-gateway.tyk.svc.cluster.local) that's signed by the K8s cluster's CA, as outlined in the Kubernetes documentation here.

This certificate could then be added to the http_server_options config and bound to the service hostname.
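
In tyk.conf terms that would just mean a second entry in the certificates list, bound to the cluster-local name (the /etc/ssl/internal-tls mount path here is only illustrative):

"http_server_options": {
    "use_ssl": true,
    "certificates": [
        {
            "domain_name": "*.example.com",
            "cert_file": "/etc/ssl/gateway-tls/tls.crt",
            "key_file": "/etc/ssl/gateway-tls/tls.key"
        },
        {
            "domain_name": "tyk-gateway.tyk.svc.cluster.local",
            "cert_file": "/etc/ssl/internal-tls/tls.crt",
            "key_file": "/etc/ssl/internal-tls/tls.key"
        }
    ]
}

Tyk should then pick whichever certificate matches the requested hostname.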

However, that example seems to suggest I need to include the service and pod IPs as SANs in the CSR, and I don't think the certificate would still be valid once the pod is rescheduled and picks up a different IP address.

Generate a private key and certificate signing request (or CSR) by running the following command:

cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "172.168.0.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF

Where 172.168.0.24 is the service’s cluster IP, my-svc.my-namespace.svc.cluster.local is the service’s DNS name, 10.0.34.2 is the pod’s IP and my-pod.my-namespace.pod.cluster.local is the pod’s DNS name.
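
For completeness, the rest of that doc page's flow is to submit the CSR to the cluster, approve it, and extract the signed certificate (API version as per the current docs):

cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF

kubectl certificate approve my-svc.my-namespace
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt

The resulting server.crt plus the server-key.pem emitted by cfssljson could then be loaded into a K8s TLS secret exactly like the purchased cert above.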

-- Ieuan
Source: StackOverflow