I'm using a dockerized microservice architecture running on Kubernetes with Nginx, and am encountering an issue with hostnames. How do you correctly add the hostname to Kubernetes (or perhaps Nginx too)?
The problem: when microservice A (admin) tries to talk to microservice B (session), admin logs the following error and session is not reached:
{ Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's
  altnames: Host: session. is not in the cert's altnames: DNS:*.example.com, example.com
    at Object.checkServerIdentity (tls.js:225:17)
    at TLSSocket.onConnectSecure (_tls_wrap.js:1051:27)
    at TLSSocket.emit (events.js:160:13)
    at TLSSocket._finishInit (_tls_wrap.js:638:8)
  reason: 'Host: session. is not in the cert\'s altnames:
    DNS:*.example.com, example.com',
  host: 'session',
  cert:
   { subject: { OU: 'Domain Control Validated', CN: '*.example.com' },
     issuer: ...
In response to this error, I tried (unsuccessfully) to update the hostname in the Kubernetes config YAML file (based on this). See the added hostname below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: session
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: session
      component: demo
  template:
    metadata:
      labels:
        app: session
        component: demo
    spec:
      hostname: session.example.com   # ----> added host name here
      imagePullSecrets:
        - name: docker-secret
      containers:
        - name: session
          ...
However, when I try to apply this updated config file in Kubernetes, an error emerges saying that I cannot use a period. If I cannot use a period, and the certificate covers *.example.com (i.e. session.example.com), where/how should the hostname be updated?
The Deployment "session" is invalid: spec.template.spec.hostname:
Invalid value: "session.example.com": a DNS-1123 label must
consist of lower case alphanumeric characters or '-', and must start and
end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex
used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')
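For what it's worth, a bare DNS-1123 label does pass validation; the fragment below is only to illustrate the constraint and is obviously not the FQDN the certificate expects:

    spec:
      hostname: session   # a single label is accepted;
                          # "session.example.com" is rejected because
                          # spec.template.spec.hostname cannot contain dots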
Meanwhile, the server_name in the Nginx config file is indeed updated to session.example.com.
upstream session {
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    server_name "session.example.com";   # ---> updated for hostname

    ssl_certificate /etc/ssl/nginx/certificate.pem;
    ssl_certificate_key /etc/ssl/nginx/key.pem;

    location / {
        proxy_pass http://session/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name "session.example.com";   # ---> updated for hostname

    return 301 https://$host$request_uri;
}
How do you suggest fixing this? My goal is for admin to successfully communicate with session.
You can use Kubernetes' own DNS:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
So you can access your pod using its pod DNS name:
When enabled, pods are assigned a DNS A record in the form of
"pod-ip-address.my-namespace.pod.cluster.local"
With a Service you can use:
my-svc.my-namespace.svc.cluster.local
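For example, a minimal sketch of a ClusterIP Service in front of the session pods (the name, namespace, selector, and port below assume the Deployment shown in the question and that the Nginx in the session pod listens on 443):

apiVersion: v1
kind: Service
metadata:
  name: session
  namespace: demo
spec:
  selector:
    app: session
    component: demo
  ports:
    - name: https
      port: 443        # port exposed inside the cluster
      targetPort: 443  # port the Nginx in the session pod listens on (assumed)

With this in place, admin can reach session at session.demo.svc.cluster.local (or simply session from within the demo namespace).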