I'm trying to secure NiFi in a Kubernetes cluster, behind a Traefik proxy. Both are running as services in K8S. Traefik is secured with a public certificate. I want it to forward calls to NiFi while securing the communication between Traefik (as an Ingress Controller) and the backend pods: NiFi.
It looks like the secure configuration should live in my Ingress YAML descriptor, and that I should issue a CA root to generate NiFi's self-signed certificate and load this CA root into Traefik so it can validate the certificate sent by NiFi during the handshake.
But... I can't figure out 1) whether this is the right approach, 2) how to generate my stores (truststore, ...) for NiFi using a CA root, 3) how I should set up my YAML (insecureSkipVerify does not seem to be supported, ...).
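To make point 2 concrete, here is the kind of sequence I imagine, but have not validated (a rough sketch only; the hostname nifi.mynamespace.svc, the aliases and the changeit passwords are placeholders):

# 1) Issue a private CA (self-signed root certificate)
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=my-private-ca"

# 2) Create NiFi's keystore with a key pair for the NiFi service hostname
keytool -genkeypair -alias nifi -keyalg RSA -keysize 4096 \
  -dname "CN=nifi.mynamespace.svc" \
  -keystore keystore.jks -storepass changeit -keypass changeit

# 3) Let the CA sign NiFi's certificate, then import the chain back
keytool -certreq -alias nifi -keystore keystore.jks -storepass changeit -file nifi.csr
openssl x509 -req -in nifi.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out nifi.crt -days 825 -sha256
keytool -importcert -alias ca -file ca.crt -keystore keystore.jks -storepass changeit -noprompt
keytool -importcert -alias nifi -file nifi.crt -keystore keystore.jks -storepass changeit -noprompt

# 4) The truststore only needs the CA certificate
keytool -importcert -alias ca -file ca.crt -keystore truststore.jks -storepass changeit -noprompt

My understanding is that keystore.jks/truststore.jks would then be referenced from nifi.properties (nifi.security.keystore / nifi.security.truststore) and ca.crt would be what Traefik needs to trust, but that is exactly what I'm asking about.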
Thanks in advance for your help.
Cheers,
Olivier
I had the same problem and could solve it with the insecureSkipVerify flag.
The problem with traefik is that NiFi gets the request from traefik and sends its self-signed certificate back to traefik during the handshake. Traefik doesn't accept it, so the handshake fails, leading to a bad_certificate exception in NiFi (logged at level DEBUG, so you have to change the logback.xml file to see it).
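For reference, making that exception visible is just a matter of raising a logger in NiFi's conf/logback.xml to DEBUG, along these lines (the logger name below is only an example; use the class that actually shows up in your nifi-app.log):

<!-- Example only: raise the logger that reports the TLS handshake failure to DEBUG -->
<logger name="org.apache.nifi.web.server" level="DEBUG"/>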
So one solution could be to add your self-signed certificate to traefik, which is not possible at the moment; see this (currently) open issue.
Another solution, without 'insecuring' your existing traefik, would be to put an nginx between traefik and NiFi. Traefik then talks HTTP with nginx, which talks HTTPS with NiFi (this will be the next thing I try); a rough sketch is below.
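A minimal sketch of what that extra hop could look like in nginx.conf (the NiFi service name nifi.mynamespace.svc and port 8443 are assumptions; adjust to your setup):

# nginx listens on plain HTTP for traefik and proxies to NiFi over HTTPS
server {
    listen 80;

    location / {
        proxy_pass https://nifi.mynamespace.svc:8443;
        # trust the CA that signed NiFi's certificate ...
        proxy_ssl_trusted_certificate /certs/ca.crt;
        proxy_ssl_verify on;
        # ... or, while testing, skip verification here instead of in traefik:
        # proxy_ssl_verify off;
        proxy_set_header Host $host;
    }
}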
Or you can set the insecureSkipVerify flag within traefik like I did in this daemonset.yaml:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: 2018-06-21T16:18:46Z
  generation: 4
  labels:
    k8s-app: traefik-internal
    release: infrastructure
  name: traefik-internal
  namespace: infrastructure
  resourceVersion: "18860064"
  selfLink: /apis/extensions/v1beta1/namespaces/infrastructure/daemonsets/traefik-internal
  uid: c64a20e1-776e-11f8-be83-42010a9c0ff6
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: traefik-internal
      name: traefik-internal
      release: infrastructure
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: traefik-internal
        name: traefik-internal
        release: infrastructure
    spec:
      containers:
      - args:
        - --api
        - --ping
        - --defaultEntryPoints=http,https
        - --logLevel=INFO
        - --accessLog
        - --kubernetes
        - --kubernetes.ingressClass=traefik-internal
        - --metrics.prometheus=true
        - --entryPoints=Name:https Address::443 TLS:/certs/cert.pem,/certs/cert.key CA:/certs/clientca.pem
        - --entryPoints=Name:http Address::80 Redirect.EntryPoint:https
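        # accept the self-signed certificate NiFi presents during the backend handshake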
        - --insecureSkipVerify=true
        image: traefik:1.6.0-rc6-alpine
        imagePullPolicy: IfNotPresent
        name: traefik-internal
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /certs
          name: traefik-internal-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: sa-traefik
      serviceAccountName: sa-traefik
      terminationGracePeriodSeconds: 60
      volumes:
      - name: traefik-internal-certs
        secret:
          defaultMode: 420
          secretName: traefik-internal
  templateGeneration: 4
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 4
  updatedNumberScheduled: 3
The insecureSkipVerify flag is set within spec.containers.args.
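If you prefer the config file over CLI flags, the same option also exists as a global setting in Traefik 1.x's traefik.toml. Note that either way it disables backend certificate verification for all backends, not just NiFi:

# traefik.toml (Traefik 1.x static configuration)
insecureSkipVerify = true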
Hope that helps!