Is it possible to maintain a Kerberos token across Kubernetes containers?

1/15/2021

We have a multi-container pod in which a sidecar container runs a script that prepares the TGT cache (/dev/shm/ccache) using a keytab and principal. The location /dev/shm/ is mounted on the host machine at /tmp/shm/ through a corresponding PV and PVC.
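(For reference, the ccache PV/PVC pair is along these lines; the capacity and PV name are illustrative, while the claim name matches the deployment below:)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: xxx-yyy-ccache-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/shm          # host-side location of the shared cache
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xxx-yyy-ccache-pvc
spec:
  storageClassName: ""      # bind to the pre-created PV, no dynamic provisioning
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi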

The other container in the pod is an application that has to access a Hadoop service using the TGT cache maintained by the sidecar container, by mounting that same host location inside the app container.
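For context, the sidecar's script is essentially a kinit refresh loop; expressed inline as a container command it would look like the sketch below (the keytab file name is illustrative; OPTIONS and PERIOD_SECONDS come from the env in the manifest):

command: ["sh", "-c"]
args:
- |
  # Refresh the TGT into the shared cache every PERIOD_SECONDS.
  # $OPTIONS is left unquoted on purpose so "-k user@realm" splits into flags.
  while true; do
    kinit -c /dev/shm/ccache -t /krb5/user.keytab $OPTIONS
    sleep "$PERIOD_SECONDS"
  done

The full deployment manifest is: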

apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx-yyy
  labels:
    name: xxx-yyy
spec:
  selector:
    matchLabels:
      name: xxx-yyy
  replicas: 1
  template:
    metadata:
      labels:
        name: xxx-yyy
    spec:
      nodeSelector:
        kubernetes.io/hostname: aa.bb.ccc
      volumes:
      - name: xxx-yyy-keytabs
        persistentVolumeClaim:
          claimName: xxx-yyy-keytabs-pvc
      - name: xxx-yyy-ccache
        persistentVolumeClaim:
          claimName: xxx-yyy-ccache-pvc
      - name: xxx-yyy-conf
        persistentVolumeClaim:
          claimName: xxx-yyy-conf-pvc
      - name: xxx-yyy-app-conf          # referenced below; claim name assumed to follow the same pattern
        persistentVolumeClaim:
          claimName: xxx-yyy-app-conf-pvc
      containers:
      - name: kinit-sidecar
        image: kinit-sidecar:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "500m"
            memory: "1Gi"
          requests:
            cpu: "100m"
            memory: "500Mi"
        env:
        - name: PERIOD_SECONDS
          value: "30"
        - name: OPTIONS
          value: "-k user@realm"
        volumeMounts:
        - name: xxx-yyy-keytabs
          mountPath: /krb5
        - name: xxx-yyy-ccache
          mountPath: /dev/shm
        - name: xxx-yyy-conf
          mountPath: /etc/krb5.conf.d
      - name: hive-client-app
        image: hive-client-app:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "1500m"
            memory: "4Gi"
          requests:
            cpu: "1000m"
            memory: "2Gi"
        volumeMounts:
        - name: xxx-yyy-keytabs
          mountPath: /krb5
        - name: xxx-yyy-ccache
          mountPath: /dev/shm
        - name: xxx-yyy-conf
          mountPath: /etc/krb5.conf.d
        - name: xxx-yyy-app-conf
          mountPath: /tmp/conf
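(As an aside, since both containers run in the same pod, we understand the cache could also be shared through a pod-local emptyDir volume instead of host-mounted PVs; a minimal sketch of that variant, with both containers still mounting it at /dev/shm:)

volumes:
- name: xxx-yyy-ccache
  emptyDir:
    medium: Memory          # tmpfs-backed and pod-local; no hostPath PV/PVC needed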

However, while trying to establish a Hive JDBC connection using the ticket cache shared between the containers, we get the following exception:

javax.security.sasl.SaslException: GSS initiate failed
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_201]
	at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) ~[libthrift-0.9.3.jar:0.9.3]
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) [libthrift-0.9.3.jar:0.9.3]
	
	........
	Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
	at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[?:1.8.0_201]
	at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122) ~[?:1.8.0_201]
	at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) ~[?:1.8.0_201]
	at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224) ~[?:1.8.0_201]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) ~[?:1.8.0_201]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[?:1.8.0_201]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ~[?:1.8.0_201]
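For reference, the client opens the connection along the lines of the sketch below. Our understanding is that the JVM will not look at /dev/shm/ccache unless it is told to, e.g. via the standard Krb5LoginModule ticketCache option (or the KRB5CCNAME environment variable), and that the JDBC handshake must run inside Subject.doAs so GSSAPI can see the TGT. The JAAS entry name, principal-bearing JDBC URL, and host names here are illustrative:

import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;
import javax.security.auth.login.LoginContext;

public class HiveClient {
    public static void main(String[] args) throws Exception {
        // JAAS configuration pointing Krb5LoginModule at the cache the sidecar maintains.
        Configuration jaas = new Configuration() {
            @Override
            public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
                Map<String, String> opts = new HashMap<>();
                opts.put("useTicketCache", "true");
                opts.put("ticketCache", "/dev/shm/ccache"); // written by kinit-sidecar
                opts.put("doNotPrompt", "true");            // never prompt for a password
                opts.put("renewTGT", "true");
                return new AppConfigurationEntry[] {
                    new AppConfigurationEntry(
                        "com.sun.security.auth.module.Krb5LoginModule",
                        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                        opts)
                };
            }
        };

        LoginContext lc = new LoginContext("HiveClient", null, null, jaas);
        lc.login(); // picks up the TGT from the shared cache

        // The connection must be opened as the authenticated subject, otherwise
        // GSSAPI reports "Failed to find any Kerberos tgt".
        Subject.doAs(lc.getSubject(), (PrivilegedExceptionAction<Void>) () -> {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://hiveserver.example.com:10000/default;principal=hive/_HOST@REALM")) {
                // run queries here
            }
            return null;
        });
    }
}

Running the client with -Dsun.security.krb5.debug=true shows whether this cache is actually being read.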

Are we missing anything here, or is this simply not possible? We are following a blog post on Kerberos authentication from a container, but we are not sure whether our Kubernetes-based implementation diverges from the approach suggested there.

Secondly, is it possible for a TGT cache generated on one host to be used from another host to query HDFS, given that the two containers of our deployment will have different hostnames? How can we achieve this? Thanks!

-- Jaraws
kerberos
kubernetes
openshift

0 Answers