Can't use a persistent host key on an SFTP Kubernetes deployment

9/9/2019

I'm running an atmoz/sftp deployment for SFTP on GKE. I succeeded in mounting a persistent volume and in using a ConfigMap to mount the users' public keys, but I can't mount a host key, so every time my container restarts I get a warning that my host key has changed.

I tried mounting it to /etc/ssh and changing sshd_config, but nothing worked - it says "file already exists, overwrite? (y/n)" and I can't answer the prompt because it happens inside the container.

And if I try to run a command - any command, even echo - the container goes into CrashLoopBackOff.

my ConfigMap:

apiVersion: v1
data:
  ssh_host_rsa_key: |
    <my key>
kind: ConfigMap
metadata:
  name: ssh-host-rsa
  namespace: default

my deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  name: sftp
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sftp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sftp
    spec:
      containers:
      - args:
        - client::::sftp
        env:
        - name: sftp
          value: "1"
        image: atmoz/sftp
        imagePullPolicy: IfNotPresent
        name: sftp
        ports:
        - containerPort: 22
          name: sftp
          protocol: TCP
        resources: {}
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /home/client/sftp
          name: sftp
        - mountPath: /home/client/.ssh/keys
          name: sftp-public-keys
        - mountPath: /etc/ssh
          name: ssh-host-ed25519
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 100
      terminationGracePeriodSeconds: 30
      volumes:
      - name: sftp
        persistentVolumeClaim:
          claimName: sftp-uat
      - configMap:
          defaultMode: 420
          name: sftp-public-keys
        name: sftp-public-keys
      - configMap:
          defaultMode: 420
          name: ssh-host-ed25519
        name: ssh-host-ed25519

the echo test:

      containers:
      - args:
        - client::::sftp
        env:
        - name: sftp
          value: "1"
        image: atmoz/sftp
        command:
        - "echo hi"
        imagePullPolicy: IfNotPresent
        name: sftp
        ports:
        - containerPort: 22
          name: sftp
          protocol: TCP
        resources: {}
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /home/client/sftp
          name: sftp
        - mountPath: /home/client/.ssh/keys
          name: sftp-public-keys
        - mountPath: /etc/ssh
          name: ssh-host-ed25519

Any ideas?

-- Idan
containers
google-kubernetes-engine
kubernetes
sftp
ssh

2 Answers

9/11/2019

"Not sure if you're still looking for a way to get host keys to persist, but mounting host key secrets into their relevant /etc/ssh/ files seems to work for me, eg."

kind: Deployment
...
spec:
  template:
    spec:
      #secrets and config
      volumes:
      ...
      - name: sftp-host-keys
        secret:
          secretName: sftp-host-keys
          defaultMode: 0600
      ...
      containers:
        #the sftp server itself
        - name: sftp
          image: atmoz/sftp:latest
          ...
          volumeMounts:
          - mountPath: /etc/ssh/ssh_host_ed25519_key
            name: sftp-host-keys
            subPath: ssh_host_ed25519_key
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_ed25519_key.pub
            name: sftp-host-keys
            subPath: ssh_host_ed25519_key.pub
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_rsa_key
            name: sftp-host-keys
            subPath: ssh_host_rsa_key
            readOnly: true
          - mountPath: /etc/ssh/ssh_host_rsa_key.pub
            name: sftp-host-keys
            subPath: ssh_host_rsa_key.pub
            readOnly: true
            ...
---
apiVersion: v1
kind: Secret
metadata:
  name: sftp-host-keys
  namespace: sftp
stringData:
  ssh_host_ed25519_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
  ssh_host_ed25519_key.pub: |
    ssh-ed25519 AAAA...
  ssh_host_rsa_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  ssh_host_rsa_key.pub: |
    ssh-rsa AAAA...
type: Opaque
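
For reference, the same subPath technique also works with the ConfigMap from the question (a Secret is the more conventional place for private key material, but the mechanics are identical). A minimal sketch, assuming the ConfigMap keeps the name ssh-host-rsa and the key ssh_host_rsa_key as posted; overlaying the single file this way leaves the rest of /etc/ssh (sshd_config, the other generated host keys) untouched, whereas mounting a volume at /etc/ssh shadows the whole directory:

kind: Deployment
...
spec:
  template:
    spec:
      containers:
      - name: sftp
        image: atmoz/sftp
        ...
        volumeMounts:
        # overlay only the one key file instead of the whole /etc/ssh directory
        - mountPath: /etc/ssh/ssh_host_rsa_key
          name: ssh-host-rsa
          subPath: ssh_host_rsa_key
          readOnly: true
      volumes:
      # ConfigMap from the question; defaultMode 0600 keeps the key owner-only
      - name: ssh-host-rsa
        configMap:
          name: ssh-host-rsa
          defaultMode: 0600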
-- baraka
Source: StackOverflow

9/9/2019

The error you are facing occurs because SSH requires specific permissions on the RSA key file [1].

The best option for you is to mount your ConfigMap as read-only. To do so, add the "readOnly: true" flag to your mounts. It should look like this [2]:

        volumeMounts:
        - mountPath: /home/client/sftp
          name: sftp
        - mountPath: /home/client/.ssh/keys
          name: sftp-public-keys
          readOnly: true
        - mountPath: /etc/ssh
          name: ssh-host-ed25519
          readOnly: true
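
If the key-file permissions from [1] are still a problem after the read-only mount, the ConfigMap volume's defaultMode can be tightened as well; the 420 in the question is decimal for 0644, which leaves the private key world-readable. A minimal sketch, reusing the volume from the question:

      volumes:
      - name: ssh-host-ed25519
        configMap:
          name: ssh-host-ed25519
          # 0600 instead of the default 0644 so only the owner can read the key
          defaultMode: 0600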

Also, the "SYS_ADMIN" capability [3] should look like this:

        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]

[1] https://unix.stackexchange.com/questions/257590/ssh-key-permissions-chmod-settings

[2] https://kubernetes.io/docs/concepts/storage/volumes/

[3] https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod

-- Armando Cuevas
Source: StackOverflow