I want to use SSH authentication with the Spring Boot Cloud Config Server. Until now, I've been running the config server in a Docker container on Docker Swarm with no issues.
My organization recently decided to move everything to OpenShift 3. I am trying to get the config server deployed, but I'm running into issues authenticating to GitLab over SSH. In the Docker image I used before, I simply copied the public and private SSH keys into /root/.ssh and it worked, but this approach fails entirely in my Fabric8-generated S2I image on OpenShift.
In OpenShift I created a generic Kubernetes secret called cfgssh consisting of the id_rsa and id_rsa.pub keys (base64 encoded). I then mounted the secret at ~/.ssh in my deployment config's container spec. This doesn't work; my guess is that the user the application runs as doesn't have read access to the mounted keys? It always ends with the config server crashing after an auth exception.
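For reference, one way such a secret could be created (hypothetical key paths; oc base64-encodes the file contents itself, so no manual encoding is needed):

```shell
oc create secret generic cfgssh \
  --from-file=id_rsa=/path/to/id_rsa \
  --from-file=id_rsa.pub=/path/to/id_rsa.pub
```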
I am able to run Spring Cloud Config Server locally and on Docker Swarm without any auth issues, so it's definitely not a config server configuration issue.
Has anyone else encountered/found a way around this?
My config is below.
Maven fabric8 deployment fragment:
spec:
  replicas: 4
  template:
    spec:
      containers:
      - env:
        - name: SPRING_PROFILES_ACTIVE
          value: qa
        volumeMounts:
        - name: cfgssh
          mountPath: ~/.ssh
          readOnly: true
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
            scheme: HTTPS
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
            scheme: HTTPS
          initialDelaySeconds: 60
          timeoutSeconds: 5
        resources:
          requests:
            memory: "64Mi"
          limits:
            memory: "256Mi"
        env:
        - name: JAVA_OPTIONS
          value: "-Xms64M -Xmx256M"
      volumes:
      - name: cfgssh
        secret:
          secretName: cfgssh
          items:
          - key: id_rsa
            path: id_rsa
          - key: id_rsa.pub
            path: id_rsa.pub
Assuming that it's using the SSH client in the container (you didn't specify the image that you are using), my guess is that whatever user you are running as is not root. You can check the container's default user by exec-ing into it:
$ kubectl exec -it <pod-name> -c <container-name> sh
$ whoami
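While exec'd in, it's also worth inspecting the mounted keys themselves (e.g. `ls -ln ~/.ssh`): Kubernetes mounts secret files as 0644 by default, and the SSH client refuses private keys that are group- or world-readable. The permission value 0600 can be checked the same way anywhere with GNU stat (a local sketch, not cluster-specific):

```shell
# Create a scratch file, give it the 0600 mode SSH requires for
# private keys, and confirm what stat reports.
tmp=$(mktemp)
chmod 600 "$tmp"
stat -c '%a' "$tmp"    # prints 600
rm -f "$tmp"
```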
Having said that, mounting on ~/.ssh should work (you can verify this when you exec into the pod/container). So I think you might have the wrong permissions on your id_rsa file (it needs to be 0600). Set that in your volumes section:
volumes:
- name: cfgssh
  secret:
    secretName: cfgssh
    defaultMode: 0600   # <= add this
    items:
    - key: id_rsa
      path: id_rsa
    - key: id_rsa.pub
      path: id_rsa.pub
Reference: secrets file permissions.
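As an alternative that sidesteps file permissions entirely, Spring Cloud Config Server can also read the SSH private key from configuration properties instead of ~/.ssh (property names per the Spring Cloud Config documentation; the GitLab URI below is a placeholder):

```yaml
spring:
  cloud:
    config:
      server:
        git:
          uri: git@gitlab.example.com:myorg/config-repo.git  # placeholder
          ignoreLocalSshSettings: true
          privateKey: |
            -----BEGIN RSA PRIVATE KEY-----
            ...key material...
            -----END RSA PRIVATE KEY-----
```

With this approach the key can still be sourced from the cfgssh secret, but injected as an environment variable or config map rather than a mounted file.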