I have a private AWS S3-backed Docker registry running in a container on a Fedora 21 host. I'm trying to move that setup into Kubernetes.
My configs,
registry-service.yaml
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    name: registry
  name: registry
spec:
  ports:
  - name: registry
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    name: registry
registry-controller.yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  labels:
    name: registry
  name: registry
spec:
  replicas: 1
  selector:
    name: registry
  template:
    metadata:
      labels:
        name: registry
    spec:
      containers:
      - env:
        - name: SETTINGS_FLAVOR
          value: s3
        - name: AWS_BUCKET
          value: docker-registry
        - name: STORAGE_PATH
          value: /registry
        - name: AWS_KEY
          value: XXXXXXXXXXXXX
        - name: AWS_SECRET
          value: XXXXXXXXXXXXXX
        - name: SEARCH_BACKEND
          value: sqlalchemy
        image: registry
        name: registry
        ports:
        - containerPort: 5000
          protocol: TCP
Then running,
kubectl.sh create -f registry-service.yaml
services/registry
kubectl.sh create -f registry-controller.yaml
replicationcontrollers/registry
kubectl.sh get pods
NAME             READY     REASON    RESTARTS   AGE
registry-83icn   0/1       Running   13         7m
Checking the logs, it looks like a DNS issue; I'm not sure where to proceed from here.
kubectl logs registry-83icn
Error: 'tcp://10.0.173.63:5000' is not a valid port number.
What am I missing here? Do I need to configure DNS somewhere?
Your registry container is in a crash loop. I think the problem is that Kubernetes automatically injects environment variables of the form ${SERVICE_NAME}_PORT (among others) to emulate Docker links variables. Since your Service is named 'registry', Kubernetes sets REGISTRY_PORT=tcp://10.0.173.63:5000 in the pod. The registry image reads its own REGISTRY_PORT variable expecting a bare port number, so the injected value produces exactly the "is not a valid port number" error you're seeing: a name conflict.
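To make the collision concrete, here is a sketch of how Kubernetes derives the variable prefix from the service name (upper-case, dashes to underscores); the IP comes from your error message, not from a real lookup:

```shell
# Illustrative only: derive the Docker-links-style env var prefix
# from a Kubernetes Service name.
service_name="registry"
prefix=$(echo "$service_name" | tr 'a-z-' 'A-Z_')
# Kubernetes injects a variable like this into every container:
echo "${prefix}_PORT=tcp://10.0.173.63:5000"
# prints: REGISTRY_PORT=tcp://10.0.173.63:5000
# The registry image parses REGISTRY_PORT as a bare port number, so the
# injected tcp://... value makes it crash on startup.
```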
If you rename your registry Service to something like 'private-registry' (service names must be valid DNS labels, so underscores are not allowed), Kubernetes will inject PRIVATE_REGISTRY_PORT and friends instead, the conflict goes away, and I think it will work correctly.
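A minimal sketch of the renamed Service, assuming you keep the rest of the spec unchanged (the selector still targets your existing pods via their name: registry label):

```yaml
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    name: private-registry
  name: private-registry
spec:
  ports:
  - name: registry
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    name: registry
```

After recreating the Service under the new name and restarting the pod, the container will see PRIVATE_REGISTRY_PORT rather than REGISTRY_PORT, so the registry's own settings are no longer clobbered.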