I've built a CoreOS Kubernetes (1.0.7) cluster in AWS and everything seems to work swimmingly, except volumes. I manually create a volume in the same availability zone as my nodes before bringing up an RC that references it (the creation command is sketched after the manifest below):
apiVersion: v1
kind: ReplicationController
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins
          ports:
            - containerPort: 8080
            - containerPort: 50000
          volumeMounts:
            - name: "jenkins-data"
              mountPath: "/var/jenkins_home"
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 180
      volumes:
        - name: "jenkins-data"
          awsElasticBlockStore:
            volumeID: aws://us-east-1c/vol-abcd123
            fsType: ext4
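For reference, the volume itself is created ahead of time; what I do is roughly equivalent to the following AWS CLI call (the 20 GiB size and gp2 type here are just illustrative, and the volume ID returned is what goes into the volumeID field above):

# create the EBS volume in the same AZ as the worker nodes
aws ec2 create-volume \
    --availability-zone us-east-1c \
    --size 20 \
    --volume-type gp2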
The pod begins to come up but waits for the volume to mount. The volume attaches to the instance properly (I can see it in fdisk), but it fails to mount. With verbose logging turned up on the kubelet I see:
safe_format_and_mount[1504]: Running: fsck.ext4 -a /dev/xvdf
safe_format_and_mount[1508]: /dev/xvdf: clean, 11/655360 files, 79663/2621440 blocks
safe_format_and_mount[1513]: Running: mount -o discard,defaults /dev/xvdf /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1c/vol-abcd123
safe_format_and_mount[1516]: mount: mount point /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1c/vol-abcd123 does not exist
safe_format_and_mount[1519]: Disk /dev/xvdf looks formatted but won't mount. Giving up.
That error is accurate: the /var/lib/kubelet/plugins directory exists, but it is empty from there on down. It seems like I'm missing something that should be creating this mount-point directory. I've looked through previous issues and found many about safe_format_and_mount, but none about that directory. Any ideas?
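For completeness, this is roughly what I see when I check on the node itself (device name and paths taken from the kubelet log above):

# device is attached and formatted, as the fsck output above suggests
sudo fdisk -l /dev/xvdf

# the plugins directory exists but has nothing underneath it
ls -la /var/lib/kubelet/plugins

# so the mounts/aws/us-east-1c/vol-abcd123 mount point is missing,
# which matches the "mount point ... does not exist" error above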