I'm not able to run "make dev-deploy k8s multinode" successfully when configuring OpenStack-Helm

4/30/2018

I'm trying to deploy OpenStack-Helm and Contrail-Helm, but I'm stuck at "make dev-deploy k8s multinode", which fails during the deploy-kubelet role.
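For reference, I'm running the multinode sequence from the openstack-helm-infra developer docs (paths assume the default /opt checkout; adjust if yours differs):

# Run on the primary node from the openstack-helm-infra checkout.
cd /opt/openstack-helm-infra
make dev-deploy setup-host multinode
make dev-deploy k8s multinode

The second make target is the one that fails, with the following output: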

ok: [verne] => {
"out.stdout_lines": [
"",
"PLAY [all] *********************************************************************",
"",
"TASK [Gathering Facts] *********************************************************",
"ok: [/mnt/rootfs]",
"",
"TASK [deploy-kubelet : include_tasks] ******************************************",
"included: /opt/playbooks/roles/deploy-kubelet/tasks/support-packages.yaml for /mnt/rootfs",
"",
"TASK [deploy-kubelet : centos | installing epel-release] ***********************",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-kubelet : centos | installing SElinux support packages] ***********",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-kubelet : fedora | installing SElinux support packages] ***********",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-kubelet : installing ceph support packages] ***********************",
"",
"TASK [deploy-package : ubuntu | installing packages] ***************************",
"ok: [/mnt/rootfs] => (item=[u'ceph-common'])",
"",
"TASK [deploy-package : centos | installing packages] ***************************",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-package : fedora | installing packages] ***************************",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-kubelet : installing NFS support packages] ************************",
"",
"TASK [deploy-package : ubuntu | installing packages] ***************************",
"ok: [/mnt/rootfs] => (item=[u'nfs-common'])",
"",
"TASK [deploy-package : centos | installing packages] ***************************",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-package : fedora | installing packages] ***************************",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-kubelet : installing LinuxBridge support] *************************",
"",
"TASK [deploy-package : ubuntu | installing packages] ***************************",
"ok: [/mnt/rootfs] => (item=[u'bridge-utils'])",
"",
"TASK [deploy-package : centos | installing packages] ***************************",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-package : fedora | installing packages] ***************************",
"skipping: [/mnt/rootfs] => (item=[]) ",
"",
"TASK [deploy-kubelet : include_tasks] ******************************************",
"included: /opt/playbooks/roles/deploy-kubelet/tasks/hostname.yaml for /mnt/rootfs",
"",
"TASK [deploy-kubelet : DNS | Ensure node fully qualified hostname is set] ******",
"fatal: [/mnt/rootfs]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ipv4'\n\nThe error appears to have been in '/opt/playbooks/roles/deploy-kubelet/tasks/hostname.yaml': line 13, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: DNS | Ensure node fully qualified hostname is set\n ^ here\n"}",
"\tto retry, use: --limit @/opt/playbooks/kubeadm-aio-deploy-kubelet.retry",

Do I need to make any changes in the /opt/openstack-helm-infra/tools/gate/devel/multinode-vars.yaml file?

Below is my multinode-vars.yaml file:

kubernetes:
  network:
    default_device: enp129s0f1
    ipv4: 172.19.2.2
  cluster:
    cni: calico
    pod_subnet: 192.168.0.0/16
    domain: cluster.local
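To check whether that device actually exists and has an IPv4 address on the worker, I can run something like this (host alias as in the inventory below):

# Prints the interface and its IPv4 address if the interface exists;
# errors out with 'Device "enp129s0f1" does not exist.' otherwise.
ssh root@verne 'ip -4 -o addr show enp129s0f1'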

Below is my multinode-inventory.yaml file:

all:
  children:
    primary:
      hosts:
        jules:
          ansible_port: 22
          ansible_host: xxxx
          ansible_user: root
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    nodes:
      hosts:
        verne:
          ansible_port: 22
          ansible_host: xxxx
          ansible_user: root
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
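The same check can also be done through Ansible itself, with an ad-hoc setup call against this inventory (the filter value follows the ansible_<device> fact-naming convention):

# Dumps the gathered facts for that device on every host in the inventory;
# an empty result for a host means the device name does not match there.
ansible all -i multinode-inventory.yaml -m setup -a 'filter=ansible_enp129s0f1'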
-- vunnam prasad
kubernetes
kubernetes-helm
openstack

1 Answer

5/7/2018

I'm guessing that the default_device in your multinode-vars.yaml file is wrong: that device doesn't exist (or carries no IPv4 address) on your worker nodes, so the Ansible facts for it have no ipv4 entry, which is exactly what "'dict object' has no attribute 'ipv4'" means. Use an existing, configured device and you're good to go :)
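For example, if "ip -4 -o addr show" on the workers reports that the node IP lives on ens3 (a placeholder name here; use whatever your nodes actually report), the fix is just:

kubernetes:
  network:
    default_device: ens3  # replace with a device that really exists on the nodes

Once facts are gathered for a device that really has an IPv4 address, the ansible_<device> dict contains the ipv4 attribute the hostname task is looking for, and the play gets past that point.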

-- pgithaiga
Source: StackOverflow