Set up Rancher Hello World on a 2-node bare metal cluster

4/29/2019

I am trying to set up a K8s cluster using Rancher on 2 bare metal servers running CentOS 7. I created the cluster using the Rancher UI and then added 2 nodes (a quick registration check is sketched below):

- Server 1, which has the etcd, controlplane and worker roles
- Server 2, which has the controlplane and worker roles
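For reference, this is roughly how I confirm that both nodes registered and picked up their roles (assuming kubectl is pointed at this cluster's kubeconfig):

# List both nodes with their roles, internal IPs and status
kubectl get nodes -o wide

# The RKE role labels should show up here as well
kubectl get nodes --show-labels | grep -E 'controlplane|etcd|worker'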

Everything gets set up OK. Then I try to deploy the rancher/hello-world image following the Rancher tutorial and configure an ingress on port 80.
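Both objects end up in the default namespace (full YAML further down); a quick listing to confirm they exist:

# Confirm the workload and the ingress were created
kubectl -n default get deployment hello
kubectl -n default get ingress hello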

If the pod runs on server 1 I can access it easily using the xip.io hostname built from server 1's IP address, because server 1's IP is the ingress endpoint of the cluster. When it runs on server 2, nginx returns a 504 Gateway Time-out error.
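A rough way I reproduce the failing path (label and names taken from the deployment YAML below; the pod IP is a placeholder to fill in):

# Find out which node the hello pod actually landed on
kubectl -n default get pods -o wide -l workload.user.cattle.io/workloadselector=deployment-default-hello

# Then, from server 1 (the node holding the ingress IP), try to reach the pod IP directly
curl -sv http://<pod-ip>:80/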

I already disabled firewalld after opening all ports.
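For reference, these are roughly the ports I had opened before disabling firewalld entirely (list based on the Rancher node-requirements docs for a Canal/flannel setup; worth double-checking for your Rancher version):

firewall-cmd --permanent --add-port=6443/tcp       # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp  # etcd
firewall-cmd --permanent --add-port=10250/tcp      # kubelet
firewall-cmd --permanent --add-port=8472/udp       # flannel VXLAN overlay
firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp  # ingress
firewall-cmd --reload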

I noticed that 2 Kubernetes services log some errors:

flannel:

E0429 14:20:13.625489 1 route_network.go:114] Error adding route to 10.42.0.0/24 via 192.168.169.46 dev index 2: network is unreachable
I0429 14:20:13.626679 1 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
I0429 14:20:13.626689 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0429 14:20:13.626934 1 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
I0429 14:20:13.626943 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.627279 1 iptables.go:137] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0429 14:20:13.627568 1 iptables.go:137] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.627849 1 iptables.go:137] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.4.0/24 -j RETURN
I0429 14:20:13.628111 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.628551 1 iptables.go:137] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE
I0429 14:20:13.629139 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0429 14:20:13.629356 1 iptables.go:125] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0429 14:20:13.630313 1 iptables.go:125] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0429 14:20:13.631531 1 iptables.go:125] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.4.0/24 -j RETURN
I0429 14:20:13.632717 1 iptables.go:125] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE
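The "network is unreachable" route error is what made me check basic node-to-node connectivity; a sketch of the checks I run on each node (the two node IPs are taken from the ingress status further down, and they appear to sit on different subnets):

ip addr show                  # which subnet is this node actually on?
ip route                      # is there a route covering the other node's IP?
ping -c 3 192.168.169.46      # run from server 2
ping -c 3 192.168.186.211     # run from server 1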

The cattle agent threw:

Timout Connecting to proxy" url="wss://ljanalyticsdev01.lojackhq.com.ar:16443/v3/connect"

but that was fixed once the node assumed the controlplane role.
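Since that one resolved itself I only mention it for completeness; the check I used to rule out plain connectivity from the node to the Rancher server (host and port taken from the log line above) was roughly:

# A healthy Rancher server should answer "pong" here
curl -k https://ljanalyticsdev01.lojackhq.com.ar:16443/ping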

Hello World deployment YAML:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    field.cattle.io/creatorId: user-qlsc5
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.169.46"],"port":80,"protocol":"HTTP","serviceName":"default:ingress-d1e1a394f61c108633c4bd37aedde757","ingressName":"default:hello","hostname":"hello.default.192.168.169.46.xip.io","allNodes":true}]'
  creationTimestamp: "2019-04-29T03:55:16Z"
  generation: 6
  labels:
    cattle.io/creator: norman
    workload.user.cattle.io/workloadselector: deployment-default-hello
  name: hello
  namespace: default
  resourceVersion: "303493"
  selfLink: /apis/apps/v1beta2/namespaces/default/deployments/hello
  uid: 992bf62e-6a32-11e9-92ae-005056998e1d
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-default-hello
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2019-04-29T03:54:58Z"
      creationTimestamp: null
      labels:
        workload.user.cattle.io/workloadselector: deployment-default-hello
    spec:
      containers:
      - image: rancher/hello-world
        imagePullPolicy: Always
        name: hello
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          procMount: Default
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        stdin: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        tty: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-04-29T03:55:16Z"
    lastUpdateTime: "2019-04-29T03:55:36Z"
    message: ReplicaSet "hello-6cc7bc6644" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2019-04-29T13:22:35Z"
    lastUpdateTime: "2019-04-29T13:22:35Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 6
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Load Balancer and Ingress YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: user-qlsc5
    field.cattle.io/ingressState: '{"aGVsbG8vZGVmYXVsdC94aXAuaW8vLzgw":"deployment:default:hello"}'
    field.cattle.io/publicEndpoints: '[{"addresses":["192.168.169.46"],"port":80,"protocol":"HTTP","serviceName":"default:ingress-d1e1a394f61c108633c4bd37aedde757","ingressName":"default:hello","hostname":"hello.default.192.168.169.46.xip.io","allNodes":true}]'
  creationTimestamp: "2019-04-27T03:51:08Z"
  generation: 2
  labels:
    cattle.io/creator: norman
  name: hello
  namespace: default
  resourceVersion: "303476"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/hello
  uid: b082e994-689f-11e9-92ae-005056998e1d
spec:
  rules:
  - host: hello.default.192.168.169.46.xip.io
    http:
      paths:
      - backend:
          serviceName: ingress-d1e1a394f61c108633c4bd37aedde757
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: 192.168.169.46
    - ip: 192.168.186.211
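The ingress status lists both node IPs (note they are on different subnets). Hitting each node directly with the ingress hostname as the Host header is how I tell which node's nginx controller can actually reach the backend pod:

curl -sv -H "Host: hello.default.192.168.169.46.xip.io" http://192.168.169.46/
curl -sv -H "Host: hello.default.192.168.169.46.xip.io" http://192.168.186.211/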
-- Santiago Russo
bare-metal-server
kubernetes
rancher

1 Answer

4/30/2019

Is your ingress controller running on the other node? I might restart your Docker service on both nodes and see if that flushes any old routes.
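Rough checks for the above (the namespace assumes the default RKE nginx ingress deployment; adjust if yours differs):

# Is a controller pod running on each node?
kubectl -n ingress-nginx get pods -o wide

# On each node, restart Docker to rebuild the flannel/iptables plumbing
sudo systemctl restart docker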

-- krome
Source: StackOverflow