How to join a worker node to my GCP cluster

12/8/2020

I'm trying to create my own k8s cluster for training purposes. I have installed Kubernetes with kubeadm and my master node is ready:

NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   54s   v1.19.4

Now I'm trying to join my worker instance using the join command, with the token given at the end of kubeadm init, but I get this error when I run the command:

sudo kubeadm join my-master-node-ip-here:6443 --token xxxx.xxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
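
For reference, if the token has expired (kubeadm bootstrap tokens are only valid for 24 hours by default), a fresh join command can be printed on the master with a standard kubeadm command:

# On the master: create a new bootstrap token and print the full join command
sudo kubeadm token create --print-join-command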

The error:

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
error execution phase preflight: couldn't validate the identity of the API Server: Get "https://my-master-node-ip-here:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
To see the stack trace of this error execute with --v=5 or higher
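
The timeout suggests a connectivity problem rather than a token problem. As a quick sanity check, the API server port can be probed directly from the worker (generic tools, nothing kubeadm-specific):

# On the worker: check that TCP port 6443 on the master is reachable at all
nc -vz my-master-node-ip-here 6443

# Or hit the version endpoint directly; -k skips TLS verification
curl -k https://my-master-node-ip-here:6443/version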

I have used Weave for the pod network:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
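
As a sanity check that the network add-on itself came up, the Weave pods can be listed on the master (assuming the standard name=weave-net label from the Weave manifest):

# Weave runs as a daemonset in kube-system; all pods should be Running
kubectl get pods -n kube-system -l name=weave-net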

Both instances were created with Terraform and are in the same VPC, called k8s-node:

network.tf

resource "google_compute_network" "vpc_network" {
  name = "k8s-node"
}

# A public IP address for the compute instance to use
resource "google_compute_address" "static" {
  name = "vm-public-address"
}
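
Worth noting: unlike GCP's auto-created default network, a custom VPC created like this starts with no ingress firewall rules, so the implied deny-ingress rule blocks even traffic between instances inside the VPC. The active rules can be listed with gcloud (assuming the gcloud CLI is configured for the project):

# List firewall rules attached to the k8s-node network; a fresh custom VPC shows none
gcloud compute firewall-rules list --filter="network:k8s-node"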

instance.tf

resource "google_compute_instance" "default" {
    name = var.vm_name
    machine_type = "e2-standard-2"
    zone = "europe-west1-b"

    boot_disk {
        initialize_params {
            image = "debian-cloud/debian-9"
        }
    }

    network_interface {
        network = var.network
        access_config {
            // Include this section to give the VM an external IP address
        }
    }

    metadata_startup_script = file("./scripts/bootstrap.sh")

    tags = ["node"]
}
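
instance.tf references two variables that aren't shown above; a variables.tf along these lines would be needed for the plan to apply (the definitions here are illustrative assumptions, not the exact file):

variable "vm_name" {
  description = "Name of the compute instance"
  type        = string
}

variable "network" {
  description = "Name of the VPC network to attach the instance to"
  type        = string
  default     = "k8s-node"
}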

It seems that the worker can't reach the master instance. Did I miss something in my configuration?

-- Kevin
google-kubernetes-engine
kubernetes
terraform
terraform-provider-gcp

1 Answer

12/9/2020

To solve the issue I added a firewall rule in Terraform and opened port 6443:

resource "google_compute_network" "vpc_network" {
  name = "k8s-node"
}

resource "google_compute_firewall" "default" {
  name    = "k8s-firewall"
  network = google_compute_network.vpc_network.name

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["80", "6443"]
  }

  source_tags = ["node"]
}
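
Note that source_tags = ["node"] makes the rule match traffic coming from instances tagged node in the same network, which covers both VMs here. If more cluster ports need opening later, the rule can be extended the same way; this sketch adds the kubelet and Weave ports from the standard Kubernetes/Weave port lists (the resource name is just illustrative):

resource "google_compute_firewall" "k8s_internal" {
  name    = "k8s-internal"
  network = google_compute_network.vpc_network.name

  allow {
    protocol = "tcp"
    ports    = ["6443", "10250", "6783"] # API server, kubelet, Weave control
  }

  allow {
    protocol = "udp"
    ports    = ["6783", "6784"] # Weave data path
  }

  source_tags = ["node"]
}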
-- Kevin
Source: StackOverflow