I'm having a bit of a challenge trying to build my app, which uses the Go client-go library. The app provides an API that deploys a pod to a Kubernetes cluster. The app is able to deploy a pod successfully if I use an out-of-cluster Kubernetes config (i.e. minikube), which is found in $HOME/.kube/config. See the code below that determines which config to use depending on the config path:
package kubernetesinterface

import (
    "log"
    "os"

    core "k8s.io/api/core/v1"
    v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    _ "k8s.io/client-go/plugin/pkg/client/auth" // load auth packages
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

// KubeStruct - struct that uses interface type (useful when testing)
type KubeStruct struct {
    clientset kubernetes.Interface
}

// DeployPod - method that uses a KubeStruct type to deploy a simulator pod to a kubernetes cluster
func (kube *KubeStruct) DeployPod() bool {
    podObject := createPodObjects()
    _, err := kube.clientset.Core().Pods(podObject.Namespace).Create(podObject)
    if err != nil {
        log.Println("Failed to create simulator pod: ", err.Error())
        return false
    }
    return true
}

// GetNewClient - function to create a new clientset object to connect to a kubernetes cluster
func GetNewClient() (*KubeStruct, error) {
    var kubeConfig *rest.Config
    var err error
    configPath := os.Getenv("CONFIG_PATH")
    if configPath == "" {
        log.Println("Using in-cluster configuration")
        kubeConfig, err = rest.InClusterConfig()
    } else {
        log.Println("Using out of cluster config")
        kubeConfig, err = clientcmd.BuildConfigFromFlags("", configPath)
    }
    if err != nil {
        log.Println("Error getting configuration ", err.Error())
        return nil, err
    }
    // create clientset for kubernetes cluster
    client := KubeStruct{}
    client.clientset, err = kubernetes.NewForConfig(kubeConfig)
    if err != nil {
        log.Println("Error creating clientset for kubernetes cluster ", err.Error())
        return nil, err
    }
    return &client, nil
}

func createPodObjects() *core.Pod {
    return &core.Pod{
        ObjectMeta: v1.ObjectMeta{
            Name:      "podname",
            Namespace: "default",
            Labels: map[string]string{
                "app": "podname",
            },
        },
        Spec: core.PodSpec{
            Containers: []core.Container{
                {
                    Name:            "podname",
                    Image:           os.Getenv("IMAGE"),
                    ImagePullPolicy: core.PullIfNotPresent,
                    Command: []string{
                        "sleep",
                        "3600",
                    },
                },
            },
        },
    }
}
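For context, the API calls into this package roughly like the handler below. This is a simplified sketch rather than my exact code; the /deploy route, the port, and the import path are just placeholders:

package main

import (
    "log"
    "net/http"

    kube "example.com/myapp/kubernetesinterface" // placeholder import path
)

func main() {
    // "/deploy" is a placeholder route; the real API exposes more than this.
    http.HandleFunc("/deploy", func(w http.ResponseWriter, r *http.Request) {
        client, err := kube.GetNewClient()
        if err != nil {
            http.Error(w, "could not create kubernetes client: "+err.Error(), http.StatusInternalServerError)
            return
        }
        if !client.DeployPod() {
            http.Error(w, "failed to deploy simulator pod", http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusCreated)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}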
So if a value exists for CONFIG_PATH, the app works as expected and a pod is deployed to my minikube cluster. But when the same app is built on GCP, the build fails with the following error:
Step #1: 2019/03/13 21:25:20 Error getting configuration unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
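From the error, it looks like rest.InClusterConfig only works when the code is actually running inside a pod, where the kubelet injects KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT and mounts the service-account token; neither exists during the build step on GCP. One option would be to fall back to the default kubeconfig loading rules instead of failing, along the lines of the untested sketch below (the helper name loadConfig is just illustrative), but I'm not sure that's the right fix, which is why I'm asking:

package kubernetesinterface

import (
    "log"

    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
)

// loadConfig is a hypothetical helper: it prefers the in-cluster config and
// falls back to clientcmd's default loading rules (the KUBECONFIG env var,
// then $HOME/.kube/config) when running outside a cluster.
func loadConfig() (*rest.Config, error) {
    cfg, err := rest.InClusterConfig()
    if err == nil {
        return cfg, nil
    }
    log.Println("in-cluster config unavailable, falling back to kubeconfig:", err)
    return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
        clientcmd.NewDefaultClientConfigLoadingRules(),
        &clientcmd.ConfigOverrides{},
    ).ClientConfig()
}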
I have searched online unsuccessfully for a solution, so I thought I'd post here.