I've created clusters using the kops command. For each cluster, I have to create a hosted zone and add its nameservers to the parent domain in the DNS provider. To create a hosted zone, I created a subdomain under the parent hosted zone in AWS (example.com) using the following command:
ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain1.example.com --caller-reference $ID | jq .DelegationSet.NameServers
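For reference, that jq filter prints the delegation set as a plain JSON array of nameserver hostnames, roughly like this (placeholder values; yours will differ):
[
  "ns-1.awsdns-1.co.uk",
  "ns-2.awsdns-2.org",
  "ns-3.awsdns-3.com",
  "ns-4.awsdns-4.net"
]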
The nameservers returned by the above command are then put into a newly created file, subdomain1.json, with the following content:
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "subdomain1.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1.awsdns-1.co.uk" },
          { "Value": "ns-2.awsdns-2.org" },
          { "Value": "ns-3.awsdns-3.com" },
          { "Value": "ns-4.awsdns-4.net" }
        ]
      }
    }
  ]
}
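As a side note, this file doesn't have to be written by hand. A minimal sketch that generates it from the delegation set of the zone created above (assuming ZONE_ID holds the Id returned by create-hosted-zone, in the form /hostedzone/XXXXXXXXXXXX):

# Read the delegation set of the new zone and build the change batch with jq
NS=$(aws route53 get-hosted-zone --id "$ZONE_ID" | jq '.DelegationSet.NameServers')
cat > subdomain1.json <<EOF
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "subdomain1.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": $(echo "$NS" | jq 'map({Value: .})')
      }
    }
  ]
}
EOF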
To get the parent-zone-id, I've used the following command:
aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.com.") | .Id'
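If you want the bare zone ID (without the /hostedzone/ prefix) in a shell variable for the next step, something like this works (PARENT_ZONE_ID is just an illustrative name):

PARENT_ZONE_ID=$(aws route53 list-hosted-zones \
  | jq -r '.HostedZones[] | select(.Name=="example.com.") | .Id' \
  | cut -d'/' -f3)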
To apply the subdomain NS records to the parent hosted zone:
aws route53 change-resource-record-sets --hosted-zone-id <parent-zone-id> --change-batch file://subdomain1.json
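Optionally, you can confirm the delegation is in place before moving on; the change ID below comes from the ChangeInfo block printed by the previous command:

aws route53 get-change --id <change-id>
dig +short NS subdomain1.example.com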
Then I created a cluster using the kops command:
kops create cluster --name=subdomain1.example.com --master-count=1 --master-zones ap-southeast-1a --node-count=1 --zones=ap-southeast-1a --authorization=rbac --state=s3://example.com --kubernetes-version=1.11.0 --yes
I'm able to create a cluster, validate it and get its nodes. By using the same procedure, I created one more cluster (subdomain2.example.com).
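For reference, the validation step looks roughly like this (same cluster name and state store as above):

kops validate cluster --name=subdomain1.example.com --state=s3://example.com
kubectl get nodes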
I've set aliases for the two clusters using these commands:
kubectl config set-context subdomain1 --cluster=subdomain1.example.com --user=subdomain1.example.com
kubectl config set-context subdomain2 --cluster=subdomain2.example.com --user=subdomain2.example.com
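A quick, illustrative check that the aliases point at the right clusters:

kubectl config get-contexts
kubectl --context=subdomain1 get nodes
kubectl --context=subdomain2 get nodes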
To set up federation between these two clusters, I've used these commands:
kubectl config use-context subdomain1
kubectl create clusterrolebinding admin-to-cluster-admin-binding --clusterrole=cluster-admin --user=admin
kubefed init interstellar --host-cluster-context=subdomain1 --dns-provider=aws-route53 --dns-zone-name=example.com
The kubefed init command should end by reporting that the federation control plane is up. But for me it just keeps showing "waiting for the federation control plane to come up...", and it never comes up. What might be the error?
I've followed this tutorial to create the 2 clusters:
https://gist.github.com/arun-gupta/02f534c7720c8e9c9a875681b430441a
There was a problem with the default image used for the federation API server and controller manager binaries. By default, the kubefed init command uses the image "gcr.io/k8s-jkns-e2e-gce-federation/fcp-amd64:v0.0.0-master_$Format:%h". This image is old and no longer available, so the federation control plane tries to pull it and fails. That is the error I was getting.
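You can see this on the host cluster: kubefed init installs the control plane into the federation-system namespace by default, and a failed pull typically shows up as ErrImagePull/ImagePullBackOff in the pod status. For example:

kubectl --context=subdomain1 get pods --namespace=federation-system
kubectl --context=subdomain1 describe pods --namespace=federation-system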
To rectify it, build an fcp image of your own, push it to a repository, and use that image in the kubefed init command. Below are the instructions to execute (run all of these commands from "$GOPATH/src/k8s.io/kubernetes/federation"):
docker load -i _output/release-images/amd64/fcp-amd64.tar
docker tag gcr.io/google_containers/fcp-amd64:v1.9.0-alpha.2.60_430416309f9e58-dirty REGISTRY/REPO/IMAGENAME[:TAG]
docker push REGISTRY/REPO/IMAGENAME[:TAG]
_output/dockerized/bin/linux/amd64/kubefed init myfed --host-cluster-context=HOST_CLUSTER_CONTEXT --image=REGISTRY/REPO/IMAGENAME[:TAG] --dns-provider="PROVIDER" --dns-zone-name="YOUR_ZONE" --dns-provider-config=/path/to/provider.conf
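(The fcp-amd64.tar loaded above comes from a prior release-image build of the federation code; the exact build target depends on the tree you are on.) Once kubefed init is re-run with the custom --image, the control plane pods on the host cluster should come up and the command should finish; a quick check, with HOST_CLUSTER_CONTEXT as above:

kubectl --context=HOST_CLUSTER_CONTEXT get pods,svc --namespace=federation-system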