I am testing Rancher 2 as a Kubernetes interface. Rancher 2 is launched with docker-compose, using image rancher/rancher:latest.
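For context, a minimal sketch of the docker-compose.yml I mean (the port mappings and data volume are assumptions, not my exact file):

version: '3'
services:
  rancher:
    image: rancher/rancher:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - rancher-data:/var/lib/rancher
volumes:
  rancher-data: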
Everything is OK for clusters, nodes and pods. Then I try to secure some load balancers with certificates. To do so, I install cert-manager from the catalog (Helm).
I have tried to follow this video tutorial (https://www.youtube.com/watch?v=xc8Jg9ItDVk) which explains how to create an issuer and a certificate, and how to link it to a load balancer.
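For reference, I believe the catalog install is roughly the Helm 2 CLI equivalent of the following (chart name and namespace are assumptions):

helm install stable/cert-manager --name cert-manager --namespace kube-system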
I create a file for the issuer:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: root@example.com
    privateKeySecretRef:
      name: letsencrypt-private-key
    http01: {}
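For completeness, here is a sketch of the Certificate that would reference this issuer (name, namespace, domain and ingress class are placeholders):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-com-tls
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  dnsNames:
  - example.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - example.com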
It's time to create the issuer.
sudo docker-compose exec rancher bash
I am connected to the Rancher container; kubectl and helm are installed.
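A quick client-side check that works even when tiller is down:

kubectl version --client
helm version -c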
I try to create the issuer:
kubectl create -f etc/cert-manager/cluster-issuer.yaml
error: unable to recognize "etc/cert-manager/cluster-issuer.yaml": no matches for certmanager.k8s.io/, Kind=ClusterIssuer
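This error suggests the cert-manager CRDs were never registered. That can be checked with:

kubectl get crd | grep certmanager
kubectl api-versions | grep certmanager

If both come back empty, the certmanager.k8s.io API group is simply not installed, which matches the error.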
Additional information:
When I run helm list:
Error: could not find a ready tiller pod
I list the pods to find tiller:
kubectl get pods
NAME READY STATUS RESTARTS AGE
tiller-deploy-6ffc49c5df-zbjg8 0/1 Pending 0 39m
I describe this pod:
kubectl describe pod tiller-deploy-6ffc49c5df-zbjg8
Name: tiller-deploy-6ffc49c5df-zbjg8
Namespace: default
Node: <none>
Labels: app=helm
name=tiller
pod-template-hash=2997057189
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"tiller-deploy-6ffc49c5df","uid":"46f74523-7f8f-11e8-9d04-0242ac1...
Status: Pending
IP:
Created By: ReplicaSet/tiller-deploy-6ffc49c5df
Controlled By: ReplicaSet/tiller-deploy-6ffc49c5df
Containers:
tiller:
Image: gcr.io/kubernetes-helm/tiller:v2.8.0-rancher3
Ports: 44134/TCP, 44135/TCP
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: default
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tiller-token-hbfgz (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
tiller-token-hbfgz:
Type: Secret (a volume populated by a Secret)
SecretName: tiller-token-hbfgz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m (x125 over 39m) default-scheduler no nodes available to schedule pods
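That last event is the real blocker: the scheduler sees no nodes at all. This can be confirmed with:

kubectl get nodes

If it returns nothing, kubectl is talking to a cluster with no registered nodes.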
This problem is a bit specific: Rancher / Kubernetes / docker-compose... If anyone has some ideas, you're welcome ;)
Thanks in advance!
I just found a piece of information that unblocks the situation.
The first step is to load the cluster's configuration, since kubectl inside the Rancher container was not pointing at my cluster. I was working on the default cluster, so I saved its kubeconfig to:
/root/.kube/config
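Concretely, a sketch of what this looks like inside the container (the kubeconfig content comes from the Rancher UI, Cluster -> Kubeconfig File):

# paste the kubeconfig exported from the Rancher UI into the default location
mkdir -p /root/.kube
vi /root/.kube/config
# kubectl should now target the right cluster
kubectl get nodes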
If it can help someone ;)