Deploy GitLab with Helm. Nginx-ingress pods can't start

5/30/2018

I install the chart with:

helm install --name gitlab1 -f values.yaml gitlab/gitlab-omnibus

The pods fail to start, with this error:

no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"

I suspect this is an ABAC/RBAC issue, but I'm not sure how to fix it.

Logs from nginx pod:

# kubectl logs nginx-ndxhn --namespace nginx-ingress
[dumb-init] Unable to detach from controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] Child spawned with PID 7.
[dumb-init] Unable to attach to controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] setsid complete.
I0530 21:30:23.232676       7 launch.go:105] &{NGINX 0.9.0-beta.11 git-a3131c5 https://github.com/kubernetes/ingress}
I0530 21:30:23.232749       7 launch.go:108] Watching for ingress class: nginx
I0530 21:30:23.233708       7 launch.go:262] Creating API server client for https://10.233.0.1:443
I0530 21:30:23.234080       7 nginx.go:182] starting NGINX process...
F0530 21:30:23.251587       7 launch.go:122] no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"
[dumb-init] Received signal 17.
[dumb-init] A child with PID 7 exited with exit status 255.
[dumb-init] Forwarded signal 15 to children.
[dumb-init] Child exited with status 255. Goodbye.


# kubectl get svc -w --namespace nginx-ingress nginx
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                                   AGE
nginx     LoadBalancer   10.233.25.0   <pending>     80:32048/TCP,443:31430/TCP,22:31636/TCP   9m


# kubectl describe svc --namespace nginx-ingress nginx
Name:                     nginx
Namespace:                nginx-ingress
Labels:                   <none>
Annotations:              service.beta.kubernetes.io/external-traffic=OnlyLocal
Selector:                 app=nginx
Type:                     LoadBalancer
IP:                       10.233.25.0
IP:                       1.1.1.1
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  32048/TCP
Endpoints:                
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  31430/TCP
Endpoints:                
Port:                     git  22/TCP
TargetPort:               22/TCP
NodePort:                 git  31636/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>


# kubectl get pods --all-namespaces
NAMESPACE       NAME                                                   READY     STATUS             RESTARTS   AGE
default         gitlab1-gitlab-75576c4589-lnf56                        0/1       Running            2          11m
default         gitlab1-gitlab-postgresql-f66555d65-nqvqx              1/1       Running            0          11m
default         gitlab1-gitlab-redis-58cf598657-ksptm                  1/1       Running            0          11m
default         gitlab1-gitlab-runner-55d458ccb7-g442z                 0/1       CrashLoopBackOff   6          11m
default         glusterfs-9cfcr                                        1/1       Running            0          1d
default         glusterfs-k422g                                        1/1       Running            0          1d
default         glusterfs-tjtvq                                        1/1       Running            0          1d
default         heketi-75dcfb7d44-thxpm                                1/1       Running            0          1d
default         nginx-nginx-ingress-controller-775b5b9c6d-hhvlr        1/1       Running            0          2h
default         nginx-nginx-ingress-default-backend-7bb66746b9-mzgcb   1/1       Running            0          2h
default         nginx-pod1                                             1/1       Running            0          1d
kube-lego       kube-lego-58c9f5788d-pdfb5                             1/1       Running            0          11m
kube-system     calico-node-hq2v7                                      1/1       Running            3          2d
kube-system     calico-node-z4nts                                      1/1       Running            3          2d
kube-system     calico-node-z9r9v                                      1/1       Running            4          2d
kube-system     kube-apiserver-k8s-m1.me                               1/1       Running            4          2d
kube-system     kube-apiserver-k8s-m2.me                               1/1       Running            5          1d
kube-system     kube-apiserver-k8s-m3.me                               1/1       Running            3          2d
kube-system     kube-controller-manager-k8s-m1.me                      1/1       Running            4          2d
kube-system     kube-controller-manager-k8s-m2.me                      1/1       Running            4          1d
kube-system     kube-controller-manager-k8s-m3.me                      1/1       Running            3          2d
kube-system     kube-dns-7bd4d5fbb6-r2rnf                              3/3       Running            9          2d
kube-system     kube-dns-7bd4d5fbb6-zffvn                              3/3       Running            9          2d
kube-system     kube-proxy-k8s-m1.me                                   1/1       Running            3          2d
kube-system     kube-proxy-k8s-m2.me                                   1/1       Running            3          1d
kube-system     kube-proxy-k8s-m3.me                                   1/1       Running            3          2d
kube-system     kube-scheduler-k8s-m1.me                               1/1       Running            4          2d
kube-system     kube-scheduler-k8s-m2.me                               1/1       Running            4          1d
kube-system     kube-scheduler-k8s-m3.me                               1/1       Running            4          2d
kube-system     kubedns-autoscaler-679b8b455-pp7jd                     1/1       Running            3          2d
kube-system     kubernetes-dashboard-55fdfd74b4-6z8qp                  1/1       Running            0          1d
kube-system     tiller-deploy-75b7d95f5c-8cmxh                         1/1       Running            0          1d
nginx-ingress   default-http-backend-6679b97b47-w6cx7                  1/1       Running            0          11m
nginx-ingress   nginx-ndxhn                                            0/1       CrashLoopBackOff   6          11m
nginx-ingress   nginx-nk2jg                                            0/1       CrashLoopBackOff   6          11m
nginx-ingress   nginx-rz7xj                                            0/1       CrashLoopBackOff   6          11m

Logs from the runner pod:

# kubectl logs gitlab1-gitlab-runner-55d458ccb7-g442z
+ cp /scripts/config.toml /etc/gitlab-runner/
+ /entrypoint register --non-interactive --executor kubernetes
Running in system-mode.                            

ERROR: Registering runner... failed                 runner=tQtCbx5U status=couldn't execute POST against http://gitlab1-gitlab.default:8005/api/v4/runners: Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout
PANIC: Failed to register this runner. Perhaps you are having network problems

The PVCs are fine:

# kubectl get pvc
NAME                                STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
gitlab1-gitlab-config-storage       Bound     pvc-c957bd23-644f-11e8-8f10-4ccc6a60fcbe   1Gi        RWO            gluster-heketi   13m
gitlab1-gitlab-postgresql-storage   Bound     pvc-c964e7d0-644f-11e8-8f10-4ccc6a60fcbe   30Gi       RWO            gluster-heketi   13m
gitlab1-gitlab-redis-storage        Bound     pvc-c96f9146-644f-11e8-8f10-4ccc6a60fcbe   5Gi        RWO            gluster-heketi   13m
gitlab1-gitlab-registry-storage     Bound     pvc-c959d377-644f-11e8-8f10-4ccc6a60fcbe   30Gi       RWO            gluster-heketi   13m
gitlab1-gitlab-storage              Bound     pvc-c9611ab1-644f-11e8-8f10-4ccc6a60fcbe   30Gi       RWO            gluster-heketi   13m
gluster1                            Bound     pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe   5Gi        RWO            gluster-heketi   1d

# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
-- Ivan
gitlab
gitlab-omnibus
kubernetes
kubernetes-helm

1 Answer

5/31/2018

I suspect this is an ABAC/RBAC issue, but I'm not sure how to fix it.

You are correct, and the error message explains exactly what is wrong. There are two paths forward: you can grant the required permissions to the default ServiceAccount in the nginx-ingress namespace via a Role and RoleBinding, or you can switch the Deployment to use a ServiceAccount other than default so that only that Deployment receives the specific permissions it needs. I recommend the latter, though the former may be less typing.

A rough version of the Role and RoleBinding lives in the nginx-ingress repo, but it may need to be adapted to your cluster, including updating the apiVersion from v1beta1 to rbac.authorization.k8s.io/v1.
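As a minimal sketch of the dedicated-ServiceAccount approach (the names here are illustrative, and the rule list may need extending to match what your controller version actually watches):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
rules:
  # The controller must at least be able to read the default-http-backend
  # Service; endpoints/secrets/configmaps are commonly needed as well.
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress
subjects:
  - kind: ServiceAccount
    name: nginx-ingress
    namespace: nginx-ingress
```

After applying this, set `serviceAccountName: nginx-ingress` in the controller's pod spec; for the fix-the-default-account path, bind the Role to the `default` ServiceAccount instead. Note that ingress controllers usually also need some cluster-scoped read access (e.g. to Ingresses in other namespaces) via a ClusterRole, depending on how the chart scopes it.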

After that change has taken place, you'll need to delete the nginx-ingress Pods so they pick up the new permissions and rerun whatever initialization nginx performs during startup.
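Once the permissions are in place, you can confirm them with `kubectl auth can-i` and then delete the pods (names taken from your `kubectl get pods` output) so their controller recreates them:

```shell
# Verify the ServiceAccount from the error message can now read Services
# (substitute your new ServiceAccount name if you created one)
kubectl auth can-i get services \
  --as=system:serviceaccount:nginx-ingress:default \
  -n nginx-ingress

# Recreate the controller pods so they start with the new permissions
kubectl delete pod -n nginx-ingress nginx-ndxhn nginx-nk2jg nginx-rz7xj
```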


Separately, you will definitely want to fix this:

Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout

I can't offer more concrete advice without knowing more about your CNI setup and the state of the actual GitLab Pod, but an I/O timeout is certainly a very odd error for in-cluster communication.
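To narrow it down, one option is to probe the service from a throwaway pod and see whether the failure is DNS, routing, or the GitLab container simply not listening yet (the image and commands below are just one common way to do this):

```shell
# From inside the cluster: resolve the service name, then try the port
kubectl run -it --rm netcheck --image=busybox --restart=Never -- \
  sh -c 'nslookup gitlab1-gitlab.default && wget -qO- -T 5 http://gitlab1-gitlab.default:8005/'
```

Also note that your `kubectl get pods` output shows gitlab1-gitlab at 0/1 Running with 2 restarts, so the runner's timeout may simply mean GitLab itself hadn't finished booting; the omnibus container can take several minutes to become ready.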

-- mdaniel
Source: StackOverflow