I am very new to OpenWhisk and have some difficulties with the setup. The Nginx Pod is stuck in a CrashLoopBackOff because of an error inside the Pod:
2018/07/02 16:14:27 [emerg] 1#1: host not found in resolver "kube-dns.kube-system" in /etc/nginx/nginx.conf:41
nginx: [emerg] host not found in resolver "kube-dns.kube-system" in /etc/nginx/nginx.conf:41
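For reference, these messages can be pulled from the crashing Pod even though it never stays up long enough to exec into; something along these lines works (the namespace and Pod name below are placeholders for whatever kubectl get pods reports):

kubectl -n openwhisk get pods
kubectl -n openwhisk logs nginx-xxxxxxxxxx-xxxxx --previous
kubectl -n openwhisk describe pod nginx-xxxxxxxxxx-xxxxx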
I cannot jump into the Pod itself, but I ran a Docker container with the same image the Pod is using and looked at the nginx.conf inside it (see the sketch after the config below):
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
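For anyone who wants to reproduce that inspection, commands roughly like the following dump the files straight out of the image without starting nginx (the image name and tag are placeholders; the deployment may pin a different one):

docker run --rm --entrypoint cat nginx:1.12 /etc/nginx/nginx.conf
docker run --rm --entrypoint cat nginx:1.12 /etc/nginx/conf.d/default.conf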
When I looked into the conf.d directory, I found a single default.conf file in which the server_name was set to localhost:
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/log/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404  /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page  500 502 503 504  /50x.html;
    location = /50x.html {
        root  /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass  http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
I believe this is causing the issue and that the kube-dns service cannot resolve localhost.
However, I do not know how to resolve this issue, or at least work around it. Maybe I could set a static hostname for the Pod in the Nginx Deployment and enter that hostname into the nginx config?
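Either way, it may be worth checking first whether the kube-dns Service that the resolver line points at actually exists and has endpoints; a minimal check, assuming a standard kubeadm-style cluster:

kubectl -n kube-system get svc kube-dns
kubectl -n kube-system get endpoints kube-dns
kubectl -n kube-system get pods -l k8s-app=kube-dns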
Could someone provide a workaround, or even a fix?
Many thanks.
kubeadm reads and checks its environment, including proxy settings, from your currently running host OS session.
You can check whether a proxy has been set by executing the command below:
env | grep _proxy
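If a proxy is set in the session, the output will look something like this (the values below are only examples):

http_proxy=http://proxy.example.com:80/
https_proxy=https://proxy.example.com:443/
no_proxy=localhost,127.0.0.1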
In environments where a proxy server is required to access internet services such as Docker Hub or the Oracle Container Registry, you may need to perform several configuration steps to get Kubernetes to install and run correctly.
Ensure that the Docker engine startup configuration on each node in the cluster is configured to use the proxy server. For instance, create a systemd service drop-in file at /etc/systemd/system/docker.service.d/http-proxy.conf with the following contents:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
Replace http://proxy.example.com:80/ with the URL of your HTTP proxy service. If you also use an HTTPS proxy, replace https://proxy.example.com:443/ with its URL and port. After changing your Docker systemd service configuration, run the following commands:
systemctl daemon-reload; systemctl restart docker
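To confirm that the drop-in was actually picked up, something like the following should show the proxy variables in the daemon's environment (the exact output format varies between Docker versions):

systemctl show --property=Environment docker
docker info | grep -i proxy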
You may need to set the http_proxy or https_proxy environment variables to be able to run other commands on any of the nodes in your cluster. For example:
export http_proxy="http://proxy.example.com:80/"
export https_proxy="https://proxy.example.com:443/"
Disable the proxy configuration for localhost and for any node IPs in the cluster:
export no_proxy="127.0.0.1, 192.0.2.10, 192.0.2.11, 192.0.2.12"
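The same exclusions usually need to reach the Docker daemon too, so the drop-in from above might be extended with a NO_PROXY line like this (the proxy URL and node IPs are the example values used earlier):

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
Environment="NO_PROXY=localhost,127.0.0.1,192.0.2.10,192.0.2.11,192.0.2.12"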
These steps should be sufficient for the deployment to function normally. A transparent proxy that requires no configuration on the host and that ignores internal network requests can reduce the complexity of the configuration and may help avoid unexpected behavior.
Are you using the "OpenWhisk Deployment on Kubernetes" project (https://github.com/apache/incubator-openwhisk-deploy-kube)?
I suspect you may be hitting the Kubernetes bug described in the README.md:
However, multiple minor releases of Kubernetes, including 1.8.9 and 1.9.4, will not work for OpenWhisk due to bugs with volume mount subpaths (see [1]). This bug will surface as a failure when deploying the nginx container.
The fix is to use a version of Kubernetes that does not have the volume mount subpath bug.
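A quick way to check whether your cluster is on one of the affected releases (on kubectl versions of that era, --short prints just the client and server versions):

kubectl version --short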