I am running the Jenkins master and the K8s master on the same server. Jenkins runs through Apache Tomcat (not on the K8s cluster). I have another server as the K8s worker node; both servers run CentOS 8. I have configured the Jenkins Kubernetes plugin (version 1.26.4), but while running a pipeline job I always get an error. Below is the Jenkins agent pod log from the K8s cluster.
[root@K8s-Master /]# kubectl logs -f pipeline-test-33-sj6tl-r0clh-g559d -c jnlp
Aug 08, 2020 8:37:21 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: pipeline-test-33-sj6tl-r0clh-g559d
Aug 08, 2020 8:37:21 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Aug 08, 2020 8:37:21 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 4.3
Aug 08, 2020 8:37:21 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/agent/remoting as a remoting work directory
Aug 08, 2020 8:37:21 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/agent/remoting
Aug 08, 2020 8:37:21 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://jenkins-server/jenkins/]
Aug 08, 2020 8:37:41 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: Failed to connect to http://jenkins-server/jenkins/tcpSlaveAgentListener/: jenkins-server
java.io.IOException: Failed to connect to http://jenkins-server/jenkins/tcpSlaveAgentListener/: jenkins-server
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:217)
at hudson.remoting.Engine.innerRun(Engine.java:693)
at hudson.remoting.Engine.run(Engine.java:518)
Caused by: java.net.UnknownHostException: jenkins-server
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1226)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1162)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:990)
at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:214)
... 2 more
The following settings are already enabled:
Manage Jenkins --> Configure Global Security --> Agents --> TCP port for inbound agents: Random [Enabled]
I am able to communicate successfully from Jenkins to the K8s master cluster (verified in the Jenkins Cloud configuration section).
On the K8s master, the pods in all namespaces are running and the weave-net CNI is installed, so I don't know what is causing the problem while the agent is provisioned through Jenkins.
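For anyone checking the same thing, the cluster-side state can be sanity-checked with a couple of standard kubectl commands; the agent pod name below is the one from the log above, so substitute your own failed agent pod:
# kubectl get pods --all-namespaces -o wide
# kubectl describe pod pipeline-test-33-sj6tl-r0clh-g559d
The first command should show the CoreDNS, kube-proxy and weave-net pods all Running; the Events section of the second often points at scheduling or networking problems.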
The /etc/hosts on both my Jenkins/K8s master and the K8s worker node is as follows.
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
75.76.77.5 jenkins-server jenkins-server.company.domain.com
75.76.77.6 k8s-node-1 k8s-node-1.company.domain.com
The output below was captured on the K8s worker node; it looks like there is no problem connecting to the Jenkins master from the worker node itself.
# curl -I http://jenkins-server/jenkins/tcpSlaveAgentListener/
HTTP/1.1 200
Server: nginx/1.14.1
Date: Fri, 28 Aug 2020 06:13:34 GMT
Content-Type: text/plain;charset=UTF-8
Connection: keep-alive
Cache-Control: private
Expires: Thu, 01 Jan 1970 00:00:00 GMT
X-Content-Type-Options: nosniff
X-Hudson-JNLP-Port: 40021
X-Jenkins-JNLP-Port: 40021
X-Instance-Identity: MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnkgz8Av2x8R9R2KZDzWm1K11O01r7VDikW48rCNQlgw/pUeNSPJu9pv7kH884tOE65GkMepNdtJcOFQFtY1qZ0sr5y4GF5TOc7+U/TqfwULt60r7OQlKcrsQx/jJkF0xLjR+xaJ64WKnbsl0AiZhd8/ynk02UxFXKcgwkEP2PGpGyQ1ps5t/yj6ueFiPAHX2ssK8aI7ynVbf3YyVrtFOlqhnTy11mJFoLAZnpjYRCJsrX5z/xciVq5c2XmEikLzMpjFl0YBAsDo7JL4eBUwiBr64HPcSKrsBBB9oPE4oI6GkYUCAni8uOLfzoNr9B1eImaETYSdVPdSKW/ez/OeHjQIDAQAB
X-Jenkins-Agent-Protocols: JNLP4-connect, Ping
X-Remoting-Minimum-Version: 3.14
# curl http://jenkins-server:40021/
Jenkins-Agent-Protocols: JNLP4-connect, Ping
Jenkins-Version: 2.235.3
Jenkins-Session: 4455fd45
Client: 75.76.77.6
Server: 75.76.77.5
Remoting-Minimum-Version: 3.14
It looks like Kubernetes DNS is not resolving the name, so any pointers to resolve this problem would help. Thanks.
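To double-check that suspicion from inside the cluster itself (rather than from the node), a throwaway pod can attempt the same lookup; busybox:1.28 is just a small image that ships nslookup, so treat the image and tag as an assumption:
# kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup jenkins-server
Note that the /etc/hosts entries on the nodes are not visible inside pods, so the pods depend entirely on whatever the nameserver in their own /etc/resolv.conf (normally the cluster DNS service) can resolve.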
It was a Kubernetes DNS resolution issue. With the help of the following link - https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution - I created the dnsutils pod (dnsutils.yaml) and found that my K8s cluster pods were returning the error "connection timed out; no servers could be reached" for the command below.
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
So I uninstalled and re-installed Kubernetes version v1.19.0, and now everything is working fine. Thanks!!!
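For reference, the debugging steps from that page boil down to roughly the following; the dnsutils manifest URL is the one published in the Kubernetes docs and k8s-app=kube-dns is the default CoreDNS label, so adjust both if your cluster differs:
# kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
# kubectl exec -i -t dnsutils -- nslookup kubernetes.default
# kubectl exec -i -t dnsutils -- cat /etc/resolv.conf
# kubectl get pods -n kube-system -l k8s-app=kube-dns
# kubectl logs -n kube-system -l k8s-app=kube-dns
On a healthy cluster the nslookup returns the address of the kubernetes.default service; in my case it timed out, and the CoreDNS pods and their logs are the next place to look before deciding on a re-install.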