Can't reach a pod IP address on a node host from another node host with OpenShift Origin on OpenStack

9/27/2017

My original question is available here, but I'm reaching out to the wider community:

https://github.com/openshift/origin/issues/16447

I set up a 4-node cluster as follows:

[masters]
ci-master-01.example.com openshift_public_hostname="ci-master-01.example.com" ansible_host="10.145.78.110"

[etcd]
ci-etcd-node-01.example.com ansible_host="10.145.78.113"

[nodes]
ci-master-01.example.com openshift_schedulable=False ansible_host="10.145.78.110"
ci-infra-node-01.example.com openshift_schedulable=False openshift_node_labels="{'region': 'infra', 'zone': 'default'}" ansible_host="10.145.78.112"
ci-primary-node-01.example.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}" ansible_host="10.145.78.111"

# Service Network CIDR
openshift_portal_net=172.30.0.0/16

# Pod Network CIDR
osm_cluster_network_cidr=10.128.0.0/14
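
For reference, an install with this inventory would be kicked off along these lines (the playbook path assumes a standard openshift-ansible release-3.6 checkout, and "inventory" is a placeholder for whatever file the above is saved as):

ansible-playbook -i inventory playbooks/byo/config.yml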

I used this inventory to install OpenShift Origin v3.6. The installation was successful, but I am trying a very simple network test before running an application.

oc version
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ci-master-01.example.com:443
openshift v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7

Actual:

On Master,

oc login

oc get endpoints
NAME               ENDPOINTS                                                
docker-registry    10.128.0.3:5000                                           
kubernetes         10.145.78.110:443,10.145.78.110:8053,10.145.78.110:8053   
registry-console   10.128.0.5:9090                                           
router             <none>

ping 10.128.0.3 --> Not reachable
ping 10.128.0.5 --> Not reachable

These two pods are running on a node host, and I am trying to reach them from the master.
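
To see whether the SDN itself is wired up on both hosts, checks like the following can be run (br0, vxlan0 and tun0 are the default openshift-sdn device names; the grep pattern is just the cluster network CIDR from the inventory):

# which pod subnet each host owns
oc get hostsubnet

# on the master and on the node: is the OVS bridge br0 with its vxlan0 port present?
ovs-vsctl show

# does each host have a route for the cluster network via tun0?
ip route | grep 10.128.0.0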

I also can't reach the services by name or by cluster IP:

oc get svc
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    172.30.115.147   <none>        5000/TCP                  6h
kubernetes         172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     6h
registry-console   172.30.189.195   <none>        9000/TCP                  6h
router             172.30.179.178   <none>        80/TCP,443/TCP,1936/TCP   6h

curl -v 172.30.115.147:5000
* About to connect() to 172.30.115.147 port 5000 (#0)
*   Trying 172.30.115.147...
* No route to host
* Failed connect to 172.30.115.147:5000; No route to host
* Closing connection 0
curl: (7) Failed connect to 172.30.115.147:5000; No route to host
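
Service cluster IPs are virtual, so they only work from hosts where kube-proxy has programmed the NAT rules for them. One quick check on the host running the curl is whether any iptables rules mention that cluster IP (the address is taken from the oc get svc output above):

iptables-save | grep 172.30.115.147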


curl -v docker-registry:5000
curl: (6) Could not resolve host: docker-registry; Unknown error
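
The short name docker-registry is only expected to resolve from inside the cluster DNS domain (i.e. from within pods). From the master host, the fully qualified name can be tested directly against the cluster DNS on port 8053 shown in the kubernetes endpoints above (this assumes the services live in the default project and the default cluster.local suffix):

dig +short docker-registry.default.svc.cluster.local @10.145.78.110 -p 8053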

Expected:

I should be able to reach these addresses.

Additional information:

I have followed all the troubleshooting techniques available here but got nowhere:

https://access.redhat.com/documentation/en-us/openshift_enterprise/3.1/html/cluster_administration/admin-guide-sdn-troubleshooting#debugging-local-networking
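
One check from that guide that is especially relevant here: openshift-sdn tunnels all pod-to-pod and host-to-pod traffic between nodes over VXLAN (UDP port 4789), so capturing on both hosts while repeating the ping shows whether packets leave one side and never arrive on the other (eth0 is a placeholder for whichever interface carries the 10.145.78.x addresses):

tcpdump -i eth0 -nn udp port 4789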

-- Venkat reddy
kubernetes
openflow
openshift-origin
openstack

1 Answer

9/6/2018

I happened to have similar issues, and it finally came down to AWS Security Group changes. After allowing the necessary ports, it worked like a charm.
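
Since the original question is on OpenStack rather than AWS, the equivalent there would be security group rules that allow the OpenShift ports between the nodes, in particular UDP 4789 for the SDN's VXLAN tunnel and 8053 (TCP and UDP) for cluster DNS on the master. Roughly, with the group name as a placeholder and assuming the nodes share the 10.145.78.0/24 network:

openstack security group rule create --protocol udp --dst-port 4789 --remote-ip 10.145.78.0/24 openshift-nodes
openstack security group rule create --protocol tcp --dst-port 8053 --remote-ip 10.145.78.0/24 openshift-nodes
openstack security group rule create --protocol udp --dst-port 8053 --remote-ip 10.145.78.0/24 openshift-nodes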

-- Arockiasmy K
Source: StackOverflow