I am trying to reach an external Zookeeper/Kafka instance (outside the Kubernetes domain) from a Kubernetes pod running inside the Minikube VM, which basically doesn't work.
I have a Docker image that runs a Spring Boot application; on startup it tries to connect to a Kafka/Zookeeper instance on ports 9092 and 2181 respectively. I created a Service together with an Endpoints object that points to the external host IP, which should take care of the routing, but unfortunately it doesn't.
Here is the definition of the Service and its Endpoints:
apiVersion: v1
kind: Service
metadata:
  name: ext-kafka
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 2181
    name: zk
    protocol: TCP
    targetPort: 2181
  - port: 9092
    name: kafka
    protocol: TCP
    targetPort: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  name: ext-kafka
  namespace: default
subsets:
- addresses:
  # 192.168.99.1 is the external IP
  - ip: 192.168.99.1
  ports:
  - port: 2181
    name: zk
  - port: 9092
    name: kafka
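With this in place, the application inside the pod would address Kafka/Zookeeper via the Service name ext-kafka rather than a raw IP. A minimal sketch of the client configuration, assuming Spring Boot with spring-kafka auto-configuration (the property names below are assumptions and depend on the client and Spring versions in use):

# application.properties (sketch)
spring.kafka.bootstrap-servers=ext-kafka:9092
# only if the client still talks to Zookeeper directly; property name is hypothetical
my.zookeeper.connect=ext-kafka:2181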
#
# Here is how the Service and Endpoints look once installed in the cluster
#
[root@centos1 work]# kubectl get services
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
ext-kafka   None         <none>        2181/TCP,9092/TCP   2d
...
[root@centos1 work]# kubectl get endpoints
NAME        ENDPOINTS                              AGE
ext-kafka   192.168.99.1:2181,192.168.99.1:9092    2d

I checked the iptables rules on the Minikube VM, because it looked like packets were being rejected there. Simply cleaning up the rules does not resolve the issue, as they get recreated automatically behind the scenes.
$ iptables -L
....
Chain OUTPUT (policy ACCEPT)
target          prot opt source     destination
KUBE-FIREWALL   all  --  anywhere   anywhere
KUBE-SERVICES   all  --  anywhere   anywhere     /* kubernetes service portals */
...
Chain KUBE-FIREWALL (2 references)
target   prot opt source     destination
DROP     all  --  anywhere   anywhere     /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (1 references)
target   prot opt source     destination
....
REJECT   tcp  --  anywhere   anywhere     /* default/server-command: has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:30021 reject-with icmp-port-unreachable
REJECT   tcp  --  anywhere   10.0.0.240   /* default/server-command: has no endpoints */ tcp dpt:webcache
....
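To see whether any of these rules refer to the ext-kafka Service at all (the REJECT lines above belong to a different service, default/server-command), it can be quicker to grep the full rule set; a sketch, run inside the Minikube VM:

iptables-save | grep ext-kafka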
Another attempt to narrow the problem down:
I ran ncat -l 192.168.99.1 2181 --keep-open on the host where Zookeeper/Kafka is supposed to run and tried to connect from the Minikube VM with telnet 192.168.99.1 2181. I got "no route to host" ...
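For comparison, the same kind of check can also be run from inside the cluster with a throwaway pod; a sketch, assuming the busybox image with its nslookup and telnet applets:

# does the headless Service resolve to the external IP from inside a pod?
kubectl run nettest --rm -it --image=busybox --restart=Never -- nslookup ext-kafka
# can a pod open a TCP connection to the endpoint?
kubectl run nettest --rm -it --image=busybox --restart=Never -- telnet ext-kafka 2181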
So how can I solve this issue? How do I define the Service so that it gets past the iptables problem? (I used the Kubernetes build with BuildDate:"2017-05-10T15:48:59Z".)
BR
Since you are using Minikube, I think the issue is caused by the Zookeeper/Kafka IP address (192.168.99.1). You can see that Minikube itself runs on the 192.168.99.0/24 network by doing minikube ssh and executing ip addr:
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:97:6c:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
       valid_lft 798sec preferred_lft 798sec
    inet6 fe80::a00:27ff:fe97:6ced/64 scope link
       valid_lft forever preferred_lft forever
Therefore, if Zookeeper/Kafka is supposed to run outside the Minikube network, there may be a conflict with that IP, because 192.168.99.1 falls into the same 192.168.99.0/24 range. I would advise you to use a different IP (for example 192.168.200.xx) for this external service.
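A minimal sketch of how the Endpoints object could look after such a change, assuming the Kafka/Zookeeper host is reachable under an address outside the Minikube range (192.168.200.10 is only a placeholder):

apiVersion: v1
kind: Endpoints
metadata:
  name: ext-kafka
  namespace: default
subsets:
- addresses:
  - ip: 192.168.200.10   # placeholder: any routable IP outside 192.168.99.0/24
  ports:
  - port: 2181
    name: zk
  - port: 9092
    name: kafka

The Service definition itself can stay unchanged, since it is associated with the Endpoints purely by name.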