I have a Kubernetes cluster with multiple Java microservices that need to connect to a remote Kafka cluster. All servers are in DigitalOcean and fully reachable over their private network. The Kafka cluster does not run on Kubernetes and is not part of the Kubernetes cluster.
I used kubeadm to launch the entire cluster, and this is the cluster information:
# kubectl cluster-info
Kubernetes master is running at https://10.132.113.68:6443
KubeDNS is running at https://10.132.113.68:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Cluster version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-18T23:58:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
And this command was used to set up the cluster network:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.132.113.68 --kubernetes-version stable-1.8
All the pods launched successfully:
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                        READY     STATUS      RESTARTS   AGE
default       datadog-agent-5cht2                                         1/1       Running     0          12d
default       datadog-agent-5r7rw                                         1/1       Running     0          12d
default       datadog-agent-b7t5q                                         1/1       Running     0          12d
default       vizix-services-7bdccb48c4-2q8js                             1/1       Running     0          19m
default       vizix-tools-cpr88                                           0/1       Completed   0          12d
kube-system   etcd-kubctl-s-2vcpu-4gb-nyc3-01-master                      1/1       Running     0          27d
kube-system   kube-apiserver-kubctl-s-2vcpu-4gb-nyc3-01-master            1/1       Running     0          27d
kube-system   kube-controller-manager-kubctl-s-2vcpu-4gb-nyc3-01-master   1/1       Running     0          27d
kube-system   kube-dns-6f4fd4bdf-f7ssn                                    3/3       Running     0          27d
kube-system   kube-flannel-ds-dm5w4                                       1/1       Running     0          27d
kube-system   kube-flannel-ds-ns58w                                       1/1       Running     0          27d
kube-system   kube-flannel-ds-prnvf                                       1/1       Running     1          27d
kube-system   kube-flannel-ds-xck8p                                       1/1       Running     0          27d
kube-system   kube-proxy-2xrhl                                            1/1       Running     0          27d
kube-system   kube-proxy-lnt9r                                            1/1       Running     0          27d
kube-system   kube-proxy-m74ms                                            1/1       Running     0          27d
kube-system   kube-proxy-vqdxt                                            1/1       Running     0          27d
kube-system   kube-scheduler-kubctl-s-2vcpu-4gb-nyc3-01-master            1/1       Running     0          27d
kube-system   kubernetes-dashboard-5bd6f767c7-7qp75                       1/1       Running     0          26d
The pod that needs to connect to Kafka can reach the Kafka host just fine using ping and telnet:
# kubectl exec -it vizix-services-7bdccb48c4-2q8js bash
bash-4.2# ping 10.132.123.177
PING 10.132.123.177 (10.132.123.177) 56(84) bytes of data.
64 bytes from 10.132.123.177: icmp_seq=1 ttl=63 time=0.540 ms
64 bytes from 10.132.123.177: icmp_seq=2 ttl=63 time=0.518 ms
64 bytes from 10.132.123.177: icmp_seq=3 ttl=63 time=0.432 ms
64 bytes from 10.132.123.177: icmp_seq=4 ttl=63 time=0.527 ms
^C
--- 10.132.123.177 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.432/0.504/0.540/0.045 ms
bash-4.2# telnet 10.132.123.177 9092
Trying 10.132.123.177...
Connected to 10.132.123.177.
Escape character is '^]'.
^CConnection closed by foreign host.
bash-4.2#
But the Java application cannot. When the container runs under plain Docker it connects normally, but when Kubernetes launches the pod, it cannot:
2018-05-30 01:25:06,993+0000 WARN [localhost-startStop-1] com.tierconnect.riot.commons.services.broker.KafkaPublisher:: -
Check if exists a connection to kafka server 10.132.123.177:9092 and services is able to publish to kafka.
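For context, the warning comes from the application's own publisher class, which is not shown here. As a rough, hypothetical sketch of where such a connection attempt happens, assume the service builds a standard Kafka producer from the KAFKA_SERVERS environment variable (the class name, topic, and timeout below are made up for illustration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // KAFKA_SERVERS is injected by the Deployment, e.g. "10.132.123.177:9092"
        String bootstrap = System.getenv("KAFKA_SERVERS");

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Fail fast instead of blocking for the default 60 seconds when the broker is unreachable.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 10000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() first fetches cluster metadata from the bootstrap server,
            // so it can fail even when a plain TCP connect to the same address succeeds.
            producer.send(new ProducerRecord<>("connectivity-test", "ping")).get();
            System.out.println("Published to " + bootstrap);
        }
    }
}

Note that telnet only proves the port accepts TCP connections; the producer additionally has to complete Kafka's metadata exchange, which is why the two checks can disagree.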
Is there anything in Kubernetes that could prevent specific application protocols from connecting from a pod to an external host?
This is the deployment YAML file for the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice1
  labels:
    app: services
spec:
  replicas: 1
  selector:
    matchLabels:
      app: services
  template:
    metadata:
      labels:
        app: services
    spec:
      containers:
      - name: microservice1
        image: random/java-image:v6.5.2
        env:
        - name: KAFKA_SERVERS
          value: "10.132.123.177:9092"
I solved this using a Service and a manually defined Endpoints object, so that connectivity to the external Kafka host is managed by Kubernetes:
---
kind: "Service"
apiVersion: "v1"
metadata:
  name: "kafka"
spec:
  ports:
    -
      name: "kafka"
      protocol: "TCP"
      port: 9092
      targetPort: 9092
      nodePort: 0
---
kind: "Endpoints"
apiVersion: "v1"
metadata:
  name: "kafka"
subsets:
  -
    addresses:
      -
        ip: "10.132.123.177"
    ports:
      -
        port: 9092
        name: "kafka"