I have a three-node Kubernetes cluster on Azure, with three Kafka brokers and one ZooKeeper instance. The Kafka brokers and ZooKeeper are publicly accessible through their corresponding LoadBalancer services.
Now I'm deploying a Schema Registry, and I would like it to be accessible from outside the Kubernetes cluster. I'm following the same steps as before, but I'm not able to reach the Schema Registry API from outside the cluster. If I curl the Schema Registry from within the Docker container, everything works fine, so I assume the Schema Registry is running properly. Here are my Schema Registry YAML descriptors:
Schema Registry deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: schema-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: schema-registry
    spec:
      containers:
      - env:
        - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
          value: zookeeper-cluster-ip:2181
        - name: SCHEMA_REGISTRY_HOST_NAME
          value: registry-0.schema.default.svc.cluster.local
        - name: SCHEMA_REGISTRY_LISTENERS
          value: http://0.0.0.0:8081
        name: schema-registry
        image: confluentinc/cp-schema-registry:5.0.1
        ports:
        - containerPort: 8081
      restartPolicy: Always
Schema Registry Service:
apiVersion: v1
kind: Service
metadata:
  name: schema-registry
  labels:
    name: schema-registry
spec:
  ports:
  - port: 8081
  selector:
    name: schema-registry
  type: LoadBalancer
After the service is deployed, a public IP is assigned:
kubectl get services
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
schema-registry   LoadBalancer   X.X.X.X      X.X.X.X       8081:30921/TCP   13m
so I run:
curl -X GET -i -H "Content-Type: application/vnd.schemaregistry.v1+json" http://X.X.X.X:8081/subjects
But I get no response. From within the container, the same curl command does return a response.
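For anyone debugging a similar setup, two quick checks help separate a Service/Deployment misconfiguration from a network problem in front of the cluster (the pod name is a placeholder):

# 1. Curl from inside the pod
kubectl exec -it <schema-registry-pod> -- curl -s http://localhost:8081/subjects

# 2. Tunnel straight to the Service, bypassing the load balancer
kubectl port-forward svc/schema-registry 8081:8081
curl -s http://localhost:8081/subjects

If both return the subject list but the external IP still gives no response, the Service definition is most likely fine and the problem sits in front of the cluster.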
The reason I want the Schema Registry to be accessible from outside the cluster is that we want to reach it from a NiFi cluster.
Is that possible?
A missing firewall rule turned out to be the cause. The Service and Deployment config was fine. Thanks anyway!
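For anyone hitting the same symptom on Azure: if the cluster nodes sit behind a network security group that does not allow the service port, opening it looks roughly like this (resource group and NSG names are placeholders; on AKS the cloud provider usually creates this rule for LoadBalancer services automatically, so this mostly applies to self-managed clusters on Azure VMs):

az network nsg rule create \
  --resource-group <my-resource-group> \
  --nsg-name <my-k8s-nsg> \
  --name allow-schema-registry \
  --priority 310 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 8081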
I ran into the same problem before and solved it by using the app label as the selector.
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: ***
  name: schema-registry
  labels:
    app: schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: schema-registry
  template:
    metadata:
      labels:
        app: schema-registry
    spec:
      containers:
      - name: schema-registry
        image: confluentinc/cp-schema-registry:5.3.0
        ports:
        - containerPort: 8081
        imagePullPolicy: Always
        env:
        - name: SCHEMA_REGISTRY_HOST_NAME
          value: schema-registry
        - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
          value: ***
        - name: SCHEMA_REGISTRY_LISTENERS
          value: http://0.0.0.0:8081
        command:
        - bash
        - -c
        - unset SCHEMA_REGISTRY_PORT; /etc/confluent/docker/run
Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  name: schema-registry
  namespace: ***
  labels:
    app: schema-registry
spec:
  selector:
    app: schema-registry
  ports:
  - port: 8081
  type: LoadBalancer
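Once the external IP is assigned, a quick verification (namespace and IP are placeholders):

kubectl get svc schema-registry -n <namespace>
curl -s http://<EXTERNAL-IP>:8081/subjects

As a side note, the unset SCHEMA_REGISTRY_PORT in the container command works around the SCHEMA_REGISTRY_PORT environment variable that Kubernetes injects for a Service named schema-registry; the Confluent image would otherwise try to interpret that variable as part of its configuration.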
Hope it helps!