Kubernetes: communicate between different namespaces

4/5/2020

I'm getting started with Kubernetes and I'm having trouble reaching my rabbitmq service inside a messaging namespace from my banking service inside the backend namespace. I know it's supposed to be "easy" according to the documentation: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

However, I've spent more than a day trying to figure out why I can't connect to the rabbitmq host.
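For reference, a Service in another namespace is reachable under `<service>.<namespace>`, or fully qualified as `<service>.<namespace>.svc.<cluster-domain>`. A small sketch of how that name is assembled, assuming the default cluster domain:

```shell
# Build the fully qualified DNS name of a Service in another namespace.
# "cluster.local" is the default cluster domain (an assumption; some
# clusters are configured with a different domain).
SERVICE=rabbit-ip-service
NAMESPACE=messaging
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$FQDN"   # the short form "${SERVICE}.${NAMESPACE}" also resolves in-cluster
```

From inside any pod in the cluster, resolution can be checked with something like `nslookup rabbit-ip-service.messaging` in a throwaway container.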

These are my yaml files:

banking-ip-service:

apiVersion: v1
kind: Service
metadata:
  name: banking-ip-service
  namespace: backend
spec:
  type: NodePort
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30080
      protocol: TCP
      name: "banking-api"
  selector:
    component: bank

banking deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: banking
  namespace: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      component: bank
  template:
    metadata:
      labels:
        component: bank
    spec:
      containers:
        - name: banking
          image: user/myownimage
          env:
            - name: URL
              # service DNS name (resolved by cluster DNS)
              value: rabbit-ip-service.messaging.svc.cluster.local
            - name: PORT
              value: "5672"
            - name: USER
              value: "guest"
            - name: PASSWORD
              value: "guest"
          ports:
            - containerPort: 8000
          resources:
            requests:
              memory: "64Mi"
              cpu: "25m"
            limits:
              memory: "128Mi"
              cpu: "50m"
      restartPolicy: Always
      imagePullSecrets:
        - name: regcred

rabbitmq-ip-service:

apiVersion: v1
kind: Service
metadata:
  name: rabbit-ip-service
  namespace: messaging
spec:
  type: ClusterIP
  ports:
    - port: 5672
      targetPort: 5672
      name: "rabbit-api"
    - port: 15672
      targetPort: 15672
      name: "rabbit-manager"
  selector:
    component: rabbitmq

rabbitmq pod (a bare Pod, not a Deployment):

apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq
  labels:
    component: rabbitmq
  namespace: messaging
spec:
  containers:
  - image: rabbitmq:3.5.4-management
    name: rabbitmq
    ports:
      - containerPort: 5672
        name: service
      - containerPort: 15672
        name: management
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
    volumeMounts:
    - name: config-volume
      mountPath: /etc/rabbitmq
  volumes:
  - name: config-volume
    configMap:
      name: rabbitmq-config
      items:
      - key: rabbitmq.conf
        path: rabbitmq.conf
      - key: enabled_plugins
        path: enabled_plugins

This is the output when running kubectl describe svc rabbit-ip-service -n messaging:

Name:              rabbit-ip-service
Namespace:         messaging
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"rabbit-ip-service","namespace":"messaging"},"spec":{"ports":[{"na...
Selector:          component=rabbitmq
Type:              ClusterIP
IP:                10.106.248.55
Port:              rabbit-api  5672/TCP
TargetPort:        5672/TCP
Endpoints:         172.17.0.5:5672
Port:              rabbit-manager  15672/TCP
TargetPort:        15672/TCP
Endpoints:         172.17.0.5:15672
Session Affinity:  None
Events:            <none>

If I hardcode the URL value inside the banking deployment, it doesn't work either:

env:
  - name: URL
    # cluster ip
    value: 10.106.248.55:5672
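For completeness, a sketch of how a client might assemble those env vars into a single AMQP URI (the amqp:// scheme and the exact assembly are assumptions; the banking code isn't shown here):

```shell
# Hypothetical: assemble an AMQP connection string the way the banking
# service might, from the env vars defined in its Deployment.
URL="rabbit-ip-service.messaging.svc.cluster.local"
PORT="5672"
USER="guest"
PASSWORD="guest"
AMQP_URI="amqp://${USER}:${PASSWORD}@${URL}:${PORT}"
echo "$AMQP_URI"
```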
-- alex
kubernetes

1 Answer

4/5/2020

I made it work by temporarily removing the volumes that were using the ConfigMaps:

apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq
  labels:
    component: rabbitmq
  namespace: messaging
spec:
  containers:
  - image: rabbitmq:3.5.4-management
    name: rabbitmq
    ports:
      - containerPort: 5672
        name: service
      - containerPort: 15672
        name: management
    resources:
      requests:
        memory: "64Mi"
        cpu: "50m"
      limits:
        memory: "128Mi"
        cpu: "100m"
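A likely root cause, for future readers (an assumption based on the fix above): mounting the ConfigMap at /etc/rabbitmq shadows the entire directory, and the new-style rabbitmq.conf format is only read by RabbitMQ 3.7+, while the image here is 3.5.4 (which expects an Erlang-terms rabbitmq.config). If the files are still needed, mounting them individually with subPath avoids replacing the whole directory, roughly:

```yaml
    # Sketch: mount only the individual files instead of shadowing /etc/rabbitmq.
    # (File names follow the original ConfigMap; adjust them to the config
    # format your RabbitMQ version actually reads.)
    volumeMounts:
    - name: config-volume
      mountPath: /etc/rabbitmq/rabbitmq.conf
      subPath: rabbitmq.conf
    - name: config-volume
      mountPath: /etc/rabbitmq/enabled_plugins
      subPath: enabled_plugins
```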
-- alex
Source: StackOverflow