Communication between pods

7/16/2020

I am currently trying to set up Sentry, but I am having problems getting it running on OpenShift 3.11.

I have pods running for Sentry itself, PostgreSQL, Redis and memcached, but according to the log messages they are not able to communicate with each other.

sentry.exceptions.InvalidConfiguration: Error 111 connecting to 127.0.0.1:6379. Connection refused.

Do I need to create a network like in Docker, or should the pods (all in the same namespace) be able to talk to each other by default? I have admin rights for the whole project, so I can also work with the command line and not only the web interface.

Best wishes

EDIT: Adding the deployment config for Sentry and its service, and for the sake of simplicity the Postgres config and service. I also blanked out some unnecessary information with the keyword BLANK; if I went overboard, please let me know and I will look it up.

Deployment config for sentry:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  generation: 20
  labels:
    app: sentry
  name: sentry
  namespace: test
  resourceVersion: '506667843'
  selfLink: BLANK
  uid: BLANK
spec:
  replicas: 1
  selector:
    app: sentry
    deploymentconfig: sentry
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftWebConsole
      creationTimestamp: null
      labels:
        app: sentry
        deploymentconfig: sentry
    spec:
      containers:
        - env:
            - name: SENTRY_SECRET_KEY
              value: Iamsosecret
            - name: C_FORCE_ROOT
              value: '1'
            - name: SENTRY_FILESTORE_DIR
              value: /var/lib/sentry/files/data
          image: BLANK
          imagePullPolicy: Always
          name: sentry
          ports:
            - containerPort: 9000
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/sentry/files
              name: sentry-1
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir: {}
          name: sentry-1
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - sentry
        from:
          kind: ImageStreamTag
          name: 'sentry:latest'
          namespace: catcloud
        lastTriggeredImage: BLANK
      type: ImageChange
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: Deployment config has minimum availability.
      status: 'True'
      type: Available
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: replication controller "sentry-19" successfully rolled out
      reason: NewReplicationControllerAvailable
      status: 'True'
      type: Progressing
  details:
    causes:
      - type: ConfigChange
    message: config change
  latestVersion: 19
  observedGeneration: 20
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1

Service for sentry:

apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  labels:
    app: sentry
  name: sentry
  namespace: test
  resourceVersion: '505555608'
  selfLink: BLANK
  uid: BLANK
spec:
  clusterIP: BLANK
  ports:
    - name: 9000-tcp
      port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    deploymentconfig: sentry
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Deployment config for postgresql:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  generation: 10
  labels:
    app: postgres
    type: backend
  name: postgres
  namespace: test
  resourceVersion: '506664185'
  selfLink: BLANK
  uid: BLANK
spec:
  replicas: 1
  selector:
    app: postgres
    deploymentconfig: postgres
    type: backend
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftWebConsole
      creationTimestamp: null
      labels:
        app: postgres
        deploymentconfig: postgres
        type: backend
    spec:
      containers:
        - env:
            - name: PGDATA
              value: /var/lib/postgresql/data/sql
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
            - name: POSTGRESQL_USER
              value: sentry
            - name: POSTGRESQL_PASSWORD
              value: sentry
            - name: POSTGRESQL_DATABASE
              value: sentry
          image: BLANK
          imagePullPolicy: Always
          name: postgres
          ports:
            - containerPort: 5432
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: volume-uirge
              subPath: sql
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 2000020900
      terminationGracePeriodSeconds: 30
      volumes:
        - name: volume-uirge
          persistentVolumeClaim:
            claimName: postgressql
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - postgres
        from:
          kind: ImageStreamTag
          name: 'postgres:latest'
          namespace: catcloud
        lastTriggeredImage: BLANK
      type: ImageChange
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: Deployment config has minimum availability.
      status: 'True'
      type: Available
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: replication controller "postgres-9" successfully rolled out
      reason: NewReplicationControllerAvailable
      status: 'True'
      type: Progressing
  details:
    causes:
      - type: ConfigChange
    message: config change
  latestVersion: 9
  observedGeneration: 10
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1

Service for postgresql:

apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  labels:
    app: postgres
    type: backend
  name: postgres
  namespace: catcloud
  resourceVersion: '506548841'
  selfLink: /api/v1/namespaces/catcloud/services/postgres
  uid: BLANK
spec:
  clusterIP: BLANK
  ports:
    - name: 5432-tcp
      port: 5432
      protocol: TCP
      targetPort: 5432
  selector:
    deploymentconfig: postgres
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
-- HFinch
kubernetes
openshift
sentry

3 Answers

7/16/2020

For communication between pods, localhost or 127.0.0.1 does not work.

Get the IP of any pod using

kubectl describe pod <podname>

Use that IP in the other pod to communicate with the pod above.

Since pod IPs change whenever a pod is recreated, you should ideally use a Kubernetes Service, specifically of type ClusterIP, for communication between pods within the cluster.
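As a sketch of what that could look like in your Sentry deployment: point Sentry at the Service names instead of 127.0.0.1. The SENTRY_POSTGRES_HOST/SENTRY_REDIS_HOST variable names and the Service name redis are assumptions here (your Sentry image and Redis Service are not shown in the question), so adjust them to whatever your image expects:

env:
  - name: SENTRY_POSTGRES_HOST
    value: postgres        # name of the PostgreSQL Service, resolved via cluster DNS
  - name: SENTRY_POSTGRES_PORT
    value: '5432'
  - name: SENTRY_REDIS_HOST
    value: redis           # assumed name of a Redis Service; not shown in the question
  - name: SENTRY_REDIS_PORT
    value: '6379'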

-- Arghya Sadhu
Source: StackOverflow

7/16/2020

Pods (even in the same namespace) cannot reach each other via localhost, because each pod has its own network identity. You need to create a Service so that a pod can be reached reliably by other pods. In general, one pod connects to another pod via the latter's Service, as I illustrated below:

(illustration: pods connecting to each other through their Services)

The connection info would look something like <servicename>:<serviceport> (e.g. elasticsearch-master:9200) rather than localhost:port.

You can read https://kubernetes.io/docs/concepts/services-networking/service/ for further info on a service.

N.B.: localhost:port only works for containers running inside the same pod to connect to each other, just like how nginx connects to gravitee-mgmt-api and gravitee-mgmt-ui in my illustration above.
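For reference, a minimal sketch of a Service for the Redis pod could look like the manifest below. The Service name and the app: redis selector are assumptions, since the Redis deployment is not shown in the question; the selector must match the actual labels on the Redis pods.

apiVersion: v1
kind: Service
metadata:
  name: redis            # other pods can then reach Redis at redis:6379
  namespace: test
spec:
  selector:
    app: redis           # assumed pod label; adjust to match your Redis deployment
  ports:
    - name: 6379-tcp
      port: 6379
      protocol: TCP
      targetPort: 6379
  type: ClusterIP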

-- Lukman
Source: StackOverflow

7/17/2020

To me it looks like you did not configure Sentry correctly: you are not providing the Sentry pod with the connection details and credentials it needs to reach the PostgreSQL and Redis pods. You need something like the following environment block:

env:
    - name: SENTRY_SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: sentry-sentry
          key: sentry-secret
    - name: SENTRY_DB_USER
      value: "sentry"
    - name: SENTRY_DB_NAME
      value: "sentry"
    - name: SENTRY_DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: sentry-postgresql
          key: postgres-password
    - name: SENTRY_POSTGRES_HOST
      value: sentry-postgresql
    - name: SENTRY_POSTGRES_PORT
      value: "5432"
    - name: SENTRY_REDIS_PASSWORD
      valueFrom:
        secretKeyRef:
          name: sentry-redis
          key: redis-password
    - name: SENTRY_REDIS_HOST
      value: sentry-redis
    - name: SENTRY_REDIS_PORT
      value: "6379"
    - name: SENTRY_EMAIL_HOST
      value: "smtp"
    - name: SENTRY_EMAIL_PORT
      value: "25"
    - name: SENTRY_EMAIL_USER
      value: ""
    - name: SENTRY_EMAIL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: sentry-sentry
          key: smtp-password
    - name: SENTRY_EMAIL_USE_TLS
      value: "false"
    - name: SENTRY_SERVER_EMAIL
      value: "sentry@sentry.local"

For more info you can refer to this example where Sentry is configured:

https://github.com/maty21/sentry-kubernetes/blob/master/sentry.yaml
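The env block above pulls passwords from Secrets via secretKeyRef. If you do not already have such Secrets, a minimal sketch of one is shown below; the name sentry-postgresql and the key postgres-password are taken from the env block above, the value is only a placeholder, and the namespace matches the one used in the question.

apiVersion: v1
kind: Secret
metadata:
  name: sentry-postgresql
  namespace: test
type: Opaque
stringData:
  postgres-password: changeme   # placeholder; use your real PostgreSQL password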

-- Dashrath Mundkar
Source: StackOverflow