Single YAML file with multiple load balancers and pods

7/23/2021

I have a single yaml file in which I am attempting to deploy 2 load balancers, each with a single pod and container.


However, when I apply the following infrastructure.yml file, it creates both load balancers but only one pod / container (job-base); the api-base pod / container is never started. If I comment out the job-base Deployment and container, the api-base pod starts properly.

What am I missing? How do I get this single file to deploy all of the pods and services?

kind: ConfigMap
apiVersion: v1
metadata:
  name: api-base
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-base
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-base
  template:
    metadata:
      labels:
        app: api-base
    spec:
      containers:
      - name: api-base
        image: path/to/apiImage
        ports:
        - containerPort: 44360
          protocol: TCP
        resources:
          requests:
            memory: "0.4Gi"
            cpu: "0.2"
          limits:
            memory: "0.4Gi"
            cpu: "0.2"
---
apiVersion: v1
kind: Service
metadata:
  name: api-base
  labels:
    app: api-base
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  selector:
    app: api-base
  ports:
  - port: 80
    targetPort: 44360
    protocol: TCP
  type: LoadBalancer
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: job-base
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: job-base
spec:
  replicas: 1
  selector:
    matchLabels:
      app: job-base
  template:
    metadata:
      labels:
        app: job-base
    spec:
      containers:
      - name: job-base
        image: path/to/jobImage
        ports:
        - containerPort: 44360
          protocol: TCP
        resources:
          requests:
            memory: "0.4Gi"
            cpu: "0.2"
          limits:
            memory: "0.4Gi"
            cpu: "0.2"
---
apiVersion: v1
kind: Service
metadata:
  name: job-base
  labels:
    app: job-base
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  selector:
    app: job-base
  ports:
  - port: 80
    targetPort: 44360
    protocol: TCP
  type: LoadBalancer

Update:

If it is not possible to accomplish this in a single yaml file, I would also consider running the api and job containers in the same pod behind the same load balancer, as long as I can expose each container within the pod on a different port.
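For reference, running both containers in one pod behind a single Service could be sketched as below. This is only an illustration: the name combined-base, the second port 44361, and the Service port mapping are assumptions, not taken from the original file (containers in the same pod share a network namespace, so they cannot both listen on 44360).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: combined-base            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: combined-base
  template:
    metadata:
      labels:
        app: combined-base
    spec:
      containers:
      - name: api-base
        image: path/to/apiImage
        ports:
        - containerPort: 44360   # api container port
      - name: job-base
        image: path/to/jobImage
        ports:
        - containerPort: 44361   # must differ from the api port within the same pod
---
apiVersion: v1
kind: Service
metadata:
  name: combined-base
spec:
  selector:
    app: combined-base
  ports:
  - name: api
    port: 80
    targetPort: 44360
  - name: job
    port: 81
    targetPort: 44361
  type: LoadBalancer
```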

-- Sixthpoint
kubernetes
kubernetes-pod

1 Answer

8/3/2021

The namespace did not have enough resources allocated. When the second pod was provisioned, it would fail and terminate because the namespace's CPU or memory quota had been reached.

I discovered the issue by running the following command:

kubectl get deployment api-base -o yaml

The output included a message property indicating that the latest deployment had failed due to insufficient resources.

Increasing the namespace's ResourceQuota resolved the issue: https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
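As a sketch, a ResourceQuota large enough for both Deployments might look like the following. The object name, namespace, and exact values here are illustrative assumptions; the point is that the hard limits must cover the combined requests/limits of both pods (2 x 0.2 CPU and 2 x 0.4Gi memory) with some headroom.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: base-quota        # hypothetical name
  namespace: default      # adjust to the namespace the Deployments run in
spec:
  hard:
    requests.cpu: "1"     # covers 2 x 0.2 CPU requests with headroom
    requests.memory: 1Gi  # covers 2 x 0.4Gi memory requests with headroom
    limits.cpu: "1"
    limits.memory: 1Gi
```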

-- Sixthpoint
Source: StackOverflow