The container starts in Google Cloud Shell but fails on Kubernetes Engine

7/4/2018

I'm a novice with Kubernetes, Docker and GCP, so sorry if the question is stupid and/or obvious.

I'm trying to create a simple gRPC server with HTTP(S) mapping, using the Google samples as an example. The issue is that my container starts from Google Cloud Shell with no complaints, but fails on Kubernetes Engine after deployment.
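For reference, diagnostic_server.py follows the standard gRPC Python server pattern from the samples; a hypothetical minimal sketch (the real service, method and message names differ) looks like this:

# diagnostic_server.py -- minimal sketch with made-up service/message names.
# Note: the generated diagnostic_pb2.py imports google.api.annotations_pb2,
# which is provided by the googleapis-common-protos package.
from concurrent import futures
import time

import grpc

import diagnostic_pb2
import diagnostic_pb2_grpc


class DiagnosticServicer(diagnostic_pb2_grpc.DiagnosticServicer):
    def Check(self, request, context):
        return diagnostic_pb2.CheckReply(status='ok')


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    diagnostic_pb2_grpc.add_DiagnosticServicer_to_server(DiagnosticServicer(), server)
    server.add_insecure_port('[::]:8000')
    server.start()
    print('Server is started')
    try:
        while True:
            time.sleep(3600)
    except KeyboardInterrupt:
        server.stop(0)
        print('Server is stopped')


if __name__ == '__main__':
    serve()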

In Google Cloud Shell:

git clone https://gitlab.com/myrepos/grpc.git
cd grpc
docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1 .

# Run the container "locally"
docker run --rm -p 8000:8000 gcr.io/project-id/python-grpc-diagnostic-server:v1
Server is started
^CServer is stopped

# Pushing the image to Container Registry
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1

# Deployment 
kubectl create -f grpc-diagnostic.yaml

In the Deployment details the 'diagnostic' container has "CrashLoopBackOff" status, and the following error appears in the logs:

File "/diagnostic/diagnostic_pb2.py", line 17, in <module>
    from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
ModuleNotFoundError: No module named 'google.api'

Could you please give me any idea why the same container starts in the shell but fails on Kubernetes Engine? Thanks.
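A quick sanity check like the following can confirm whether the module is importable inside the built image (same image tag as above assumed):

# Override the entrypoint and try the failing import directly in the built image
docker run --rm --entrypoint python \
    gcr.io/project-id/python-grpc-diagnostic-server:v1 \
    -c "from google.api import annotations_pb2; print('importable')"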

requirements.txt

grpcio
grpcio-tools
pytz
google-auth
googleapis-common-protos

Dockerfile

FROM gcr.io/google_appengine/python

# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
RUN virtualenv -p python3.6 /env

# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH

ADD . /diagnostic/

WORKDIR /diagnostic
RUN pip install -r requirements.txt

EXPOSE 8000

ENTRYPOINT ["python", "/diagnostic/diagnostic_server.py"]

grpc-diagnostic.yaml

apiVersion: v1
kind: Service
metadata:
  name: esp-grpc-diagnostic
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 80
    targetPort: 9000 # or 8000?
    protocol: TCP
    name: http2
  selector:
    app: esp-grpc-diagnostic
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: esp-grpc-diagnostic
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: esp-grpc-diagnostic
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--service=diagnostic.endpoints.project-id.cloud.goog",
          "--rollout_strategy=managed",
          "--backend=grpc://127.0.0.1:8000"
        ]
        ports:
          - containerPort: 9000
      - name: diagnostic
        image: gcr.io/project-id/python-grpc-diagnostic-server:v1
        ports:
          - containerPort: 8000
-- Dimaf
docker
google-cloud-platform
google-kubernetes-engine
kubernetes
python-3.x

1 Answer

7/11/2018

That was my stupid mistake. I had changed the image, but the image name and tag stayed the same, so the cluster kept using the old, broken image, assuming nothing had changed (with the default imagePullPolicy the node reuses the image it already has for that tag). The right way to redeploy code is to build an image with a new tag, for instance v1.01, and set the new image on the existing deployment, as described in the documentation. I had deleted the service and the deployment and recreated them, thinking I was starting from scratch, but I hadn't deleted the cluster.

Right way:

docker build -t gcr.io/project-id/python-grpc-diagnostic-server:v1.01 . 
gcloud docker -- push gcr.io/project-id/python-grpc-diagnostic-server:v1.01   
kubectl set image deployment/esp-grpc-diagnostic diagnostic=gcr.io/project-id/python-grpc-diagnostic-server:v1.01

Another way to pull updated images without changing the name is to change imagePullPolicy, which is set to IfNotPresent by default (more info in the Kubernetes documentation on images).
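For instance, a sketch of just the relevant container entry from the deployment above:

      - name: diagnostic
        image: gcr.io/project-id/python-grpc-diagnostic-server:v1
        # Always pull on pod (re)start, even if the node already has an image with this tag
        imagePullPolicy: Always
        ports:
          - containerPort: 8000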

-- Dimaf
Source: StackOverflow