The problem: the load balancer endpoint hangs and doesn't produce any logs that I can find.
Chrome displays: "This site can’t be reached. 35.213.138.112 took too long to respond."
I've been stuck on this for six hours and have struggled to find relevant documentation on Kubernetes with Ruby on Rails. I assume the problem is something simple to do with my Kubernetes config.
My Dockerfile:
FROM ruby:2.6.8
RUN apt-get update -qq && apt-get install -y nodejs npm postgresql-client
RUN npm install --global yarn
WORKDIR /src/app
COPY Gemfile ./
ADD . /src/app
COPY vendor/gems/* /src/app/vendor/gems/
ENV RAILS_ENV production
RUN bundle install --deployment --without development test
# ENV RAILS_ENV production
RUN bundle exec rake assets:precompile
# Start the main process.
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
My deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: "test-2"
  name: "test-2"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "test-2"
  template:
    metadata:
      labels:
        app: "test-2"
    spec:
      containers:
        - name: "test1-sha256-1"
          image: "gcr.io/path-to-image"
          ports:
            - containerPort: 3000
              protocol: TCP
          env:
            - name: PORT
              value: "3000"
            - name: RAILS_SERVE_STATIC_FILES
              value: "true"
      serviceAccountName: test-ksa
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
name: "test-2-hsdfsdf"
namespace: "default"
labels:
app: "test-2"
spec:
scaleTargetRef:
kind: "Deployment"
name: "test-2"
apiVersion: "apps/v1"
minReplicas: 1
maxReplicas: 5
metrics:
- type: "Resource"
resource:
name: "cpu"
targetAverageUtilization: 80
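One thing the Deployment doesn't set is a readiness probe, so Kubernetes marks the pod ready as soon as the container starts, and GKE's Ingress-based load balancing can derive its health check from a probe when one exists. This isn't necessarily what's causing the hang, but it makes that class of problem easier to spot. A minimal sketch; the /healthz path is a placeholder of mine, point it at a route the Rails app actually serves:

# Inside the Deployment's pod template (spec.template.spec):
containers:
  - name: "test1-sha256-1"
    image: "gcr.io/path-to-image"
    ports:
      - containerPort: 3000
        protocol: TCP
    readinessProbe:
      httpGet:
        path: /healthz          # placeholder path; use a route the app really responds to
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 10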
My pod log (seems to be working fine):
Puma starting in single mode...
* Puma version: 5.5.0 (ruby 2.6.8-p205) ("Zawgyi")
* Min threads: 5
* Max threads: 5
* Environment: production
* PID: 1
* Listening on http://0.0.0.0:3000
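Since the pod log looks healthy, one way to narrow things down (not part of the original setup, just a sketch) is to curl the Service from inside the cluster with a throwaway pod; the pod name curl-debug and the curlimages/curl image are my own choices:

apiVersion: v1
kind: Pod
metadata:
  name: curl-debug              # hypothetical name; delete the pod afterwards
  namespace: default
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:latest
      command: ["curl"]
      # Hits the Service by its cluster DNS name; swap in the pod IP and
      # port 3000 to test the Rails container directly.
      args: ["-v", "http://test-no-nginx-service.default.svc.cluster.local:80/"]

If that request succeeds (check with kubectl logs curl-debug) but the external IP still times out, the problem sits between the cloud load balancer and the nodes rather than in the app.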
My load balancer config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  creationTimestamp: "2021-10-02T02:39:22Z"
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    name: test-no-nginx
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:name: {}
        f:spec:
          f:externalTrafficPolicy: {}
          f:ports:
            .: {}
            k:{"port":80,"protocol":"TCP"}:
              .: {}
              f:port: {}
              f:protocol: {}
              f:targetPort: {}
          f:selector:
            .: {}
            f:app: {}
          f:sessionAffinity: {}
          f:type: {}
      manager: GoogleCloudConsole
      operation: Update
      time: "2021-10-02T02:39:22Z"
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"service.kubernetes.io/load-balancer-cleanup": {}
        f:status:
          f:loadBalancer:
            f:ingress: {}
      manager: kube-controller-manager
      operation: Update
      time: "2021-10-02T02:40:05Z"
  name: test-no-nginx-service
  namespace: default
  resourceVersion: "83916"
  uid: 9b560670-1f4d-41f7-8301-fd6b5c0e18d2
spec:
  clusterIP: 10.95.123.231
  clusterIPs:
    - 10.95.123.231
  externalTrafficPolicy: Cluster
  ports:
    - nodePort: 32663
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app: test-2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 35.233.233.11
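Stripped of the server-managed fields (managedFields, timestamps, status), that Service boils down to:

apiVersion: v1
kind: Service
metadata:
  name: test-no-nginx-service
  namespace: default
  labels:
    name: test-no-nginx
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: test-2
  ports:
    - port: 80
      protocol: TCP
      targetPort: 3000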
If I add an nginx container to my deployment config and don't set a targetPort on the load balancer Service, I can see "Welcome to nginx!" just fine (with targetPort omitted, it defaults to the port value, 80, which nginx listens on).
Config for the nginx container in deployment.yml:
containers:
  - name: "nginx-1"
    image: "nginx:latest"
Thank you for your time, it's very much appreciated.
I solved this by installing ingress-nginx with the install command from https://kubernetes.github.io/ingress-nginx/deploy/ and adding the resources below.
An Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: domain.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: test
                port:
                  number: 80
  ingressClassName: nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
And a NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app: test
  type: NodePort
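One thing to double-check if you copy this setup: the Ingress backend points at a Service named test, and that Service selects pods labelled app: test, while the Deployment earlier in the question labels its pods app: test-2. Presumably the labels were renamed along the way; if they weren't, the selector has to match the Deployment's pod template labels, roughly like this (keeping the original app: test-2 label, which is an assumption on my part):

apiVersion: v1
kind: Service
metadata:
  name: test                      # referenced by the Ingress backend above
  namespace: default
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  selector:
    app: "test-2"                 # must match spec.template.metadata.labels in the Deployment
  ports:
    - port: 80
      protocol: TCP
      targetPort: 3000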