I'm a newbie in Kubernetes. I created a Kubernetes cluster on Amazon EKS and I'm trying to set up multiple Kubernetes services to run multiple ASP.NET applications in one cluster, but I'm facing a weird problem.
Everything runs fine when there is only one service. But whenever I create a second service for the second application, it creates a conflict: sometimes the service 1 URL loads the service 2 application, sometimes it loads the service 1 application, and the same happens with the service 2 URL on a simple page reload.
I've tried both the Amazon Classic ELB (with a LoadBalancer service type) and the NGINX ingress controller (with a ClusterIP service type). The problem occurs with both approaches.
Both services and deployments are running on port 80. I even tried different ports for the two services and deployments to rule out a port conflict, but the problem remains.
I've checked the deployment and service status as well as the pod logs, and everything looks fine; there are no errors or warnings at all.
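For reference, these are roughly the commands I used to check (the pod name below is a placeholder):
kubectl get deployments
kubectl get svc
kubectl describe svc app1-svc
kubectl describe svc app2-svc
kubectl logs <app1-pod-name>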
Please guide me on how I can fix this. Here are the YAML files of both services for the NGINX ingress:
# Service 1 for deployment 1 (container port: 1120)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T14:54:21Z
  labels:
    run: load-balancer-example
  name: app1-svc
  namespace: default
  resourceVersion: "463919"
  selfLink: /api/v1/namespaces/default/services/app1-svc
  uid: a*****-****-****-****-**********c
spec:
  clusterIP: 10.100.102.224
  ports:
  - port: 1120
    protocol: TCP
    targetPort: 1120
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The second service:
# Service 2 for deployment 2 (container port: 80)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T10:13:33Z
  labels:
    run: load-balancer-example
  name: app2-svc
  namespace: default
  resourceVersion: "437188"
  selfLink: /api/v1/namespaces/default/services/app2-svc
  uid: 6******-****-****-****-************0
spec:
  clusterIP: 10.100.65.46
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Thanks
The problem is with the selector in the services. Both services have the same selector, which is why you are seeing this behavior: they both point to the same set of pods.
The set of Pods targeted by a Service is (usually) determined by a Label Selector
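As a quick check (using the service names from your manifests), listing the endpoints of both services should show them resolving to the same pod IPs:
kubectl get endpoints app1-svc app2-svc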
Since deployment 1 and deployment 2 are different (I assume), you should use different selectors in them and then expose the deployments. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nightfury1204/hello_server
        args:
        - serve
        ports:
        - containerPort: 8080
The two deployments above, nginx-deployment and hello-deployment, have different selectors, so exposing them through separate services will not cause them to collide.
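As a sketch (the service names and ports below are only illustrative), the corresponding services would each select one deployment's label, so each service routes only to its own pods:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP
  selector:
    app: nginx        # matches only the nginx-deployment pods
  ports:
  - port: 80
    targetPort: 80    # nginx container listens on 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: ClusterIP
  selector:
    app: hello        # matches only the hello-deployment pods
  ports:
  - port: 80
    targetPort: 8080  # hello_server container listens on 8080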
When you use kubectl expose deployment app1-deployment --type=ClusterIP --name=app1-svc to expose a deployment, the service gets the same selector as the deployment.
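Alternatively, exposing the two example deployments above with kubectl gives the same result (the ports here are illustrative), because each service inherits its deployment's selector:
kubectl expose deployment nginx-deployment --type=ClusterIP --name=nginx-svc --port=80 --target-port=80
kubectl expose deployment hello-deployment --type=ClusterIP --name=hello-svc --port=80 --target-port=8080
The resulting nginx-svc and hello-svc carry the selectors app=nginx and app=hello respectively, so their traffic no longer overlaps.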