I have sample apps running on my cluster: a webapp pod with three containers, each running a separate Spring Boot web service (employee, test1, and test2). The Service exposing this pod is shown below:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: webapp
  name: webappservice
spec:
  ports:
    - port: 8080
      nodePort: 30062
  type: NodePort
  selector:
    name: webapp
```
The pod spec is below (updated to include the whole context):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    name: webapp
spec:
  containers:
    - resources:
        limits:
          cpu: 0.5
      image: kube/employee
      imagePullPolicy: IfNotPresent
      name: wsemp
      ports:
        - containerPort: 8080
          name: wsemp
    - resources:
        limits:
          cpu: 0.5
      image: kube/test1
      imagePullPolicy: IfNotPresent
      name: wstest1
      ports:
        - containerPort: 8081
          name: wstest1
  imagePullSecrets:
    - name: myregistrykey
```
My assumption was that the web service runs on port 30062 on the node and that, depending on the mapping, I'd be able to access each web service, e.g. http://11.168.24.221:30062/employee and http://11.168.24.221:30062/test1/.
Separate logs from the employee and test1 containers are below:
s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/employee],methods=[GET]}" onto public java.util.List<employee.model.Employee> employee.controller.EmployeeController.getAll()
s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/test1/],methods=[GET]}" onto public java.util.List<model.Test1> test1.controller.Test1Controller.getAll()
The issue is that http://11.168.24.221:30062/employee hits the web service properly, but when I hit http://11.168.24.221:30062/test1/, it says the /test1/ mapping is not available, even though the logs above clearly show the mapping. The error message is: "Whitelabel Error Page. This application has no explicit mapping for /error, so you are seeing this as a fallback."
Am I doing anything wrong?
Your Service YAML shows that you are only exposing port 8080 as NodePort 30062, so requests to the NodePort only ever reach the employee container; the test1 container listens on 8081, which the Service never targets. It is possible to simply add another port entry (port: 8081, nodePort: 30063) to your existing configuration, but since your two services are separate containers anyway, you may prefer to create two separate Deployments and Services in Kubernetes: one for the employee service and one for the test1 service. That will allow you to develop, deploy, and test them separately. It is also generally not recommended to run multiple containers in a pod (with some exceptions) - see this.
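If you do want to keep the single pod, the multi-port variant could look like the sketch below, based on your existing Service YAML. Note that Kubernetes requires each port entry to have a unique name when a Service exposes more than one port; the names `employee` and `test1` here are my own choice.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: webapp
  name: webappservice
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
    # targetPort defaults to port, so 8080 -> employee container, 8081 -> test1
    - name: employee
      port: 8080
      nodePort: 30062
    - name: test1
      port: 8081
      nodePort: 30063
```

With this in place, http://11.168.24.221:30063/test1/ would reach the test1 container.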
Here are the two YAMLs for the Services. Note that I changed the names, labels, and selectors:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: employeeservice
  name: employeeservice
spec:
  ports:
    - port: 8080
      nodePort: 30062
  type: NodePort
  selector:
    app: employeeservice
```
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: test1service
  name: test1service
spec:
  ports:
    - port: 8081
      nodePort: 30063
  type: NodePort
  selector:
    app: test1service
```
You are not using Deployments at all. That is not recommended, because you won't benefit from Kubernetes' self-healing abilities, e.g. pods being replaced automatically when they become unhealthy.
Creating a Deployment is easy. Here are two YAMLs for Deployments that include your pod specs. Note that I changed the names to match the selectors from the Services above. I have set the replica count to 1, so only one pod will be maintained per Deployment, but you can easily scale up by setting a higher number.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: employeeservice-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employeeservice
  template:
    metadata:
      labels:
        app: employeeservice
    spec:
      containers:
        - resources:
            limits:
              cpu: 0.5
          image: kube/employee
          imagePullPolicy: IfNotPresent
          name: wsemp
          ports:
            - containerPort: 8080
              name: wsemp
      imagePullSecrets:
        - name: myregistrykey
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test1service-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test1service
  template:
    metadata:
      labels:
        app: test1service
    spec:
      containers:
        - resources:
            limits:
              cpu: 0.5
          image: kube/test1
          imagePullPolicy: IfNotPresent
          name: wstest1
          ports:
            - containerPort: 8081
              name: wstest1
      imagePullSecrets:
        - name: myregistrykey
```
Also note that your Services are reachable by name through the cluster DNS. With the YAMLs above, you should be able to query a service from within the cluster at http://employeeservice:8080/employee (the port is needed because the Service listens on 8080, not 80) instead of using the nodes' IP addresses. For access from outside the cluster you can use the NodePorts as specified, and would typically do that through some kind of load balancer that routes to all the nodes.