I am building a simple gRPC example on Kubernetes in an on-premises environment.
When the Node.js service makes a request to pythonservice, pythonservice responds with "hello world", which is then displayed on a web page.
However, pythonservice is reachable through its ClusterIP, but not through http://pythonservice:8000.
Suspecting a problem with CoreDNS, I checked various things and ended up deleting the kube-dns Service in kube-system.
Also, when I look up pythonservice.default.svc.cluster.local with nslookup, I get an address that is different from pythonservice's ClusterIP. Sorry, I'm not good at English.
This is the Node.js code:
var setting = 'test';
var express = require('express');
var app = express();
const port = 80;

var PROTO_PATH = __dirname + '/helloworld.proto';
var grpc = require('grpc');
var protoLoader = require('@grpc/proto-loader');
var packageDefinition = protoLoader.loadSync(
    PROTO_PATH,
    {keepCase: true,
     longs: String,
     enums: String,
     defaults: true,
     oneofs: true
    });

// http://pythonservice:8000
// 10.109.228.152:8000
// pythonservice.default.svc.cluster.local:8000
// 218.38.137.28
var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;

function main(callback) {
  var client = new hello_proto.Greeter("http://pythonservice:8000",
                                       grpc.credentials.createInsecure());
  var user;
  if (process.argv.length >= 3) {
    user = process.argv[2];
  } else {
    user = 'world';
  }
  client.sayHello({name: user}, function(err, response) {
    console.log('Greeting:', response.message);
    setting = response.message;
  });
}

var server = app.listen(port, function () {});

app.get('/', function (req, res) {
  main();
  res.send(setting);
  //res.send(ip2);
  //main(function(result){
  //  res.send(result);
  //})
});
This is the YAML file for pythonservice:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: practice-dp2
spec:
  selector:
    matchLabels:
      app: practice-dp2
  replicas: 1
  template:
    metadata:
      labels:
        app: practice-dp2
    spec:
      hostname: appname
      subdomain: default-subdomain
      containers:
      - name: practice-dp2
        image: taeil777/greeter-server:v1
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: pythonservice
spec:
  type: ClusterIP
  selector:
    app: practice-dp2
  ports:
  - port: 8000
    targetPort: 8000
This is kubectl get all:
root@pusik-server0:/home/tinyos/Desktop/grpc/node# kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/practice-dp-55dd4b9d54-v4hhq   1/1     Running   1          68m
pod/practice-dp2-7d4886876-znjtl   1/1     Running   0          18h

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP    34d
service/nodeservice     ClusterIP   10.100.165.53    <none>        80/TCP     68m
service/pythonservice   ClusterIP   10.109.228.152   <none>        8000/TCP   18h

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/practice-dp    1/1     1            1           68m
deployment.apps/practice-dp2   1/1     1            1           18h

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/practice-dp-55dd4b9d54   1         1         1       68m
replicaset.apps/practice-dp2-7d4886876   1         1         1       18h
root@pusik-server0:/home/tinyos/Desktop/grpc/python# nslookup pythonservice.default.svc.cluster.local
Server:    127.0.1.1
Address:   127.0.1.1#53

Name:      pythonservice.default.svc.cluster.local
Address:   218.38.137.28
1. Answering the first question:

pythonservice's ClusterIP is accessible, but not http://pythonservice:8000

Please refer to Connecting Applications with Services.
The type of service/pythonservice is ClusterIP, which is only reachable from inside the cluster. If you are interested in exposing the service outside the cluster, please use the service type NodePort or LoadBalancer. According to the attached output, your application is accessible from within the cluster (ClusterIP service).
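As a minimal sketch, exposing pythonservice outside the cluster via NodePort could look like the manifest below. The nodePort value 30080 is only an illustration (it is not from your setup); if you omit it, Kubernetes assigns one from the default 30000-32767 range:

apiVersion: v1
kind: Service
metadata:
  name: pythonservice
spec:
  type: NodePort
  selector:
    app: practice-dp2
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30080   # optional; must fall in the default 30000-32767 range

With that, the service would be reachable at <any-node-ip>:30080 from outside the cluster, while pythonservice:8000 keeps working from inside the cluster.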
2. Answering the second question:

exec failed: container_linux.go:345: starting container process caused "exec: \"nslookup\": executable file not found in $PATH": unknown command terminated with exit code 126

means that the container in your pod probably doesn't have tools like nslookup installed, so please run a pod with the required tools in the same namespace and verify again:

kubectl run ubuntu --rm -it --image ubuntu --restart=Never --command -- bash -c 'apt-get update && apt-get -y install dnsutils && bash'

Then, from the shell inside that pod:

nslookup pythonservice.default.svc.cluster.local
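If cluster DNS is healthy, the name should resolve to the Service's ClusterIP, roughly like this (the DNS server address 10.96.0.10 is the usual default on kubeadm clusters and may differ in your setup):

Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      pythonservice.default.svc.cluster.local
Address:   10.109.228.152

The Server: 127.0.1.1 and the public address 218.38.137.28 in your output suggest that your lookup was answered by the node's local resolver rather than by the cluster DNS.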
-- Update

Please verify the state of all pods and services, especially in the kube-system namespace:

kubectl get nodes,pods,svc --all-namespaces -o wide

In order to start debugging, get more information about the particular problem; for CoreDNS, for example, use:

kubectl describe pod <coredns_pod> -n kube-system
kubectl logs <coredns_pod> -n kube-system
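Also note that you mentioned deleting the kube-dns Service in kube-system: cluster DNS cannot work without it, because pods point at that Service's ClusterIP in their /etc/resolv.conf. As a rough sketch, on a kubeadm-installed cluster that Service usually looks like the manifest below; the clusterIP must match the clusterDNS address configured in the kubelet (commonly 10.96.0.10), so verify that value before applying:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  selector:
    k8s-app: kube-dns      # matches the CoreDNS pods deployed by kubeadm
  clusterIP: 10.96.0.10    # must match the kubelet's clusterDNS setting
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53

Depending on your kubeadm version, the DNS add-on can also be recreated with kubeadm init phase addon coredns.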
Hope this helps.