My k8s environment is deployed with minikube:
egrep -i 'vmx|svm' /proc/cpuinfo
vmx flags : vnmi invvpid ept_x_only ept_ad tsc_.......
systemctl show --property=Environment docker
Environment=HTTP_PROXY=http://172.16.1.135:3128/ HTTPS_PROXY=http://172.16.1.135:3128/ "NO_PROXY=localhost,127.0.0.1,\$(minikube ip)"
minikube version
minikube version: v1.16.0
commit: 617f26b52345843a63d1a0715c4abf6625cb8862
k get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-54d67798b7-k6t5x           1/1     Running   2          120m
etcd-minikube                      1/1     Running   2          120m
kube-apiserver-minikube            1/1     Running   2          120m
kube-controller-manager-minikube   1/1     Running   3          120m
kube-proxy-86pv4                   1/1     Running   1          96m
kube-scheduler-minikube            1/1     Running   2          120m
storage-provisioner                1/1     Running   5          120m
k logs -f kube-proxy-86pv4 -n kube-system
I0128 08:53:34.188328 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0128 08:53:34.188524 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
I0128 08:53:34.391356 1 server_others.go:258] Using ipvs Proxier.
I0128 08:53:34.392942 1 server.go:650] Version: v1.20.0
I0128 08:53:34.393378 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0128 08:53:34.393412 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0128 08:53:34.393483 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0128 08:53:34.393528 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0128 08:53:34.395556 1 config.go:315] Starting service config controller
I0128 08:53:34.397797 1 config.go:224] Starting endpoint slice config controller
I0128 08:53:34.397839 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0128 08:53:34.397979 1 shared_informer.go:240] Waiting for caches to sync for service config
I0128 08:53:34.498555 1 shared_informer.go:247] Caches are synced for service config
I0128 08:53:34.498572 1 shared_informer.go:247] Caches are synced for endpoint slice config
When practicing the 'Interactive Tutorial - Exposing Your App', I found that the NodePort is not accessible on my node:
k get svc
NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP   10.96.0.1     <none>        443/TCP          124m
kubernetes-bootcamp   NodePort    10.98.71.49   <none>        8080:30159/TCP   3m31s
curl 10.98.71.49:8080
curl: (7) Failed to connect to 10.98.71.49 port 8080: Connection refused
telnet 10.98.71.49 8080
Trying 10.98.71.49...
telnet: Unable to connect to remote host: No route to host
nc -nvv 10.98.71.49 8080
Ncat: Version 7.91 ( https://nmap.org/ncat )
NCAT DEBUG: Using system default trusted CA certificates and those in /etc/ssl/certs/ca-certificates.crt.
libnsock nsock_iod_new2(): nsock_iod_new (IOD #1)
libnsock nsock_connect_tcp(): TCP connection requested to 10.98.71.49:8080 (IOD #1) EID 8
libnsock nsock_trace_handler_callback(): Callback: CONNECT ERROR [Connection refused (111)] for EID 8 [10.98.71.49:8080]
Ncat: Connection refused.
lsof -i:30159
curl 127.0.0.1:30159
curl: (7) Failed to connect to 127.0.0.1 port 30159: Connection refused
curl $(minikube ip):30159
curl: (7) Failed to connect to 192.168.49.2 port 30159: Connection refused
In the 'Interactive Tutorial - Exposing Your App' environment itself, the NodePort is reachable. On my setup I walked through the checks that tutorial relies on: kube-proxy is normal, and kubelet is normal too (a sketch of what I checked follows).
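By 'normal' I mean that generic health checks along these lines came back clean (the pod name is from my cluster above):

k get pods -n kube-system | grep kube-proxy   # kube-proxy pod is Running
k logs kube-proxy-86pv4 -n kube-system        # no errors in the log (output shown above)
systemctl status kubelet                      # kubelet is active (running)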
journalctl -l -u kubelet
Hint: You are currently not seeing messages from other users and the system.
Users in groups 'adm', 'systemd-journal' can see all messages.
Pass -q to turn off this notice.
-- Journal begins at Sat 2020-12-12 19:12:36 CST, ends at Thu 2021-01-28 16:51:26 CST. --
-- No entries --
ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:01:c7:42:b8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags   Metric   Ref   Use   Iface
0.0.0.0         192.168.80.2    0.0.0.0         UG      100      0     0     eth0
172.17.0.0      0.0.0.0         255.255.0.0     U       0        0     0     docker0
192.168.49.0    0.0.0.0         255.255.255.0   U       0        0     0     br-1bb4185a80c7
192.168.80.0    0.0.0.0         255.255.255.0   U       100      0     0     eth0
kubernetes-bootcamp service config:
 1 │ # Please edit the object below. Lines beginning with a '#' will be ignored,
 2 │ # and an empty file will abort the edit. If an error occurs while saving this file will be
 3 │ # reopened with the relevant failures.
 4 │ #
 5 │ apiVersion: v1
 6 │ kind: Service
 7 │ metadata:
 8 │   creationTimestamp: "2021-01-28T09:13:52Z"
 9 │   labels:
10 │     app: kubernetes-bootcamp
11 │   name: kubernetes-bootcamp
12 │   namespace: default
13 │   resourceVersion: "3495"
14 │   uid: 471eca22-d276-45e5-b68f-aa21d461ea49
15 │ spec:
16 │   clusterIP: 10.111.216.90
17 │   clusterIPs:
18 │   - 10.111.216.90
19 │   externalTrafficPolicy: Cluster
20 │   ports:
21 │   - nodePort: 32129
22 │     port: 8080
23 │     protocol: TCP
24 │     targetPort: 8080
25 │   selector:
26 │     app: kubernetes-bootcamp
27 │   sessionAffinity: None
28 │   type: NodePort
29 │ status:
30 │   loadBalancer: {}
I also switched the kube-proxy mode to iptables, restarted kube-proxy, and ran iptables -F (the mode switch is sketched below); the result is still the same. I have no idea what is going on. Can someone help me?
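For reference, the mode switch was roughly the following; the ConfigMap name assumes minikube's kubeadm-style bootstrap:

k edit configmap kube-proxy -n kube-system    # set mode: "iptables" in the kube-proxy configuration
k delete pod kube-proxy-86pv4 -n kube-system  # recreate kube-proxy so it picks up the new mode
iptables -F                                   # flush filter-table rules (run on the node)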
ANSWER:
Based on the configs you provided, it looks like you are using the wrong NodePort value when trying to curl your Service. It should be:
curl $(minikube ip):32129
instead of:
curl $(minikube ip):30159
Notice that the port should be taken from the Service definition:
20 │   ports:
21 │   - nodePort: 32129
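If you prefer not to read it off the YAML, you can also query the assigned NodePort directly; for example (assuming the Service lives in the default namespace):

kubectl get svc kubernetes-bootcamp -o jsonpath='{.spec.ports[0].nodePort}'   # prints 32129
minikube service kubernetes-bootcamp --url                                    # prints http://<node-ip>:<nodePort>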
There are some recommended steps that should be taken in order to debug this and any other similar problem in the future.
In order to Debug Services, you should try to answer these questions (a command sketch follows the list):
- Does the Service exist? In your case we see that it does.
- Does the Service work by DNS name? One of the most common ways that clients consume a Service is through a DNS name.
- Does the Service work by IP? Assuming you have confirmed that DNS works, the next thing to test is whether your Service works by its IP address.
- Is the Service defined correctly? You should really double and triple check that your Service is correct and matches your Pod's port. Also:
  - Is the Service port you are trying to access listed in spec.ports[]?
  - Is the targetPort correct for your Pods (some Pods use a different port than the Service)?
  - If you meant to use a numeric port, is it a number (9376) or a string "9376"?
  - If you meant to use a named port, do your Pods expose a port with the same name?
  - Is the port's protocol correct for your Pods?
- Does the Service have any Endpoints? Check that the Pods you ran are actually being selected by the Service.
- Are the Pods working? Check again that the Pods are actually working.
- Is kube-proxy working? Confirm that kube-proxy is running on your Nodes.
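A minimal sketch of commands for walking through these checks. The names match the bootcamp example above; the label selectors (app=kubernetes-bootcamp, k8s-app=kube-proxy) are typical defaults and may need adjusting to your setup:

kubectl get svc kubernetes-bootcamp                         # does the Service exist?
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes-bootcamp   # does it resolve by DNS name?
kubectl run -it --rm iptest --image=busybox:1.28 --restart=Never -- wget -qO- 10.111.216.90:8080    # does it answer on its ClusterIP?
kubectl get endpoints kubernetes-bootcamp                   # does the Service have Endpoints?
kubectl get pods -l app=kubernetes-bootcamp -o wide         # are the selected Pods Running?
kubectl get pods -n kube-system -l k8s-app=kube-proxy       # is kube-proxy running on your Nodes?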
I assume that you are still learning Kubernetes. These steps will not only help you narrow down the issue but also teach you how to approach these kinds of problems.