Lots of connection logs after opening ports of a k8s service

8/3/2020

I'm using the AWS managed Kubernetes service (EKS), with Deployments and LoadBalancer Services.

(2 nodes, 1 LoadBalancer Service, 1 Deployment, 1 ReplicaSet, and 1 pod were used.)

However, when I add a port to the Service, a huge number of connections start hitting the port I opened. The log looks like this:

[17:12:21.843] Client Connected [/192.168.179.222:28607]
[17:12:21.843] Client Disconnected [/192.168.179.222:28607]
[17:12:21.864] Client Connected [/192.168.179.222:16888]
[17:12:21.864] Client Disconnected [/192.168.179.222:16888]
[17:12:21.870] Client Connected [/192.168.79.91:58902]
[17:12:21.870] Client Disconnected [/192.168.79.91:58902]
[17:12:22.000] Client Connected [/192.168.179.222:52060]
[17:12:22.000] Client Disconnected [/192.168.179.222:52060]
[17:12:23.118] Client Connected [/192.168.79.91:14650]
[17:12:23.119] Client Disconnected [/192.168.79.91:14650]

192.168.179.222 and 192.168.79.91 are my nodes' IPs, and the logs are from my pods.

I thought it was the AWS load balancer's health check, but the health check interval is 30 seconds, so that doesn't explain the volume. With so many of these lines, I can't see my real transaction logs.

How can I get rid of those connections? What is causing these logs?

--- add

NAME                                                 STATUS   ROLES    AGE   VERSION                INTERNAL-IP       EXTERNAL-IP      OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-179-222.ap-northeast-2.compute.internal   Ready    <none>   11d   v1.16.12-eks-904af05   192.168.179.222   ##########   Amazon Linux 2   4.14.181-142.260.amzn2.x86_64   docker://19.3.6
ip-192-168-79-91.ap-northeast-2.compute.internal     Ready    <none>   11d   v1.16.12-eks-904af05   192.168.79.91     ##########   Amazon Linux 2   4.14.181-142.260.amzn2.x86_64   docker://19.3.6

This is my node info; I'm fairly sure the IPs in the logs are node IPs. I have several processes in my pod, and every one of them is flooded with these connection logs.

-- YeSeul Bae
amazon-web-services
aws-eks
kubernetes

1 Answer

8/5/2020

What you are seeing is the result of scanners on the internet tirelessly probing for vulnerable applications.

To fix that and get cleaner logs, you can:

  1. Allow-list IPs on the security group, so that only specific IPs can connect to your service
  2. Put a WAF in front of the service to filter scanners out
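For option 1, Kubernetes can manage the security-group rules for you through the Service's `loadBalancerSourceRanges` field; on AWS this restricts the load balancer's security group to the listed CIDRs. A minimal sketch (the Service name `my-service` and the CIDR are placeholders, not from the question):

```shell
# Hypothetical sketch: only allow clients in 203.0.113.0/24 to reach the
# load balancer. The AWS cloud provider translates this field into
# security-group ingress rules on the ELB.
kubectl patch service my-service \
  -p '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24"]}}'
```

Note that narrowing this to your own CIDR only makes sense if the service is not meant to be public.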

Also, you may want structured logs, where your legitimate log lines have a distinct format that can easily be spotted and separated from the garbage produced by scanners.
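Even without restructuring the logs, the noise in the question already has a fixed shape, so it can be filtered out when reading. A sketch (the sample lines and the `TX ...` transaction format are made up for the demo; in the cluster you would pipe `kubectl logs <pod>` into the same grep):

```shell
# Demo on sample lines (no cluster needed): drop the connect/disconnect
# noise and keep only real transaction lines.
printf '%s\n' \
  '[17:12:21.843] Client Connected [/192.168.179.222:28607]' \
  '[17:12:22.100] TX id=42 status=OK' \
  '[17:12:21.843] Client Disconnected [/192.168.179.222:28607]' \
  | grep -vE 'Client (Connected|Disconnected)'
# prints: [17:12:22.100] TX id=42 status=OK
```

In the cluster: `kubectl logs <pod> | grep -vE 'Client (Connected|Disconnected)'`.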

-- Ahmad Aabed
Source: StackOverflow