I'm running Rancher HA in an air-gapped environment. After I run the docker command below, I get etcd output — is this an error?
docker run -v backup:/backup -p 8080:80 -e CATTLE_SYSTEM_DEFAULT_REGISTRY="172.18.3.9:5000" 172.18.3.9:5000/rancher/rancher:v2.1.5
Result:

2019-02-05 13:37:11.457132 I | mvcc: store.index: compact 13988
2019-02-05 13:37:11.462784 I | mvcc: finished scheduled compaction at 13988 (took 2.639034ms)
2019-02-05 13:42:11.466068 I | mvcc: store.index: compact 14319
2019-02-05 13:42:11.472226 I | mvcc: finished scheduled compaction at 14319 (took 3.076376ms)
E0205 13:46:27.432964 7 watcher.go:208] watch chan error: etcdserver: mvcc: required revision has been compacted
^C2019-02-05 13:47:05.124730 N | pkg/osutil: received interrupt signal, shutting down...
2019/02/05 13:47:05 [INFO] Received SIGTERM, cancelling
2019/02/05 13:47:05 [INFO] Shutting down ClusterController controller
2019/02/05 13:47:05 [INFO] Shutting down ClusterRegistrationTokenController controller
2019/02/05 13:47:05 [INFO] Shutting down ProjectRoleTemplateBindingController controller
2019/02/05 13:47:05 [INFO] Shutting down SourceCodeRepositoryController controller
2019/02/05 13:47:05 [INFO] Shutting down RoleBindingController controller
2019/02/05 13:47:05 [INFO] Shutting down PipelineExecutionController controller
2019/02/05 13:47:05 [INFO] Shutting down PipelineController controller
2019/02/05 13:47:05 [INFO] Shutting down SecretController controller
2019/02/05 13:47:05 [INFO] Shutting down SecretController controller
2019/02/05 13:47:05 [INFO] Shutting down SettingController controller
2019/02/05 13:47:05 [INFO] Shutting down SourceCodeCredentialController controller
2019/02/05 13:47:05 [ERROR] kube-controller-manager exited with error: interrupted
2019/02/05 13:47:05 [INFO] Shutting down AuthConfigController controller
2019/02/05 13:47:05 [INFO] Shutting down ClusterRoleController controller
2019/02/05 13:47:05 [INFO] Shutting down ListenConfigController controller
2019/02/05 13:47:05 [INFO] Shutting down ClusterRoleBindingController controller
2019/02/05 13:47:05 [INFO] Shutting down RoleController controller
2019/02/05 13:47:05 [INFO] Shutting down NodePoolController controller
2019/02/05 13:47:05 [INFO] Shutting down RoleTemplateController controller
2019/02/05 13:47:05 [INFO] Shutting down UserAttributeController controller
2019/02/05 13:47:05 [INFO] Shutting down GlobalRoleController controller
2019/02/05 13:47:05 [INFO] Shutting down TokenController controller
2019/02/05 13:47:05 [INFO] Shutting down ClusterRoleTemplateBindingController controller
2019/02/05 13:47:05 [INFO] Shutting down GlobalRoleBindingController controller
2019/02/05 13:47:05 [INFO] Shutting down UserController controller
2019/02/05 13:47:05 [INFO] Shutting down DynamicSchemaController controller
2019/02/05 13:47:05 [INFO] Shutting down NodeController controller
2019/02/05 13:47:05 [INFO] Shutting down ProjectController controller
2019/02/05 13:47:05 [INFO] Shutting down GroupController controller
2019/02/05 13:47:05 [INFO] Shutting down GroupMemberController controller
2019/02/05 13:47:05 [FATAL] context canceled
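For context, the shutdown messages only begin after the `^C` in the log, i.e. after the process received an interrupt from the terminal; the `mvcc: ... compaction` lines before it are etcd's routine scheduled housekeeping, not errors. A sketch of the same invocation run detached, so the server is not tied to the foreground terminal (the `-d`, `--restart`, and `docker logs` parts are assumptions on my side, not from the original command):

```shell
# Sketch, not verified in this environment: same image, registry override,
# volume and port mapping as above, but detached with a restart policy.
docker run -d --restart=unless-stopped \
  -v backup:/backup \
  -p 8080:80 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY="172.18.3.9:5000" \
  172.18.3.9:5000/rancher/rancher:v2.1.5

# Follow the logs without attaching the terminal to the process:
docker logs -f <container-id>
```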
My Rancher cluster config:
nodes:
  - address: 172.18.3.15 # hostname or IP to access nodes
    user: ubuntu1604rone # root user (usually 'root')
    role: [controlplane,etcd,worker] # K8s roles for node
    ssh_key_path: ~/.ssh/id_rsa # path to PEM file
  - address: 172.18.3.16
    user: ubuntu1604rtwo
    role: [controlplane,etcd,worker]
    ssh_key_path: ~/.ssh/id_rsa

addons: |-
  ---
  kind: Namespace
  apiVersion: v1
  metadata:
    name: cattle-system
  ---
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: cattle-admin
    namespace: cattle-system
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cattle-crb
    namespace: cattle-system
  subjects:
  - kind: ServiceAccount
    name: cattle-admin
    namespace: cattle-system
  roleRef:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: cattle-keys-ingress
    namespace: cattle-system
  type: Opaque
  data:
    username: dWJ1bnR1MTYwNHJ6ZXJv
    password: dGVzdDEyMyMh
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: cattle-system
    name: cattle-service
    labels:
      app: cattle
  spec:
    ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
    selector:
      app: cattle
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    namespace: cattle-system
    name: cattle-ingress-http
    annotations:
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
  spec:
    rules:
    - host: cloud.nikafarinegan.com # FQDN to access cattle server
      http:
        paths:
        - backend:
            serviceName: cattle-service
            servicePort: 80
    tls:
    - secretName: cattle-keys-ingress
      hosts:
      - cloud.nikafarinegan.com # FQDN to access cattle server
  ---
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    namespace: cattle-system
    name: cattle
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: cattle
      spec:
        serviceAccountName: cattle-admin
        containers:
        - image: 172.18.3.9:5000/rancher/rancher
          imagePullPolicy: Always
          name: cattle-server
          ports:
          - containerPort: 80
            protocol: TCP
          - containerPort: 443
            protocol: TCP
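One thing I noticed while re-reading the config: the Ingress's `tls` section references the `cattle-keys-ingress` Secret, but that Secret carries `username`/`password` keys, whereas in the Rancher air-gap template that Secret normally carries the certificate pair (`tls.crt`/`tls.key`). Secret `data` values are base64-encoded, so they can be checked quickly (the value below is a placeholder, not one from my config):

```shell
# base64-encode a value for a Secret's data field (-n avoids a trailing newline)
echo -n 'admin' | base64        # -> YWRtaW4=

# decode an existing data value to see what it actually contains
echo 'YWRtaW4=' | base64 -d     # -> admin
```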