I have been working on setting up Elasticsearch and Kibana for a project hosted on GKE (our logs are currently handled by Stackdriver). Specifically, we have opted to use the Elastic managed service, which Google offers as a partner service. So far I have followed the provided quickstart and supplemented it with this article.
I was able to get the resources from the quickstart running, but I have become badly stuck trying to work out how the logs that currently go to Stackdriver can instead be directed at the Elastic resources I have successfully set up.
Here are the resources I have applied:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vorto-server
  name: vorto-server-blank-deployment
  namespace: vorto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vorto-server
      tier: proxy
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: vorto-server
        tier: proxy
    spec:
      containers:
      - env:
        - name: BRANCH
          value: stage
        - name: GRPC_GO_REQUIRE_HANDSHAKE
          value: "off"
        - name: CONSUL_URL
          value: "consul-server:8500"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        image: shaleapps/sandbox:other-beacon
        imagePullPolicy: Always
        name: vorto-server
        ports:
        - containerPort: 50051
          name: service
          protocol: TCP
      restartPolicy: Always
      imagePullSecrets:
      - name: dockersecret
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: vorto-server
spec:
  version: 7.7.0
  nodeSets:
  - name: vorto
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: vorto-server
spec:
  version: 7.7.0
  count: 1
  elasticsearchRef:
    name: vorto-server
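For what it's worth, the Elasticsearch cluster itself does respond when I check it the way the quickstart suggests, using the `elastic` user's password from the `vorto-server-es-elastic-user` secret that ECK creates automatically:

```shell
# Fetch the elastic user's password from the ECK-generated secret
PASSWORD=$(kubectl get secret vorto-server-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')

# Port-forward the HTTP service and query cluster health
kubectl port-forward service/vorto-server-es-http 9200 &
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cluster/health"
```

So the cluster is reachable; the problem is purely that nothing is feeding logs into it.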
It feels to me that I need to do some sort of port forwarding from shaleapps/sandbox:other-beacon (which logs hello world every n seconds). I also suspect that my Service resources are inadequate, although I've used kubectl to verify that the Services created during the quickstart are there. Finally, I think I might be misunderstanding the nature of ECK: the quickstart talks about setting up a discrete cluster for Elastic, but I want it to 'handle' logging for a specific cluster.
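My current working theory is that what's missing is not port forwarding but a log shipper: nothing in what I've applied actually reads container logs and sends them to Elasticsearch. Something like a Filebeat DaemonSet writing to the `vorto-server-es-http` service is what I imagine is needed. Here is a rough sketch of what I mean (the `vorto-server-es-http` service and `vorto-server-es-elastic-user` secret are the ones ECK generated for me; the `filebeat` and `filebeat-config` names are just placeholders I made up, and I haven't gotten this working):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: vorto
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.7.0
        args: ["-e"]
        env:
        # Password for the built-in elastic user, from the ECK-created secret
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: vorto-server-es-elastic-user
              key: elastic
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          subPath: filebeat.yml
        # Node-level container logs
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: vorto
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
    output.elasticsearch:
      hosts: ["https://vorto-server-es-http:9200"]
      username: elastic
      password: ${ELASTICSEARCH_PASSWORD}
      # ECK serves a self-signed cert by default; skipping
      # verification here just for the sketch
      ssl.verification_mode: none
```

Is this the right general shape, or is there a more idiomatic way to wire ECK into an existing cluster's logging?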
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-mnc6r 1/1 Running 0 19d
consul-nxzxx 1/1 Running 0 19d
consul-server-0 1/1 Running 0 19d
consul-server-1 1/1 Running 0 19d
consul-server-2 1/1 Running 0 19d
consul-sfqvz 1/1 Running 0 19d
consul-sync-catalog-958fbd449-bb9qm 1/1 Running 0 19d
redis-master-64c984d564-lrd9p 1/1 Running 0 27h
vorto-server-blank-deployment-78785bcdb4-rjtsb 1/1 Running 0 7h7m
vorto-server-deployment-788955fd67-pflpw 1/1 Running 0 4h48m
vorto-server-es-vorto-0 1/1 Running 0 8h
vorto-server-kb-85764554db-c5rxn 1/1 Running 0 8h
vorto-web-fc9ccff9-7x4l4 1/1 Running 0 3h22m
vorto-web-fc9ccff9-jx88j 1/1 Running 0 3h22m
vorto-web-fc9ccff9-plvln 1/1 Running 0 3h22m
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.consul <none> 19d
consul-dns ClusterIP 10.81.5.19 <none> 53/TCP,53/UDP 19d
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 19d
consul-ui ClusterIP 10.81.3.51 <none> 80/TCP 19d
redis-master ClusterIP 10.81.10.152 <none> 6379/TCP 27h
vorto-server NodePort 10.81.13.230 <none> 80:31326/TCP 16d
vorto-server-es-http ClusterIP 10.81.11.69 <none> 9200/TCP 8h
vorto-server-es-transport ClusterIP None <none> 9300/TCP 8h
vorto-server-es-vorto ClusterIP None <none> <none> 8h
vorto-server-kb-http LoadBalancer 10.81.0.34 35.188.43.19 5601:30338/TCP 7h57m
vorto-web NodePort 10.81.12.22 <none> 80:30710/TCP 19d
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
consul-sync-catalog 1/1 1 1 19d
redis-master 1/1 1 1 27h
vorto-server-blank-deployment 1/1 1 1 9h
vorto-server-deployment 1/1 1 1 16d
vorto-server-kb 1/1 1 1 8h
vorto-web 3/3 3 3 19d
Any help here would be tremendously appreciated.