I have deployed Open Distro for Elasticsearch using a Helm chart that I modified myself.
The Kibana Kubernetes Service looks like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
  creationTimestamp: "2019-09-05T15:29:04Z"
  labels:
    app: opendistro-es
    chart: opendistro-es-1.0.0
    heritage: Tiller
    release: opendistro-es
  name: opendistro-es-kibana
  namespace: elasticsearch
  resourceVersion: "48313341"
  selfLink: /api/v1/namespaces/elasticsearch/services/opendistro-es-kibana
  uid: e5066171-cff1-11e9-bb87-42010a8401d0
spec:
  clusterIP: 10.15.246.245
  ports:
  - name: opendistro-es-kibana
    port: 443
    protocol: TCP
    targetPort: 5601
  selector:
    app: opendistro-es-kibana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
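For what it's worth, the Service's selector matches the Pod's app label below, so the Pod should be registered as an endpoint; a quick check along these lines would confirm that:

  kubectl -n elasticsearch get endpoints opendistro-es-kibana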
The Pod itself looks like this:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: a4af5a55572dd6587cb86b0e6b3758f682c23745ad114448ce93c19e9612b6a
  creationTimestamp: "2019-09-05T15:29:04Z"
  generateName: opendistro-es-kibana-5f78f46bb-
  labels:
    app: opendistro-es-kibana
    chart: opendistro-es-1.0.0
    heritage: Tiller
    pod-template-hash: 5f78f46bb
    release: opendistro-es
  name: opendistro-es-kibana-5f78f46bb-8pqfs
  namespace: elasticsearch
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: opendistro-es-kibana-5f78f46bb
    uid: e4a7a0fe-cff1-11e9-bb87-42010a8401d0
  resourceVersion: "48313352"
  selfLink: /api/v1/namespaces/elasticsearch/pods/opendistro-es-kibana-5f78f46bb-8pqfs
  uid: e4acd8b3-cff1-11e9-bb87-42010a8401d0
spec:
  containers:
  - env:
    - name: CLUSTER_NAME
      value: elasticsearch
    image: amazon/opendistro-for-elasticsearch-kibana:1.0.2
    imagePullPolicy: IfNotPresent
    name: opendistro-es-kibana
    ports:
    - containerPort: 5601
      protocol: TCP
    resources:
      limits:
        cpu: 2500m
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 512Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/share/kibana/config/kibana.yml
      name: config
      subPath: kibana.yml
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: opendistro-es-kibana-token-9g8mq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: gke-ehealth-africa-d-concourse-ci-poo-98690882-h3lj
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: opendistro-es-kibana
  serviceAccountName: opendistro-es-kibana
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: opendistro-es-security-config
    name: security-config
  - name: config
    secret:
      defaultMode: 420
      secretName: opendistro-es-kibana-config
  - name: opendistro-es-kibana-token-9g8mq
    secret:
      defaultMode: 420
      secretName: opendistro-es-kibana-token-9g8mq
Unfortunately, when I try to curl the Kibana Service by name, I get connection refused:
curl: (7) Failed connect to opendistro-es-kibana:443; Connection refused
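One way to narrow this down (a sketch, assuming curl is available inside the Kibana image and that Kibana is serving plain HTTP on 5601) is to test whether anything is listening inside the Pod itself, bypassing the Service:

  # probe Kibana's port directly from inside the running Pod
  kubectl -n elasticsearch exec opendistro-es-kibana-5f78f46bb-8pqfs -- \
    curl -sk http://localhost:5601

If that succeeds while curl against the Service name is refused, the process is up but not reachable on the Pod's own IP.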
When I use
kubectl port-forward svc/opendistro-es-kibana 5601:443
I'm able to access Kibana. Any pointers as to what I'm missing would be very much appreciated!
Your Service is of type ClusterIP, therefore it's not accessible from outside the cluster. Change the type to NodePort to make it accessible via <your_node_ip>:<your_node_port>.
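For example, a minimal sketch of that change (the explicit nodePort value here is hypothetical; leave it out and Kubernetes will allocate one from the default 30000-32767 range):

  apiVersion: v1
  kind: Service
  metadata:
    name: opendistro-es-kibana
    namespace: elasticsearch
  spec:
    type: NodePort              # changed from ClusterIP
    selector:
      app: opendistro-es-kibana
    ports:
    - name: opendistro-es-kibana
      port: 443
      targetPort: 5601
      nodePort: 30601           # hypothetical; must fall in the cluster's NodePort range

Kibana would then be reachable at <your_node_ip>:30601.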
A better solution would be to use a Kubernetes Ingress to accept external traffic.
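A minimal sketch of such an Ingress (using the networking.k8s.io/v1 schema available on current Kubernetes versions, assuming an ingress controller such as ingress-nginx is already installed; the hostname is made up):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: opendistro-es-kibana
    namespace: elasticsearch
  spec:
    rules:
    - host: kibana.example.com       # hypothetical hostname
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: opendistro-es-kibana
              port:
                number: 443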
OK, I managed to fix it: by default, the Kibana server was only listening on the loopback interface, so nothing was accepting connections on the Pod IP that the Service forwards to. That would also explain why kubectl port-forward worked, since forwarded connections arrive on the Pod's loopback interface. After setting server.host: "0.0.0.0" in kibana.yml, it works fine.
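For anyone hitting the same thing, this is the relevant part of the config, i.e. the kibana.yml mounted from the opendistro-es-kibana-config Secret at /usr/share/kibana/config/kibana.yml (a minimal excerpt; all other settings omitted):

  # kibana.yml (excerpt)
  server.port: 5601          # matches the container's targetPort
  server.host: "0.0.0.0"     # bind on all interfaces; the default binds loopback only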
Thanks for the suggestions