I'm new to Helm and Kubernetes and cannot figure out how to use helm install --name kibana --namespace logging stable/kibana with the Logtrail plugin enabled. I can see there's an option in the chart's values.yaml file to enable plugin installation, but I can't work out how to set it from the command line.
I've tried this without success:
helm install --name kibana --namespace logging stable/kibana \
--set plugins.enabled=true,plugins.value=logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
Update:
As Ryan suggested, it's best to provide complex settings like this via a custom values file. But as it turned out, the plugin settings above are not the only ones needed to get the Logtrail plugin working in Kibana. Logtrail also needs some configuration of its own set before you run helm install, and here's how to provide it. In your custom values file, add the following:
extraConfigMapMounts:
  - name: logtrail
    configMap: logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
After that the full content of your custom values file should look similar to this:
image:
  repository: "docker.elastic.co/kibana/kibana-oss"
  tag: "6.5.4"
  pullPolicy: "IfNotPresent"

commandline:
  args: []

env: {}
  # All Kibana configuration options are adjustable via env vars.
  # To adjust a config option to an env var uppercase + replace `.` with `_`
  # Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
  #
  # ELASTICSEARCH_URL: http://elasticsearch-client:9200
  # SERVER_PORT: 5601
  # LOGGING_VERBOSE: "true"
  # SERVER_DEFAULTROUTE: "/app/kibana"

files:
  kibana.yml:
    ## Default Kibana configuration from kibana-docker.
    server.name: kibana
    server.host: "0"
    elasticsearch.url: http://elasticsearch:9200

    ## Custom config properties below
    ## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
    # server.port: 5601
    # logging.verbose: "true"
    # server.defaultRoute: "/app/kibana"
deployment:
  annotations: {}

service:
  type: ClusterIP
  externalPort: 443
  internalPort: 5601
  # authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
  ## External IP addresses of service
  ## Default: nil
  ##
  # externalIPs:
  # - 192.168.0.1
  #
  ## LoadBalancer IP if service.type is LoadBalancer
  ## Default: nil
  ##
  # loadBalancerIP: 10.2.2.2
  annotations: {}
    # Annotation example: setup ssl with aws cert when service.type is LoadBalancer
    # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
  labels: {}
    ## Label example: show service URL in `kubectl cluster-info`
    # kubernetes.io/cluster-service: "true"
  ## Limit load balancer source ips to list of CIDRs (where available)
  # loadBalancerSourceRanges: []
ingress:
  enabled: false
  # hosts:
  #   - kibana.localhost.localdomain
  #   - localhost.localdomain/kibana
  # annotations:
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: "true"
  # tls:
  #   - secretName: chart-example-tls
  #     hosts:
  #       - chart-example.local
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  # If set and create is false, the service account must be existing
  name:

livenessProbe:
  enabled: false
  initialDelaySeconds: 30
  timeoutSeconds: 10

readinessProbe:
  enabled: false
  initialDelaySeconds: 30
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 5

# Enable an authproxy. Specify container in extraContainers
authProxyEnabled: false

extraContainers: |
# - name: proxy
#   image: quay.io/gambol99/keycloak-proxy:latest
#   args:
#     - --resource=uri=/*
#     - --discovery-url=https://discovery-url
#     - --client-id=client
#     - --client-secret=secret
#     - --listen=0.0.0.0:5602
#     - --upstream-url=http://127.0.0.1:5601
#   ports:
#     - name: web
#       containerPort: 9090

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 300Mi
  # requests:
  #   cpu: 100m
  #   memory: 300Mi
priorityClassName: ""

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3

# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
  timeout: 60
  xpackauth:
    enabled: false
    username: myuser
    password: mypass
  dashboards: {}
    # k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
# List of plugins to install using initContainer
# NOTE : We notice that lower resource constraints given to the chart + plugins are likely not going to work well.
plugins:
  # set to true to enable plugins installation
  enabled: true
  # set to true to remove all kibana plugins before installation
  reset: false
  # Use <plugin_name,version,url> to add/upgrade plugin
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
    # - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
    # - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
    # - other_plugin
persistentVolumeClaim:
  # set to true to use pvc
  enabled: false
  # set to true to use your own pvc
  existingClaim: false
  annotations: {}

  accessModes:
    - ReadWriteOnce
  size: "5Gi"
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

# default security context
securityContext:
  enabled: false
  allowPrivilegeEscalation: false
  runAsUser: 1000
  fsGroup: 2000

extraConfigMapMounts:
  - name: logtrail
    configMap: logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
And the last thing you should do is add this ConfigMap resource to Kubernetes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logtrail
  namespace: logging
data:
  logtrail.json: |
    {
      "version" : 1,
      "index_patterns" : [
        {
          "es": {
            "default_index": "logstash-*"
          },
          "tail_interval_in_seconds": 10,
          "es_index_time_offset_in_seconds": 0,
          "display_timezone": "local",
          "display_timestamp_format": "MMM DD HH:mm:ss",
          "max_buckets": 500,
          "default_time_range_in_days" : 0,
          "max_hosts": 100,
          "max_events_to_keep_in_viewer": 5000,
          "fields" : {
            "mapping" : {
              "timestamp" : "@timestamp",
              "hostname" : "kubernetes.host",
              "program": "kubernetes.pod_name",
              "message": "log"
            },
            "message_format": "{{{log}}}"
          },
          "color_mapping" : {
          }
        }]
    }
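If you keep that manifest in a file, you can create it with kubectl before installing the chart. The file name below is just an example; any name works as long as the file contains the ConfigMap above:
# hypothetical file name containing the ConfigMap manifest above
kubectl apply -f logtrail-configmap.yaml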
After that you're ready to run helm install with the custom values file specified via the -f flag.
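For example, assuming the custom values file above was saved as kibana-values.yaml (the file name here is just an illustration):
# install the chart with the custom values file
helm install --name kibana --namespace logging stable/kibana -f kibana-values.yaml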
Getting input with --set that matches what the example in the values file has is a bit tricky. Following the example, we want the values to be:
plugins:
  enabled: true
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
The plugins.values entry here is tricky because it is an array, which means you need to enclose it in {}. And the relevant entry contains commas, which have to be escaped with a backslash. To get it to match you can use:
helm install --name kibana --namespace logging stable/kibana --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"}
If you add --dry-run --debug then you can see the computed values for any command you run, including with --set, so this can help check the match. This kind of value is easier to set with a custom values file referenced with -f, as that avoids having to work out how the --set string evaluates to values.
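For example, appending those flags to the command above renders the templates without installing anything and prints the computed values, so you can confirm the plugins section came out as intended:
# render only; nothing is installed, computed values are printed
helm install --name kibana --namespace logging stable/kibana \
  --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"} \
  --dry-run --debug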