I have a minimal (working) presto installation (one coordinator and a worker on the same host) that I want to expand. I've created a container with a worker node and it works when deployed via docker (i.e., it shows up in the presto CLI):
select * from system.runtime.nodes;
When I move said container to my k8s cluster and create a few pods, it seems that the pods can contact the coordinator, but they never show up in the CLI. The logs for the pods show that they have discovered the coordinator, and there aren't any error messages in the coordinator logs, so I'm puzzled as to where the disconnect is. Here is the deployment I'm using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-worker
spec:
  type: NodePort
  selector:
    matchLabels:
      app: presto-worker
  replicas: 2
  template:
    metadata:
      labels:
        app: presto-worker
    spec:
      containers:
      - name: presto-image
        image: docker.io/mystuff/presto-image:latest
        ports:
        - containerPort: 8080
Here's a working Helm chart (i.e., a templated package of k8s resources) for presto: https://github.com/helm/charts/tree/master/stable/presto.
Here's the basic design of a presto cluster in k8s from the above chart: you need discovery-server.enabled=true in <presto-home>/etc/config.properties of your presto coordinator, or your coordinator is not discoverable at all (i.e., it cannot accept external worker processes over the network). According to this question, you also need to make sure the presto coordinator is reachable by the worker processes via a DNS name like http://my-presto-coordinator:8080.
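For concreteness, here's a minimal sketch of the config this implies, using only standard presto properties; the hostname my-presto-coordinator and port 8080 are assumptions carried over from the example URL above:

# <presto-home>/etc/config.properties on the coordinator
coordinator=true
node-scheduler.include-coordinator=false
http-server.http.port=8080
discovery-server.enabled=true
discovery.uri=http://my-presto-coordinator:8080

# <presto-home>/etc/config.properties on each worker
coordinator=false
http-server.http.port=8080
discovery.uri=http://my-presto-coordinator:8080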
This is what I got from the chart stable/presto by running helm template . (which renders all the templates to stdout). You will need to replace RELEASE-NAME with a lowercase string to use it:
---
# Source: charts/presto/templates/deployment-worker.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: RELEASE-NAME-presto-worker
  labels:
    app: presto
    chart: presto-0.1
    release: RELEASE-NAME
    heritage: Tiller
    component: worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: presto
      release: RELEASE-NAME
      component: worker
  template:
    metadata:
      labels:
        app: presto
        release: RELEASE-NAME
        component: worker
    spec:
      volumes:
      - name: config-volume
        configMap:
          name: RELEASE-NAME-presto-worker
      containers:
      - name: presto-worker
        image: "bivas/presto:0.196"
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args:
        - /etc/presto/docker-presto.sh
        volumeMounts:
        - mountPath: /etc/presto
          name: config-volume
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - /etc/presto/health_check.sh
          initialDelaySeconds: 10
          periodSeconds: 25
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - /etc/presto/health_check.sh
          initialDelaySeconds: 5
          periodSeconds: 10
        resources:
          {}
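The DNS name the workers point at (my-presto-coordinator in the example above) has to come from a Service in front of the coordinator pod. A minimal sketch, assuming the coordinator pods carry the labels app: presto and component: coordinator, mirroring the worker labels in the chart above:

---
apiVersion: v1
kind: Service
metadata:
  name: my-presto-coordinator
spec:
  selector:
    app: presto
    component: coordinator
  ports:
  - name: http
    port: 8080

Cluster DNS resolves that Service name inside the cluster, so discovery.uri=http://my-presto-coordinator:8080 works from any worker pod in the same namespace.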
It looks like you've combined parts of a Deployment and a Service; they're two different objects. You can break this up:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: presto-worker
spec:
  selector:
    matchLabels:
      app: presto-worker
  replicas: 2
  template:
    metadata:
      labels:
        app: presto-worker
    spec:
      containers:
      - name: presto-image
        image: docker.io/mystuff/presto-image:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: presto-worker
spec:
  type: NodePort
  selector:
    app: presto-worker
  ports:
  - name: http
    port: 8080
The Service selector points at specific pods; it needs to match the labels in the Deployment spec's pod template, and, unlike a Deployment selector, it is a plain label map with no matchLabels key. The Deployment selector names the pods that the Deployment manages. In both cases they point at pods, but they're for different purposes.
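If the labels don't line up, the Service silently selects no pods and the workers never receive traffic. One quick way to check, assuming the names above:

# pods the Deployment created, selected by label
kubectl get pods -l app=presto-worker

# endpoints the Service actually selected; an empty list means a selector mismatch
kubectl get endpoints presto-worker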