Linkerd and k8s not working

4/8/2017

I'm trying to get my head around Linkerd in Kubernetes. I'm using the Linkerd daemonset example from their website in my local minikube.

It is all deployed in the production namespace. When I run the following:

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

Nothing happens. Where am I going wrong in my setup?

My Linkerd yaml:

# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset

    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
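
For reference, the daemonset publishes linkerd's outgoing router on hostPort 4140 on every node, so on minikube the proxy should also be reachable through the node's IP rather than the LoadBalancer address. A minimal sketch of exercising it that way (single minikube node assumed; the /readinezs path is taken from the command above):

# send a test request through linkerd's outgoing router via the node's hostPort
http_proxy=$(minikube ip):4140 curl -s http://apiserver/readinezs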

Here's my deployment for an apiservice:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"

Here's the service:

kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080
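
Since the io.l5d.k8s namer routes to the endpoints behind the port named http, it's also worth confirming the service has endpoints at all; a small sketch of that check (namespace taken from the question):

kubectl --namespace=production get endpoints apiserver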

In my Node application I'm using global-tunnel:

// assuming the global-tunnel package is required somewhere above
const globalTunnel = require('global-tunnel');

const server = app.listen(port);
server.on('listening', function(){

  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });

  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')}`);
});
-- Tino
feathersjs
kubernetes
linkerd

2 Answers

4/9/2017

After deploying two of the same node applications and making them send requests to each other, it worked. Weirdly, the requests don't show up in the linkerd dashboard.

-- Tino
Source: StackOverflow

4/9/2017

Where is your curl command being run?

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

The linkerd service in the example doesn't expose a public IP address. You can confirm this with kubectl --namespace=production get svc l5d; I expect you'll see no external IP.

I think that you'll need to modify the service definition, or create an additional, explicitly external service that exposes a ClusterIP, in order to receive ingress traffic.
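
A sketch of that check, using the namespace from the question:

kubectl --namespace=production get svc l5d
# on minikube a LoadBalancer service usually never gets an external IP, so the jsonpath
# {.status.loadBalancer.ingress[0].*} in the original command expands to nothing and the
# curl ends up with http_proxy=":4140"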

-- Oliver Gould
Source: StackOverflow