Kubernetes ingress sending requests to 2 different services

5/29/2020

Update: I reproduced this in its own repo https://github.com/ericwooley/repro-ingress-issue

I'm trying to set up ingress on minikube with the ingress addon.
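
For context, that's the stock addon, which I believe is enabled with:

minikube addons enable ingress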

I have two services, each with its own deployment.

Service 1 is email:

apiVersion: v1
kind: Service
metadata:
  name: email
  labels:
    app: email
spec:
  selector:
    app: email
  ports:
  - name: http
    port: 8081
    targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email
spec:
  selector:
    matchLabels:
      app: email
  replicas: 1
  template:
    metadata:
      labels:
        app: email
    spec:
      containers:
      - name: email
        image: ericwooley/auth
        ports:
        - name: http
          containerPort: 8080

Service 2 is auth:

apiVersion: v1
kind: Service
metadata:
  name: auth
  labels:
    app: auth
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: http
      name: http
      protocol: TCP
  selector:
    app: auth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  selector:
    matchLabels:
      app: auth
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
    spec:
      restartPolicy: Always
      containers:
        - name: auth
          image: ericwooley/auth
          imagePullPolicy: Never
          ports:
            - name: http
              containerPort: 8080
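
(imagePullPolicy: Never is there because I build the image straight into minikube's Docker daemon rather than pulling from a registry — roughly this workflow, assuming the docker driver:)

# point docker at minikube's daemon, then build so no registry pull is needed
eval $(minikube docker-env)
docker build -t ericwooley/auth .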

And my ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: entry
spec:
  rules:
  - host: api.tengable.com
    http:
      paths:
      - path: /openapi
        pathType: Prefix
        backend:
          serviceName: auth
          servicePort: http

Navigating to /openapi sends requests to both services. I can't figure out why all of the traffic isn't going to the auth service.

Logs from the services:

[auth-7b56b5dc98-jlzzz auth] ::ffff:172.17.0.2 - GET /openapi/ HTTP/1.1 200 3104 - 29.375 ms
[email-57679f8d9b-75j6r email] ::ffff:172.17.0.2 - GET /openapi/swagger-ui.css HTTP/1.1 200 141841 - 1.339 ms
[email-57679f8d9b-75j6r email] ::ffff:172.17.0.2 - GET /openapi/swagger-ui-init.js HTTP/1.1 200 5519 - 0.774 ms
[email-57679f8d9b-75j6r email] ::ffff:172.17.0.2 - GET /openapi/swagger-ui-standalone-preset.js HTTP/1.1 200 307009 - 17.737 ms
[auth-7b56b5dc98-jlzzz auth] ::ffff:172.17.0.2 - GET /openapi/swagger-ui-bundle.js HTTP/1.1 200 984506 - 1.126 ms
[auth-7b56b5dc98-jlzzz auth] ::ffff:172.17.0.2 - GET /openapi/favicon-32x32.png HTTP/1.1 200 628 - 1.453 ms

I'm fairly new to Kubernetes, so any debugging tips would be awesome as well.

EDIT: Added describe output for the entry ingress and the pod list.
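
For reference, the output below came from roughly these commands (names are from my cluster):

kubectl describe ingress entry
kubectl get pods -o wide

(Comparing those against kubectl get endpoints auth email -o wide should show which pod IPs actually back each service.)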

Describe output for the entry ingress:

Name:             entry
Namespace:        default
Address:          192.168.39.189
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  api.dev.tengable.com  
                        /openapi   auth:http (172.17.0.8:8080)
Annotations:
Events:
  Type                  Reason  Age                    From                      Message
  ----                  ------  ----                   ----                      -------
  Normal                CREATE  8m33s                  nginx-ingress-controller  Ingress default/entry
  Normal                UPDATE  7m50s (x2 over 8m28s)  nginx-ingress-controller  Ingress default/entry

Pods:

NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
auth-759b6cd8f8-2pvzj       1/1     Running   5          3m33s   172.17.0.7    minikube   <none>           <none>
email-589cffd945-kql25      1/1     Running   0          2m59s   172.17.0.8    minikube   <none>           <none>
postgres-68c74677b6-8vxg9   1/1     Running   0          2m59s   172.17.0.9    minikube   <none>           <none>
redis-85776b6757-lh9kc      1/1     Running   0          2m53s   172.17.0.10   minikube   <none>           <none>

Minikube ingress controller pod log:

172.17.0.1 - - [29/May/2020:19:05:28 +0000] "GET /openapi/ HTTP/1.1" 200 1242 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 403 0.047 [default-auth-http] [] 172.17.0.9:8080 3104 0.047 200 6e1bc857061b3758b3f21735d6d65839
172.17.0.1 - - [29/May/2020:19:05:28 +0000] "GET /openapi/swagger-ui.css HTTP/1.1" 200 22708 "http://api.dev.tengable.com:8080/openapi/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 383 0.015 [default-auth-http] [] 172.17.0.7:8080 141841 0.016 200 4719376be00e46842f510ab4eedc8d1c
172.17.0.1 - - [29/May/2020:19:05:28 +0000] "GET /openapi/swagger-ui-init.js HTTP/1.1" 200 1285 "http://api.dev.tengable.com:8080/openapi/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 372 0.043 [default-auth-http] [] 172.17.0.7:8080 5519 0.043 200 109e5ed08dd54e78dcdaed526973dfde
172.17.0.1 - - [29/May/2020:19:05:28 +0000] "GET /openapi/swagger-ui-standalone-preset.js HTTP/1.1" 200 99126 "http://api.dev.tengable.com:8080/openapi/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 385 0.080 [default-auth-http] [] 172.17.0.9:8080 307009 0.080 200 a4e85558a69082ac5bec3dfe4f5c1300
172.17.0.1 - - [29/May/2020:19:05:28 +0000] "GET /openapi/swagger-ui-bundle.js HTTP/1.1" 200 310454 "http://api.dev.tengable.com:8080/openapi/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 374 0.123 [default-auth-http] [] 172.17.0.9:8080 984506 0.123 200 bf5e3fb008ccf54f40de1923e0df56b7
172.17.0.1 - - [29/May/2020:19:05:29 +0000] "GET /openapi/favicon-32x32.png HTTP/1.1" 200 628 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 330 0.003 [default-auth-http] [] 172.17.0.9:8080 628 0.004 200 1db7d7acc4699409a4b7bf0c2060ae2e
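
(These lines came from tailing the controller pod in kube-system, i.e. something like:

kubectl logs -n kube-system ingress-nginx-controller-7bb4c67d67-cnnlz -f

Note that every line is labeled [default-auth-http], yet the upstream addresses alternate between two different pod IPs, 172.17.0.7 and 172.17.0.9.)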

EDIT 2: I ran k exec -it -n kube-system ingress-nginx-controller-7bb4c67d67-cnnlz cat /etc/nginx/nginx.conf and got the generated nginx.conf as well:

# Configuration checksum: 11102474080819284102

# setup custom paths that do not require root access
pid /tmp/nginx.pid;

daemon off;

worker_processes 2;

worker_rlimit_nofile 523264;

worker_shutdown_timeout 240s ;

events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}

http {
    lua_package_path "/etc/nginx/lua/?.lua;;";

    lua_shared_dict balancer_ewma 10M;
    lua_shared_dict balancer_ewma_last_touched_at 10M;
    lua_shared_dict balancer_ewma_locks 1M;
    lua_shared_dict certificate_data 20M;
    lua_shared_dict certificate_servers 5M;
    lua_shared_dict configuration_data 20M;
    lua_shared_dict ocsp_response_cache 5M;

    init_by_lua_block {
        collectgarbage("collect")

        -- init modules
        local ok, res

        ok, res = pcall(require, "lua_ingress")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        lua_ingress = res
        lua_ingress.set_config({
            use_forwarded_headers = false,
            use_proxy_protocol = false,
            is_ssl_passthrough_enabled = false,
            http_redirect_code = 308,
        listen_ports = { ssl_proxy = "442", https = "443" },

            hsts = false,
            hsts_max_age = 15724800,
            hsts_include_subdomains = true,
            hsts_preload = false,
        })
        end

        ok, res = pcall(require, "configuration")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        configuration = res
        end

        ok, res = pcall(require, "balancer")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        balancer = res
        end

        ok, res = pcall(require, "monitor")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        monitor = res
        end

        ok, res = pcall(require, "certificate")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        certificate = res
        certificate.is_ocsp_stapling_enabled = false
        end

        ok, res = pcall(require, "plugins")
        if not ok then
        error("require failed: " .. tostring(res))
        else
        plugins = res
        end
        -- load all plugins that'll be used here
    plugins.init({  })
    }

    init_worker_by_lua_block {
        lua_ingress.init_worker()
        balancer.init_worker()

        monitor.init_worker()

        plugins.run()
    }

    geoip_country       /etc/nginx/geoip/GeoIP.dat;
    geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
    geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
    geoip_proxy_recursive on;

    aio                 threads;
    aio_write           on;

    tcp_nopush          on;
    tcp_nodelay         on;

    log_subrequest      on;

    reset_timedout_connection on;

    keepalive_timeout  75s;
    keepalive_requests 100;

    client_body_temp_path           /tmp/client-body;
    fastcgi_temp_path               /tmp/fastcgi-temp;
    proxy_temp_path                 /tmp/proxy-temp;
    ajp_temp_path                   /tmp/ajp-temp;

    client_header_buffer_size       1k;
    client_header_timeout           60s;
    large_client_header_buffers     4 8k;
    client_body_buffer_size         8k;
    client_body_timeout             60s;

    http2_max_field_size            4k;
    http2_max_header_size           16k;
    http2_max_requests              1000;
    http2_max_concurrent_streams    128;

    types_hash_max_size             2048;
    server_names_hash_max_size      1024;
    server_names_hash_bucket_size   64;
    map_hash_bucket_size            64;

    proxy_headers_hash_max_size     512;
    proxy_headers_hash_bucket_size  64;

    variables_hash_bucket_size      256;
    variables_hash_max_size         2048;

    underscores_in_headers          off;
    ignore_invalid_headers          on;

    limit_req_status                503;
    limit_conn_status               503;

    include /etc/nginx/mime.types;
    default_type text/html;

    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;

    # Custom headers for response

    server_tokens on;

    # disable warnings
    uninitialized_variable_warn off;

    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    # $service_port
    log_format upstreaminfo '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';

    map $request_uri $loggable {

        default 1;
    }

    access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;

    error_log  /var/log/nginx/error.log notice;

    resolver 10.96.0.10 valid=30s ipv6=off;

    # See https://www.nginx.com/blog/websocket-nginx
    map $http_upgrade $connection_upgrade {
        default          upgrade;

        # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
        ''               '';

    }

    # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
    # If no such header is provided, it can provide a random value.
    map $http_x_request_id $req_id {
        default   $http_x_request_id;

        ""        $request_id;

    }

    # Create a variable that contains the literal $ character.
    # This works because the geo module will not resolve variables.
    geo $literal_dollar {
        default "
quot;;
} server_name_in_redirect off; port_in_redirect off; ssl_protocols TLSv1.2; ssl_early_data off; # turn on session caching to drastically improve performance ssl_session_cache builtin:1000 shared:SSL:10m; ssl_session_timeout 10m; # allow configuring ssl session tickets ssl_session_tickets on; # slightly reduce the time-to-first-byte ssl_buffer_size 4k; # allow configuring custom ssl ciphers ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384'; ssl_prefer_server_ciphers on; ssl_ecdh_curve auto; # PEM sha: db0c5d7bbddcf6bfa82f6a621647e21c5e3bc1bd ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem; ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem; proxy_ssl_session_reuse on; upstream upstream_balancer { ### Attention!!! # # We no longer create "upstream" section for every backend. # Backends are handled dynamically using Lua. If you would like to debug # and see what backends ingress-nginx has in its memory you can # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin. # Once you have the plugin you can use "kubectl ingress-nginx backends" command to # inspect current backends. # ### server 0.0.0.1; # placeholder balancer_by_lua_block { balancer.balance() } keepalive 32; keepalive_timeout 60s; keepalive_requests 100; } # Cache for internal auth checks proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off; # Global filters ## start server _ server { server_name _ ; listen 80 default_server reuseport backlog=511 ; listen 443 default_server reuseport backlog=511 ssl http2 ; set $proxy_upstream_name "-"; ssl_certificate_by_lua_block { certificate.call() } location / { set $namespace ""; set $ingress_name ""; set $service_name ""; set $service_port ""; set $location_path "/"; rewrite_by_lua_block { lua_ingress.rewrite({ force_ssl_redirect = false, ssl_redirect = false, force_no_ssl_redirect = false, use_port_in_redirects = false, }) balancer.rewrite() plugins.run() } # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)` # other authentication method such as basic auth or external auth useless - all requests will be allowed. 
#access_by_lua_block { #} header_filter_by_lua_block { lua_ingress.header() plugins.run() } body_filter_by_lua_block { } log_by_lua_block { balancer.log() monitor.call() plugins.run() } access_log off; port_in_redirect off; set $balancer_ewma_score -1; set $proxy_upstream_name "upstream-default-backend"; set $proxy_host $proxy_upstream_name; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; set $proxy_alternative_upstream_name ""; client_max_body_size 1m; proxy_set_header Host $best_http_host; # Pass the extracted client certificate to the backend # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Request-ID $req_id; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Host $best_http_host; proxy_set_header X-Forwarded-Port $pass_port; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_set_header X-Scheme $pass_access_scheme; # Pass the original X-Forwarded-For proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for; # mitigate HTTPoxy Vulnerability # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ proxy_set_header Proxy ""; # Custom headers to proxied server proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_buffering off; proxy_buffer_size 4k; proxy_buffers 4 4k; proxy_max_temp_file_size 1024m; proxy_request_buffering on; proxy_http_version 1.1; proxy_cookie_domain off; proxy_cookie_path off; # In case of errors try the next upstream server before returning an error proxy_next_upstream error timeout; proxy_next_upstream_timeout 0; proxy_next_upstream_tries 3; proxy_pass http://upstream_balancer; proxy_redirect off; } # health checks in cloud providers require the use of port 80 location /healthz { access_log off; return 200; } # this is required to avoid error if nginx is being monitored # with an external software (like sysdig) location /nginx_status { allow 127.0.0.1; deny all; access_log off; stub_status on; } } ## end server _ ## start server api.dev.tengable.com server { server_name api.dev.tengable.com ; listen 80 ; listen 443 ssl http2 ; set $proxy_upstream_name "-"; ssl_certificate_by_lua_block { certificate.call() } location /openapi { set $namespace "default"; set $ingress_name "entry"; set $service_name "auth"; set $service_port "http"; set $location_path "/openapi"; rewrite_by_lua_block { lua_ingress.rewrite({ force_ssl_redirect = false, ssl_redirect = true, force_no_ssl_redirect = false, use_port_in_redirects = false, }) balancer.rewrite() plugins.run() } # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)` # other authentication method such as basic auth or external auth useless - all requests will be allowed. 
#access_by_lua_block { #} header_filter_by_lua_block { lua_ingress.header() plugins.run() } body_filter_by_lua_block { } log_by_lua_block { balancer.log() monitor.call() plugins.run() } port_in_redirect off; set $balancer_ewma_score -1; set $proxy_upstream_name "default-auth-http"; set $proxy_host $proxy_upstream_name; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; set $proxy_alternative_upstream_name ""; client_max_body_size 1m; proxy_set_header Host $best_http_host; # Pass the extracted client certificate to the backend # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Request-ID $req_id; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Host $best_http_host; proxy_set_header X-Forwarded-Port $pass_port; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_set_header X-Scheme $pass_access_scheme; # Pass the original X-Forwarded-For proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for; # mitigate HTTPoxy Vulnerability # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ proxy_set_header Proxy ""; # Custom headers to proxied server proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_buffering off; proxy_buffer_size 4k; proxy_buffers 4 4k; proxy_max_temp_file_size 1024m; proxy_request_buffering on; proxy_http_version 1.1; proxy_cookie_domain off; proxy_cookie_path off; # In case of errors try the next upstream server before returning an error proxy_next_upstream error timeout; proxy_next_upstream_timeout 0; proxy_next_upstream_tries 3; proxy_pass http://upstream_balancer; proxy_redirect off; } location / { set $namespace ""; set $ingress_name ""; set $service_name ""; set $service_port ""; set $location_path "/"; rewrite_by_lua_block { lua_ingress.rewrite({ force_ssl_redirect = false, ssl_redirect = true, force_no_ssl_redirect = false, use_port_in_redirects = false, }) balancer.rewrite() plugins.run() } # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)` # other authentication method such as basic auth or external auth useless - all requests will be allowed. 
#access_by_lua_block { #} header_filter_by_lua_block { lua_ingress.header() plugins.run() } body_filter_by_lua_block { } log_by_lua_block { balancer.log() monitor.call() plugins.run() } port_in_redirect off; set $balancer_ewma_score -1; set $proxy_upstream_name "upstream-default-backend"; set $proxy_host $proxy_upstream_name; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; set $proxy_alternative_upstream_name ""; client_max_body_size 1m; proxy_set_header Host $best_http_host; # Pass the extracted client certificate to the backend # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Request-ID $req_id; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Host $best_http_host; proxy_set_header X-Forwarded-Port $pass_port; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_set_header X-Scheme $pass_access_scheme; # Pass the original X-Forwarded-For proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for; # mitigate HTTPoxy Vulnerability # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ proxy_set_header Proxy ""; # Custom headers to proxied server proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_buffering off; proxy_buffer_size 4k; proxy_buffers 4 4k; proxy_max_temp_file_size 1024m; proxy_request_buffering on; proxy_http_version 1.1; proxy_cookie_domain off; proxy_cookie_path off; # In case of errors try the next upstream server before returning an error proxy_next_upstream error timeout; proxy_next_upstream_timeout 0; proxy_next_upstream_tries 3; proxy_pass http://upstream_balancer; proxy_redirect off; } } ## end server api.dev.tengable.com # backend for when default-backend-service is not configured or it does not have endpoints server { listen 8181 default_server reuseport backlog=511; set $proxy_upstream_name "internal"; access_log off; location / { return 404; } } # default server, used for NGINX healthcheck and access to nginx stats server { listen 127.0.0.1:10246; set $proxy_upstream_name "internal"; keepalive_timeout 0; gzip off; access_log off; location /healthz { return 200; } location /is-dynamic-lb-initialized { content_by_lua_block { local configuration = require("configuration") local backend_data = configuration.get_backends_data() if not backend_data then ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR) return end ngx.say("OK") ngx.exit(ngx.HTTP_OK) } } location /nginx_status { stub_status on; } location /configuration { client_max_body_size 21m; client_body_buffer_size 21m; proxy_buffering off; content_by_lua_block { configuration.call() } } location / { content_by_lua_block { ngx.exit(ngx.HTTP_NOT_FOUND) } } } } stream { lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;"; lua_shared_dict tcp_udp_configuration_data 5M; init_by_lua_block { collectgarbage("collect") -- init modules local ok, res ok, res = pcall(require, "configuration") if not ok then error("require failed: " .. tostring(res)) else configuration = res end ok, res = pcall(require, "tcp_udp_configuration") if not ok then error("require failed: " .. tostring(res)) else tcp_udp_configuration = res end ok, res = pcall(require, "tcp_udp_balancer") if not ok then error("require failed: " .. 
tostring(res)) else tcp_udp_balancer = res end } init_worker_by_lua_block { tcp_udp_balancer.init_worker() } lua_add_variable $proxy_upstream_name; log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time'; access_log /var/log/nginx/access.log log_stream ; error_log /var/log/nginx/error.log; upstream upstream_balancer { server 0.0.0.1:1234; # placeholder balancer_by_lua_block { tcp_udp_balancer.balance() } } server { listen 127.0.0.1:10247; access_log off; content_by_lua_block { tcp_udp_configuration.call() } } # TCP services # UDP services }
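
One detail worth noting in that config: the upstream_balancer block says backends are no longer rendered into nginx.conf and are handled dynamically in Lua, so the config alone can't show which pod IPs a location will balance across. The kubectl plugin it references can dump the live backend list (a sketch, assuming the plugin is installed via krew and the controller runs in kube-system):

kubectl ingress-nginx backends -n kube-system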
-- Eric Wooley
kubernetes
minikube
nginx-ingress

1 Answer

5/30/2020

GitHub user aledbf actually figured it out: https://github.com/kubernetes/ingress-nginx/issues/5622

The problem was that kustomize was applying

commonLabels:
  app: something

which overrode all of the selectors I had set (app: auth, app: email, etc.), so both Services ended up selecting the same pods.
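
For illustration, a kustomization.yaml along these lines is enough to reproduce it (a sketch, not the exact repo contents):

# kustomization.yaml (sketch)
# commonLabels is applied to metadata.labels AND to Service spec.selector /
# Deployment spec.selector.matchLabels, so every selector becomes app=something
# and both Services end up selecting all of the pods.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - auth.yaml
  - email.yaml
commonLabels:
  app: something

The fix was to drop the app key from commonLabels (or use a label key that isn't also a selector key), so each Service keeps its own app: auth / app: email selector.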

-- Eric Wooley
Source: StackOverflow