Expose Phusion passenger-status via HTTP

3/2/2018

When deploying a Ruby-based Passenger Standalone application via Kubernetes, we ran into the issue of losing the ability to monitor it via passenger-status. There are a Telegraf plugin and a Passenger exporter for forwarding metrics, but both need access to the output of the passenger-status binary.

Following the philosophy of having one (main) process per container, using a sidecar container to gather the metrics would be the reasonable choice when deploying to Kubernetes. Accessing the output of passenger-status from that other container is the challenge here: linking the files into another container is not supported, and setting up a shared directory for both containers and copying executables around seems overly complex.

Containers within one pod communicate via the loopback network, so exposing metrics via HTTP is a common pattern for exporting them. We are therefore looking into different ways of exposing the passenger-status metrics via HTTP:

Via the application

Running the command via Kernel#` from within the application kind of defeats the purpose of monitoring it: the request is only answered when there are enough free Passenger processes. Once the Passenger queue fills up, the monitoring stops working as well, and a full queue is exactly the situation we want to be able to see here.
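
For illustration, such an endpoint inside the application could be a controller action like the following sketch (assuming a Rails app; the route and the binary path are made up here):

    # Hypothetical controller in the main application (sketch only).
    # The request itself occupies a Passenger worker, so nothing can be
    # reported once the queue is full.
    class MonitoringController < ApplicationController
      def passenger_status
        output = `/opt/ruby/bin/passenger-status 2>&1`
        render plain: output, status: $?.success? ? 200 : 500
      end
    end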

CGI script

As nginx only supports FastCGI, something like fcgiwrap is necessary to execute CGI scripts. fcgiwrap itself has to run as another process, though, which in turn needs monitoring. Furthermore, it violates the idea of having one process per container.
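
The script executed by fcgiwrap would be trivial, for example a hypothetical Ruby CGI wrapper like this (paths are assumptions):

    #!/opt/ruby/bin/ruby
    # passenger-status.cgi (hypothetical) -- run by fcgiwrap per request
    puts "Content-Type: text/plain"
    puts
    puts `/opt/ruby/bin/passenger-status 2>&1`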

Lua script

A Lua snippet like this would probably work (it requires nginx to be built with the Lua module):

location /passenger-status {
  content_by_lua_block {
    -- capture the command output and return it as the response body
    local handle = io.popen("/opt/ruby/bin/passenger-status")
    local output = handle:read("*a")
    handle:close()
    ngx.print(output)
  }
}

However, adding Lua scripting to every production container just for this purpose seems like cracking a walnut with a sledgehammer.

Second Passenger instance

Having a second tiny Ruby app as a Passenger endpoint for the monitoring would probably also work:

http {
    ...

    server {
        listen 80;
        server_name _;
        root /app;
        passenger_enabled on;
        ...
    }

    server {
        listen 8080;
        server_name _;
        root /monitoring;
        passenger_enabled on;
        ...
    }

    ...
}
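
The tiny Ruby app under /monitoring could be little more than a Rack endpoint that shells out to passenger-status, for example a config.ru along these lines (a sketch only; paths are assumptions):

    # /monitoring/config.ru -- minimal Rack app for the second server block (sketch)
    run lambda { |env|
      output = `/opt/ruby/bin/passenger-status 2>&1`
      status = $?.success? ? 200 : 500
      [status, { "Content-Type" => "text/plain" }, [output]]
    }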

All in all, I don't find any of these approaches satisfactory. What are your thoughts or solutions on this topic?

-- Julian
kubernetes
nginx
passenger

1 Answer

2/18/2019

We went with the "Second Passenger instance" approach and now have a second Ruby process group within Passenger. As already described in the question, integration works by adding a snippet like this to your nginx.conf:

    server {
        server_name _;
        listen 0.0.0.0:10254;
        root '/monitor/public';
        passenger_app_root '/monitor';
        passenger_app_group_name 'Prometheus exporter';
        passenger_spawn_method direct;
        passenger_enabled on;
        passenger_min_instances 1;
        passenger_load_shell_envvars off;
    }

This starts another Ruby process that serves a Prometheus endpoint on http://<ip-of-this-server>:10254/metrics and exposes the Passenger metrics so they can be collected by your usual Kubernetes monitoring infrastructure. A response could look like this:

# HELP passenger_capacity Capacity used
# TYPE passenger_capacity gauge
passenger_capacity{supergroup_name="/app (development)",group_name="/app (development)",hostname="my-container"} 1
# HELP passenger_wait_list_size Requests in the queue
# TYPE passenger_wait_list_size gauge
passenger_wait_list_size{supergroup_name="/app (development)",group_name="/app (development)",hostname="my-container"} 0
# HELP passenger_processes_active Active processes
# TYPE passenger_processes_active gauge
passenger_processes_active{supergroup_name="/app (development)",group_name="/app (development)",hostname="my-container"} 0

Find the project at passenger-prometheus-exporter-app.
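
The core of such an exporter is not much code. A rough sketch of the idea (not necessarily how that project implements it; the XML element names are assumptions and may vary between Passenger versions) could be a config.ru like:

    # Hypothetical exporter config.ru (sketch): shell out to passenger-status,
    # parse the XML and render Prometheus text format.
    require "rexml/document"
    require "socket"

    run lambda { |env|
      doc  = REXML::Document.new(`/opt/ruby/bin/passenger-status --show=xml`)
      host = Socket.gethostname
      lines = ["# HELP passenger_capacity Capacity used",
               "# TYPE passenger_capacity gauge"]
      # element names below (supergroup, name, capacity_used) are assumptions
      doc.elements.each("//supergroup") do |sg|
        name     = sg.elements["name"]&.text
        capacity = sg.elements["capacity_used"]&.text
        lines << %(passenger_capacity{supergroup_name="#{name}",hostname="#{host}"} #{capacity})
      end
      [200, { "Content-Type" => "text/plain" }, [lines.join("\n") + "\n"]]
    }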

-- Julian
Source: StackOverflow