Change log level at runtime for containers

7/22/2019

I'm using logrus for logging in our applications, which run on Kubernetes as Docker containers. We have an environment variable that sets the log level, and we change it by restarting the application. Now we want to change the log level at runtime, i.e. without restarting the container, so that while it is running we can switch it from error to debug. I think this is a legitimate request, but I didn't find any reference or open source project that does this. Any idea?

package logs

import (
    "fmt"
    "os"

    "github.com/sirupsen/logrus"
)

const (
    // AppLogLevel is the environment variable that selects the log level.
    AppLogLevel = "APP_LOG_LEVEL"
    // DefLvl is the level used when the variable is not set.
    DefLvl = "info"
)


var Logger *logrus.Logger


func NewLogger() *logrus.Logger {
    // Resolve the level from the environment; getLogLevel falls back to DefLvl,
    // and logLevel panics on an unsupported value.
    level := logLevel(getLogLevel())
    logger := &logrus.Logger{
        Out:       os.Stdout,
        Formatter: new(logrus.TextFormatter),
        Hooks:     make(logrus.LevelHooks),
        Level:     level,
    }
    Logger = logger
    return Logger
}

// getLogLevel reads the desired level from the environment, falling back to DefLvl.
func getLogLevel() string {
    if lvl, ok := os.LookupEnv(AppLogLevel); ok && lvl != "" {
        return lvl
    }
    return DefLvl
}

// logLevel maps a level name to the corresponding logrus level.
func logLevel(lvl string) logrus.Level {
    switch lvl {
    case "debug":
        // Used for tracing
        return logrus.DebugLevel
    case "info":
        return logrus.InfoLevel
    case "error":
        return logrus.ErrorLevel
    case "fatal":
        return logrus.FatalLevel
    default:
        panic(fmt.Sprintf("log level %q is not supported", lvl))
    }
}

I know how to change the log level, but I need a way to influence the logger to change its level at runtime.

-- Jhon D
docker
go
kubernetes
logging

3 Answers

7/22/2019

First off, understand that this should happen at the application level, i.e. it's not something that Kubernetes is supposed to do for you.

That being said, you could have your application check an environment variable's value (you are already doing this) and, depending on what that value is, set the application's log level. In other words, let the application code poll an environment variable to see if it has changed.
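For illustration, a rough sketch of that polling loop might look like this (the watchLogLevel helper and the one-minute interval are assumptions, not something from the question; SetLevel and GetLevel are part of logrus's public API and can be called while the logger is in use):

package logs

import (
    "os"
    "time"

    "github.com/sirupsen/logrus"
)

// watchLogLevel periodically re-reads the desired level and applies it to the logger.
// Start it from main, for example: go watchLogLevel(Logger, time.Minute)
func watchLogLevel(logger *logrus.Logger, interval time.Duration) {
    for range time.Tick(interval) {
        // Note: a process's environment does not change from the outside, so this
        // only picks up values that something inside the process has updated;
        // polling a mounted file instead would observe external changes.
        lvl, err := logrus.ParseLevel(os.Getenv(AppLogLevel))
        if err != nil {
            continue // unset or unknown value: keep the current level
        }
        if logger.GetLevel() != lvl {
            logger.SetLevel(lvl)
        }
    }
}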

You can inject environment variables like Shahaf suggests, but that requires you to exec into the pod, which may not always be possible or good practice.

I would suggest you run kubectl set env rs [REPLICASET_NAME] SOME_ENVIRONMENT_VAR=1.

All of this being said, you need to consider why this is important. Kubernetes is built on the principle that "pods should be treated like cattle, not pets": when a pod is no longer useful, or out of sync, it should be terminated and a new one, representing the code's current state, should be booted up in its stead.

Regardless of how you go about doing what you need to do, you REALLY shouldn't be doing this in production, or even in staging.

Instead let your app's underlying environment variables set a log-level that is appropriate for that environment.

-- Todai
Source: StackOverflow

7/22/2019

You can run kubectl exec -it <pod_name> -- bash to get a shell inside the container and change the environment variable there, e.g. by running export LOG_LEVEL=debug or export LOG_LEVEL=error.

-- Shahaf Shavit
Source: StackOverflow

7/22/2019

As a general Un*x statement, you cannot change an environment variable in a process after it has started. (You can setenv(3) your own environment, and you can specify a new process's environment when you execve(2) it, but once it's started, you can't change it again.)

This restriction carries through to higher levels. If you've docker run a container, the -e option that sets an environment variable is one of the things you have to delete and recreate the container to change. The env: block is one of the many immutable parts of a Kubernetes Pod specification; you also can't change it without deleting and recreating the pod.

If you've deployed the pod via a Deployment (and you really should), you can change the environment variable setting in the Deployment spec (edit the YAML file in source control and kubectl apply -f it, or directly kubectl edit). This will cause Kubernetes to start new pods with the new log value and shut down old ones, in that order, doing a zero-downtime update. Deleting and recreating pods like this is totally normal and happens whenever you want to, for example, change the image inside the deployment to have today's build.
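For reference, the piece of the Deployment spec being edited is the container's env: block. A made-up fragment (the names, image, and value below are placeholders) looks roughly like this; changing value: and running kubectl apply -f on the file triggers the rolling update described above:

spec:
  template:
    spec:
      containers:
        - name: my-app            # placeholder container name
          image: my-app:latest    # placeholder image
          env:
            - name: APP_LOG_LEVEL
              value: "debug"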

If your application is capable of noticing changes to config files it has loaded (and it would have to be specially coded to do that), one other path that could work for you is to mount a ConfigMap into the container; if you change the ConfigMap contents, the files the container sees will change, but it will not restart. I wouldn't go out of my way to write this just to avoid restarting a pod, though.
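If you did go that route, a minimal sketch of the file-watching side might look like this (the mount path and the 30-second interval are assumptions; a real implementation might use a file-notification library instead of polling):

package logs

import (
    "io/ioutil"
    "strings"
    "time"

    "github.com/sirupsen/logrus"
)

// watchLevelFile polls a file (for example a mounted ConfigMap key) and applies its
// contents as the logger's level whenever it changes.
// Example: go watchLevelFile(Logger, "/etc/myapp/log_level", 30*time.Second)
func watchLevelFile(logger *logrus.Logger, path string, interval time.Duration) {
    for range time.Tick(interval) {
        data, err := ioutil.ReadFile(path)
        if err != nil {
            continue // the file can briefly disappear while Kubernetes updates the mount
        }
        lvl, err := logrus.ParseLevel(strings.TrimSpace(string(data)))
        if err != nil {
            continue // ignore malformed contents, keep the current level
        }
        if logger.GetLevel() != lvl {
            logger.SetLevel(lvl)
        }
    }
}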

-- David Maze
Source: StackOverflow