I followed the instructions at https://cloud.google.com/monitoring/agent/install-agent#linux-install:
$ curl -O "https://repo.stackdriver.com/stack-install.sh"
$ sudo bash stack-install.sh --write-gcm
Unidentifiable or unsupported platform.
Here is the content of /etc/os-release:
$ cat /etc/os-release
BUILD_ID=8820.0.0
NAME="Container-VM Image"
GOOGLE_CRASH_ID=Lakitu
VERSION_ID=55
BUG_REPORT_URL=https://crbug.com/new
PRETTY_NAME="Google Container-VM Image"
VERSION=55
GOOGLE_METRICS_PRODUCT_ID=26
HOME_URL="https://cloud.google.com/compute/docs/containers/vm-image/"
ID=gci
On this image, updating a particular package means updating the entire OS image.
So it seems we must either wait for an image version that ships the Stackdriver agent, or give up on installing it.
Also, this VM image is not my choice: newly created GKE nodes use Container-VM images by default. So for now I'll try to create nodes with a different image via gcloud container node-pools create --image-type, as sketched below.
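For reference, this is roughly what I plan to run; the pool name is just a placeholder, and --image-type=container_vm is my assumption about the name of the Debian-based image type:
$ gcloud container node-pools create debian-pool \
    --cluster=YOUR_CLUSTER_NAME \
    --image-type=container_vm \
    --num-nodes=3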
As far as I know (and what has been confirmed to me by Google), the new Chromium OS image currently does not support the Stackdriver agent. As a workaround I upgraded the node pool back to ‘container-vm’ (which has the Debian image) by using the following command:
$ gcloud container clusters upgrade YOUR_CLUSTER_NAME --image-type=container_vm --node-pool=YOUR_NODE_POOL
Replace the cluster name and set the node pool name to the one that was upgraded to gci earlier (in my case 'default-pool'). The nodes will also be upgraded to the newest version; you can, however, pass an option to deploy a specific version.
You should now be able to install the Stackdriver agent just as you are used to and set up your desired custom metrics.
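If you want to double-check which image the pool is using after the switch, something like this should do (the config.imageType field path is my guess at where the image type appears in the node pool description):
$ gcloud container node-pools describe YOUR_NODE_POOL \
    --cluster=YOUR_CLUSTER_NAME \
    --format="value(config.imageType)"
$ kubectl get nodes -o wide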
The way I got around the agent's incompatibility with the new Chromium-based image was to deploy the agent as a container running in privileged mode (conveniently already built: https://github.com/wikiwi/stackdriver-agent) within a Kubernetes DaemonSet, so it runs on each host. Here's the YAML I ended up using (spaces matter):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: stackdriver-agent
spec:
  template:
    metadata:
      labels:
        app: stackdriver-agent
    spec:
      containers:
      - name: stackdriver-agent
        image: wikiwi/stackdriver-agent
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /mnt/proc
          name: procmnt
        env:
        - name: MONITOR_HOST
          value: "true"
      volumes:
      - name: procmnt
        hostPath:
          path: /proc
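Deploying it is the usual kubectl workflow; the file name below is just what I called it locally:
$ kubectl create -f stackdriver-agent-daemonset.yaml
$ kubectl get pods -l app=stackdriver-agent -o wide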
You can enable the Stackdriver Monitoring agent on Container-Optimized OS VM instances. Just run this command (and then restart the instance) to enable the monitoring agent:
gcloud compute instances add-metadata instance-name --metadata=google-monitoring-enabled=true
Then, on the instance, you can run:
sudo systemctl start stackdriver-logging
sudo systemctl start stackdriver-monitoring
This will spin up some containers with the agent running, and data will show up in your Stackdriver dashboard a few minutes later.
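If you want to verify that the agent is actually running on the node, checking the service and the containers should work (the "stackdriver" name filter is just my guess at what the containers are called):
$ sudo systemctl status stackdriver-monitoring
$ docker ps --filter "name=stackdriver"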
I didn't find this documented anywhere, so I can't tell exactly which images support it, but I tested it on Container-Optimized OS 77-12371.114.0 (stable).