I'm running Prometheus as a service:
admin1@POC-k8s-master:/tmp$ sudo systemctl status prometheus
[sudo] password for admin1:
● prometheus.service - Prometheus Monitoring
Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-01-09 04:25:33 EST; 1h 55min ago
Main PID: 8812 (prometheus)
Tasks: 12
Memory: 97.8M
CPU: 20.163s
CGroup: /system.slice/prometheus.service
└─8812 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/
Jan 09 04:25:34 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T09:25:34.1310759Z caller=main.go:222 host_details="(Linux 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 POC-
Jan 09 04:25:34 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T09:25:34.1312489Z caller=main.go:223 fd_limits="(soft=1024, hard=4096)"
Jan 09 04:25:34 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T09:25:34.1339119Z caller=main.go:504 msg="Starting TSDB ..."
Jan 09 04:25:34 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T09:25:34.13421Z caller=web.go:382 component=web msg="Start listening for connections" address=0.0.0.0:9090
Jan 09 04:25:35 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T09:25:35.1504977Z caller=main.go:514 msg="TSDB started"
Jan 09 04:25:35 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T09:25:35.1506174Z caller=main.go:588 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Jan 09 04:25:35 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T09:25:35.1509515Z caller=main.go:491 msg="Server is ready to receive web requests."
Jan 09 06:00:01 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T11:00:01.4377765Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1578556800000 maxt=1578564000000
Jan 09 06:00:02 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T11:00:02.1968892Z caller=head.go:348 component=tsdb msg="head GC completed" duration=2.7095ms
Jan 09 06:00:02 POC-k8s-master prometheus[8812]: level=info ts=2020-01-09T11:00:02.4665533Z caller=head.go:357 component=tsdb msg="WAL truncation completed" duration=269.4061ms
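Since the service itself is healthy, a useful next step is to confirm that Prometheus is actually scraping its targets, e.g. via its targets API (`curl http://localhost:9090/api/v1/targets`). The sketch below parses a sample response and lists any target whose health is not "up" — the JSON payload here is illustrative, not output from my setup:

```python
import json

# Illustrative /api/v1/targets response; a real payload comes from
# `curl http://localhost:9090/api/v1/targets`.
sample = json.loads("""
{
  "status": "success",
  "data": {
    "activeTargets": [
      {"labels": {"job": "node-exporter", "instance": "10.0.0.1:9100"},
       "health": "up", "lastError": ""},
      {"labels": {"job": "kube-state-metrics", "instance": "10.0.0.2:8080"},
       "health": "down", "lastError": "connection refused"}
    ]
  }
}
""")

def unhealthy_targets(payload):
    """Return (job, instance, lastError) for each target not reporting 'up'."""
    return [
        (t["labels"]["job"], t["labels"]["instance"], t["lastError"])
        for t in payload["data"]["activeTargets"]
        if t["health"] != "up"
    ]

for job, instance, err in unhealthy_targets(sample):
    print(f"{job} @ {instance}: {err}")
```

If a job shows up here as "down" (or is missing entirely), Grafana panels querying its metrics will show no data even though the data source test passes.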
Grafana is running as a Kubernetes object:
admin1@POC-k8s-master:~$ kubectl get all --all-namespaces | grep grafana
grafana pod/grafana-79465c4ffc-l4bxg 1/1 Running 4 21d
grafana service/grafana ClusterIP 10.97.79.21 <none> 80/TCP 21d
grafana service/grafana-ext NodePort 10.109.58.242 <none> 3000:32767/TCP 21d
grafana deployment.apps/grafana 1/1 1 1 21d
grafana replicaset.apps/grafana-79465c4ffc 1 1 1 21d
Screenshot showing no data points:
Screenshot for node exporter metrics:
Screenshot for Prometheus:
Screenshot for Prometheus data source working in grafana:
Please guide me in troubleshooting this issue!
Update: here are the Grafana pod logs:
admin1@POC-k8s-master:~$ kubectl logs grafana-79465c4ffc-l4bxg -n grafana -f
t=2020-01-02T11:01:13+0000 lvl=info msg="Starting Grafana" logger=server version=5.0.4 commit=7dc36ae compiled=2018-03-28T11:52:41+0000
t=2020-01-02T11:01:13+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
t=2020-01-02T11:01:13+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
t=2020-01-02T11:01:13+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.data=/var/lib/grafana"
t=2020-01-02T11:01:13+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.logs=/var/log/grafana"
t=2020-01-02T11:01:13+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.plugins=/var/lib/grafana/plugins"
t=2020-01-02T11:01:13+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.paths.provisioning=/etc/grafana/provisioning"
t=2020-01-02T11:01:13+0000 lvl=info msg="Config overridden from command line" logger=settings arg="default.log.mode=console"
t=2020-01-02T11:01:13+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_ADMIN_USER=admin"
t=2020-01-02T11:01:13+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_ADMIN_PASSWORD=*********"
t=2020-01-02T11:01:13+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
t=2020-01-02T11:01:13+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana/data
t=2020-01-02T11:01:13+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
t=2020-01-02T11:01:13+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
t=2020-01-02T11:01:13+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
t=2020-01-02T11:01:13+0000 lvl=info msg="App mode production" logger=settings
t=2020-01-02T11:01:13+0000 lvl=info msg="Initializing DB" logger=sqlstore dbtype=sqlite3
t=2020-01-02T11:01:13+0000 lvl=info msg="Starting DB migration" logger=migrator
t=2020-01-02T11:01:13+0000 lvl=info msg="Executing migration" logger=migrator id="copy data account to org"
t=2020-01-02T11:01:13+0000 lvl=info msg="Skipping migration condition not fulfilled" logger=migrator id="copy data account to org"
t=2020-01-02T11:01:13+0000 lvl=info msg="Executing migration" logger=migrator id="copy data account_user to org_user"
t=2020-01-02T11:01:13+0000 lvl=info msg="Skipping migration condition not fulfilled" logger=migrator id="copy data account_user to org_user"
t=2020-01-02T11:01:13+0000 lvl=info msg="Starting plugin search" logger=plugins
t=2020-01-02T11:01:13+0000 lvl=info msg="Initializing Alerting" logger=alerting.engine
t=2020-01-02T11:01:13+0000 lvl=info msg="Initializing CleanUpService" logger=cleanup
t=2020-01-02T11:01:14+0000 lvl=info msg="Initializing Stream Manager"
t=2020-01-02T11:01:14+0000 lvl=info msg="Initializing HTTP Server" logger=http.server address=0.0.0.0:3000 protocol=http subUrl= socket=
Resolved: data is now coming through. The root cause was that kube-state-metrics in the kube-system namespace was not running; after fixing that, the dashboards populated. Thanks to all for helping!
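For anyone hitting the same symptom, the relevant scrape job in /etc/prometheus/prometheus.yml might look roughly like the fragment below. This is a hypothetical static-target sketch, not my actual config — the job name, service DNS name, and port are assumptions and depend on how kube-state-metrics is exposed in your cluster:

```yaml
# Hypothetical scrape job for kube-state-metrics under scrape_configs.
# Adjust the target to your setup (service DNS name, NodePort, or
# kubernetes_sd_configs-based discovery instead of static_configs).
scrape_configs:
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080']
```

If the pod behind that target isn't running, the target shows as down in Prometheus and every Grafana panel built on those metrics reads "no data points", even though the data source test succeeds.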