I am following this lesson, but I am not able to understand or troubleshoot why the deployment status is FAILED. https://linuxacademy.com/cp/courses/lesson/course/2205/lesson/2/module/218
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > /tmp/get_helm.sh
chmod 700 /tmp/get_helm.sh
DESIRED_VERSION=v2.8.2 /tmp/get_helm.sh
helm init --wait
kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
helm ls
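Before moving on, it may be worth confirming that Tiller actually came up and that the client and server versions match (a minimal sanity check, assuming the default kube-system install that helm init creates):
kubectl get deployment tiller-deploy -n kube-system
helm version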
cd ~/
git clone https://github.com/kubernetes/charts
cd charts
git checkout efdcffe0b6973111ec6e5e83136ea74cdbe6527d
cd ../
vi prometheus-values.yml
prometheus-values.yml:
alertmanager:
  persistentVolume:
    enabled: false
server:
  persistentVolume:
    enabled: false
Then run:
helm install -f prometheus-values.yml charts/stable/prometheus --name prometheus --namespace prometheus
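Once the install returns, the release and the objects it created can be checked with something like the following (the resource names are whatever the stable/prometheus chart generates for a release named prometheus):
helm status prometheus
kubectl get deploy,pods,svc -n prometheus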
vi grafana-values.yml
grafana-values.yml:
adminPassword: password
Then run:
helm install -f grafana-values.yml charts/stable/grafana/ --name grafana --namespace grafana
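Before wiring up the external service in the next step, it can help to confirm which labels the chart put on the Grafana pod, since the NodePort selector below has to match them:
kubectl get pods -n grafana --show-labels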
vi grafana-ext.yml
grafana-ext.yml:
kind: Service
apiVersion: v1
metadata:
  namespace: grafana
  name: grafana-ext
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 3000
    nodePort: 8080
Then run:
kubectl apply -f grafana-ext.yml
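Note that nodePort: 8080 sits outside the default NodePort range of 30000-32767, so it only works if the cluster's service-node-port-range has been widened (presumably the case on the course servers). To check that the service exists and its selector actually matched the Grafana pod, something like:
kubectl get svc grafana-ext -n grafana
kubectl get endpoints grafana-ext -n grafana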
You can check the status of the Prometheus and Grafana pods with these commands:
kubectl get pods -n prometheus
kubectl get pods -n grafana
When setting up your data source in Grafana, use this URL:
http://prometheus-server.prometheus.svc.cluster.local
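That URL is the in-cluster DNS name of the prometheus-server service, so it only resolves from inside the cluster. A rough way to test it from a throwaway pod (the curl image here is just an example):
kubectl run curl-test -n grafana --rm -it --restart=Never --image=curlimages/curl -- curl -sI http://prometheus-server.prometheus.svc.cluster.local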
I am having the same issue in two different environments.
On the k8s cluster running in AWS, the release status is FAILED:
$ kubectl describe deploy prometheus -n prometheus
Error from server (NotFound): deployments.extensions "prometheus" not found
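For what it's worth, the stable/prometheus chart names its Deployments after the release (prometheus-server, prometheus-kube-state-metrics, and so on), so listing everything in the namespace shows whether anything was actually created:
kubectl get deploy,pods -n prometheus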
$ helm ls --all
NAME        REVISION  UPDATED                   STATUS    CHART             NAMESPACE
grafana     1         Tue Dec 17 12:26:32 2019  DEPLOYED  grafana-1.8.0     grafana
prometheus  1         Wed Dec 18 10:24:58 2019  FAILED    prometheus-9.5.4  prometheus
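For a FAILED Helm 2 release, the revision history usually records why it failed, and the release name has to be purged before the same install can be retried:
helm history prometheus
helm delete --purge prometheus
helm install -f prometheus-values.yml charts/stable/prometheus --name prometheus --namespace prometheus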
On the k8s cluster running on Hyper-V, even helm init fails:
admin1@POC-k8s-master:~/poc-cog$ helm init --wait
$HELM_HOME has been configured at /home/admin1/.helm.
Error: error installing: the server could not find the requested resource
admin1@POC-k8s-master:~/.helm$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
poc-k8s-master Ready master 24d v1.16.3 192.168.137.2 <none> Ubuntu 16.04.6 LTS 4.4.0-62-generic docker://19.3.5
poc-k8s-node1 Ready <none> 24d v1.16.3 192.168.137.3 <none> Ubuntu 16.04.6 LTS 4.4.0-62-generic docker://18.6.2
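If I am reading this right, the Hyper-V failure is a Helm version problem rather than a chart problem: Kubernetes v1.16 removed the extensions/v1beta1 Deployment API that older Helm v2 clients (such as the v2.8.2 pinned above) use for the Tiller Deployment, which produces exactly this "the server could not find the requested resource" error from helm init. Upgrading the Helm v2 client to a late-2019 release (v2.16+) should avoid it; the widely shared workaround for older clients is to patch the generated manifest, roughly as below (assuming the installed client supports the --override and --output flags):
helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -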
admin@ip-172-20-49-150:~/dev-migration/stage$ helm install stable/prometheus
Error: release loping-owl failed: clusterroles.rbac.authorization.k8s.io "loping-owl-prometheus-kube-state-metrics" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["resourcequotas"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["resourcequotas"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["replicationcontrollers"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["replicationcontrollers"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["limitranges"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["limitranges"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumeclaims"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["persistentvolumes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["persistentvolumes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["daemonsets"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["daemonsets"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["replicasets"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["replicasets"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["daemonsets"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["daemonsets"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["daemonsets"], APIGroups:["apps"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["apps"], Verbs:["watch"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["get"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["list"]} PolicyRule{Resources:["statefulsets"], APIGroups:["apps"], Verbs:["watch"]} PolicyRule{Resources:["cronjobs"], APIGroups:["batch"], Verbs:["list"]} PolicyRule{Resources:["cronjobs"], APIGroups:["batch"], Verbs:["watch"]} PolicyRule{Resources:["jobs"], APIGroups:["batch"], Verbs:["list"]} 
PolicyRule{Resources:["jobs"], APIGroups:["batch"], Verbs:["watch"]} PolicyRule{Resources:["horizontalpodautoscalers"], APIGroups:["autoscaling"], Verbs:["list"]} PolicyRule{Resources:["horizontalpodautoscalers"], APIGroups:["autoscaling"], Verbs:["watch"]} PolicyRule{Resources:["poddisruptionbudgets"], APIGroups:["policy"], Verbs:["list"]} PolicyRule{Resources:["poddisruptionbudgets"], APIGroups:["policy"], Verbs:["watch"]} PolicyRule{Resources:["certificatesigningrequests"], APIGroups:["certificates.k8s.io"], Verbs:["list"]} PolicyRule{Resources:["certificatesigningrequests"], APIGroups:["certificates.k8s.io"], Verbs:["watch"]}] user=&{system:serviceaccount:kube-system:tiller b474eab9-b753-11e9-83a0-06e8a114eea2 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found, clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]]
admin@ip-172-20-49-150:~/dev-migration/stage$ kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "add-on-cluster-admin" already exists
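The forbidden error above says Tiller is running as system:serviceaccount:kube-system:tiller, while the existing add-on-cluster-admin binding grants cluster-admin to kube-system:default, so it does not cover that account. The ruleResolutionErrors also complain that the cluster-admin ClusterRole itself was not found, which seems worth checking. Assuming that reading is right, something like this should help (the binding name is arbitrary):
kubectl get clusterrole cluster-admin
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller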
admin@ip-172-20-49-150:~/dev-migration/stage$ helm inspect prometheus
Error: failed to download "prometheus"
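helm inspect takes a chart reference rather than a release name, so a bare "prometheus" has nothing to resolve against. Pointing it at the repo chart or the local checkout should work, e.g.:
helm inspect stable/prometheus
helm inspect charts/stable/prometheus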