I'm struggling to get Traefik working on Kubernetes with ACME enabled. As suggested, I want to store the certificates on a PersistentVolume, because certificate requests are rate limited and the certificates would otherwise be lost on pod restarts. Below is the full config I use for the stable/traefik Helm chart, installed on Azure AKS.
There is one issue that does not seem to work (or I'm just doing it wrong, of course):
pod has unbound immediate PersistentVolumeClaims
This is the initial error I receive when the pods boot up. The strange thing is that the PersistentVolumeClaim is actually there and ready. When I check the volume itself in the Azure portal, it also shows as mounted to my server:
traefik-acme
Namespace: default
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
Creation Time: 2019-04-16T09:55 UTC
Status: Bound
Volume: pvc-b673da74-602d-11e9-a537-9275388
Access modes: ReadWriteOnce
Storage class: default
Also the storageClass itself is active:
$ kubectl get sc --all-namespaces
NAME PROVISIONER AGE
default (default) kubernetes.io/azure-disk 4d
managed-premium kubernetes.io/azure-disk 4d
When I then wait a little longer, I receive the error below: Unable to mount volumes for pod "traefik-d65fcbc8b-lkzsh_default(b68c8aa3-602d-11e9-a537-92753888c74b)": timeout expired waiting for volumes to attach or mount for pod "default"/"traefik-d65fcbc8b-lkzsh". list of unmounted volumes=[acme]. list of unattached volumes=[config acme default-token-p2lgf]
Here is the full K8s event trace:
pod has unbound immediate PersistentVolumeClaims
default-scheduler
2019-04-16T09:55 UTC
Successfully assigned default/traefik-d65fcbc8b-lkzsh to aks-default-22301976-0
default-scheduler
2019-04-16T09:55 UTC
Unable to mount volumes for pod "traefik-d65fcbc8b-lkzsh_default(b68c8aa3-602d-11e9-a537-92753888c74b)": timeout expired waiting for volumes to attach or mount for pod "default"/"traefik-d65fcbc8b-lkzsh". list of unmounted volumes=[acme]. list of unattached volumes=[config acme default-token-p2lgf]
kubelet aks-default-22301976-0
2019-04-16T09:57 UTC
AttachVolume.Attach succeeded for volume "pvc-b673da74-602d-11e9-a537-92753888c74b"
attachdetach-controller
2019-04-16T09:58 UTC
Container image "traefik:1.7.9" already present on machine
kubelet aks-default-22301976-0
2019-04-16T10:01 UTC
Created container
kubelet aks-default-22301976-0
2019-04-16T10:00 UTC
Started container
kubelet aks-default-22301976-0
2019-04-16T10:00 UTC
Back-off restarting failed container
kubelet aks-default-22301976-0
2019-04-16T10:02 UTC
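A trace like the one above can be gathered with kubectl; the pod name is the one from this deployment, yours will differ:

```shell
# Pod-level events, then the namespace-wide event stream sorted by time
kubectl describe pod traefik-d65fcbc8b-lkzsh --namespace default
kubectl get events --namespace default --sort-by=.lastTimestamp
```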
Installing the Traefik Helm chart is done with:
helm install -f values.yaml stable/traefik --name traefik
Below is the full values.yaml used to install the chart:
## Default values for Traefik
image: traefik
imageTag: 1.7.9
testFramework:
image: "dduportal/bats"
tag: "0.4.0"
## can switch the service type to NodePort if required
serviceType: LoadBalancer
# loadBalancerIP: ""
# loadBalancerSourceRanges: []
whiteListSourceRange: []
externalTrafficPolicy: Cluster
replicas: 1
# startupArguments:
# - "--ping"
# - "--ping.entrypoint=http"
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 2
# priorityClassName: ""
# rootCAs: []
resources: {}
debug:
enabled: false
deploymentStrategy: {}
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
# type: RollingUpdate
securityContext: {}
env: {}
nodeSelector: {}
# key: value
affinity: {}
# key: value
tolerations: []
# - key: "key"
# operator: "Equal|Exists"
# value: "value"
# effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
## Kubernetes ingress filters
# kubernetes:
# endpoint:
# namespaces:
# - default
# labelSelector:
# ingressClass:
# ingressEndpoint:
# hostname: "localhost"
# ip: "127.0.0.1"
# publishedService: "namespace/servicename"
# useDefaultPublishedService: false
proxyProtocol:
enabled: false
# trustedIPs is required when enabled
trustedIPs: []
# - 10.0.0.0/8
forwardedHeaders:
enabled: false
# trustedIPs is required when enabled
trustedIPs: []
# - 10.0.0.0/8
## Add arbitrary ConfigMaps to deployment
## Will be mounted to /configs/, i.e. myconfig.json would
## be mounted to /configs/myconfig.json.
configFiles: {}
# myconfig.json: |
# filecontents...
## Add arbitrary Secrets to deployment
## Will be mounted to /secrets/, i.e. file.name would
## be mounted to /secrets/mysecret.txt.
## The contents will be base64 encoded when added
secretFiles: {}
# mysecret.txt: |
# filecontents...
ssl:
enabled: false
enforced: false
permanentRedirect: false
upstream: false
insecureSkipVerify: false
generateTLS: false
# defaultCN: "example.com"
# or *.example.com
defaultSANList: []
# - example.com
# - test1.example.com
defaultIPList: []
# - 1.2.3.4
# cipherSuites: []
# https://docs.traefik.io/configuration/entrypoints/#specify-minimum-tls-version
# tlsMinVersion: VersionTLS12
defaultCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVtekNDQTRPZ0F3SUJBZ0lKQUpBR1FsTW1DMGt5TUEwR0NTcUdTSWIzRFFFQkJRVUFNSUdQTVFzd0NRWUQKVlFRR0V3SlZVekVSTUE4R0ExVUVDQk1JUTI5c2IzSmhaRzh4RURBT0JnTlZCQWNUQjBKdmRXeGtaWEl4RkRBUwpCZ05WQkFvVEMwVjRZVzF3YkdWRGIzSndNUXN3Q1FZRFZRUUxFd0pKVkRFV01CUUdBMVVFQXhRTktpNWxlR0Z0CmNHeGxMbU52YlRFZ01CNEdDU3FHU0liM0RRRUpBUllSWVdSdGFXNUFaWGhoYlhCc1pTNWpiMjB3SGhjTk1UWXgKTURJME1qRXdPVFV5V2hjTk1UY3hNREkwTWpFd09UVXlXakNCanpFTE1Ba0dBMVVFQmhNQ1ZWTXhFVEFQQmdOVgpCQWdUQ0VOdmJHOXlZV1J2TVJBd0RnWURWUVFIRXdkQ2IzVnNaR1Z5TVJRd0VnWURWUVFLRXd0RmVHRnRjR3hsClEyOXljREVMTUFrR0ExVUVDeE1DU1ZReEZqQVVCZ05WQkFNVURTb3VaWGhoYlhCc1pTNWpiMjB4SURBZUJna3EKaGtpRzl3MEJDUUVXRVdGa2JXbHVRR1Y0WVcxd2JHVXVZMjl0TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQwpBUThBTUlJQkNnS0NBUUVBdHVKOW13dzlCYXA2SDROdUhYTFB6d1NVZFppNGJyYTFkN1ZiRUJaWWZDSStZNjRDCjJ1dThwdTNhVTVzYXVNYkQ5N2pRYW95VzZHOThPUHJlV284b3lmbmRJY3RFcmxueGpxelUyVVRWN3FEVHk0bkEKNU9aZW9SZUxmZXFSeGxsSjE0VmlhNVFkZ3l3R0xoRTlqZy9jN2U0WUp6bmg5S1dZMnFjVnhEdUdEM2llaHNEbgphTnpWNFdGOWNJZm1zOHp3UHZPTk5MZnNBbXc3dUhUKzNiSzEzSUloeDI3ZmV2cXVWcENzNDFQNnBzdStWTG4yCjVIRHk0MXRoQkN3T0wrTithbGJ0ZktTcXM3TEFzM25RTjFsdHpITHZ5MGE1RGhkakpUd2tQclQrVXhwb0tCOUgKNFpZazErRUR0N09QbGh5bzM3NDFRaE4vSkNZK2RKbkFMQnNValFJREFRQUJvNEgzTUlIME1CMEdBMVVkRGdRVwpCQlJwZVc1dFhMdHh3TXJvQXM5d2RNbTUzVVVJTERDQnhBWURWUjBqQklHOE1JRzVnQlJwZVc1dFhMdHh3TXJvCkFzOXdkTW01M1VVSUxLR0JsYVNCa2pDQmp6RUxNQWtHQTFVRUJoTUNWVk14RVRBUEJnTlZCQWdUQ0VOdmJHOXkKWVdSdk1SQXdEZ1lEVlFRSEV3ZENiM1ZzWkdWeU1SUXdFZ1lEVlFRS0V3dEZlR0Z0Y0d4bFEyOXljREVMTUFrRwpBMVVFQ3hNQ1NWUXhGakFVQmdOVkJBTVVEU291WlhoaGJYQnNaUzVqYjIweElEQWVCZ2txaGtpRzl3MEJDUUVXCkVXRmtiV2x1UUdWNFlXMXdiR1V1WTI5dGdna0FrQVpDVXlZTFNUSXdEQVlEVlIwVEJBVXdBd0VCL3pBTkJna3EKaGtpRzl3MEJBUVVGQUFPQ0FRRUFjR1hNZms4TlpzQit0OUtCemwxRmw2eUlqRWtqSE8wUFZVbEVjU0QyQjRiNwpQeG5NT2pkbWdQcmF1SGI5dW5YRWFMN3p5QXFhRDZ0YlhXVTZSeENBbWdMYWpWSk5aSE93NDVOMGhyRGtXZ0I4CkV2WnRRNTZhbW13QzFxSWhBaUE2MzkwRDNDc2V4N2dMNm5KbzdrYnIxWVdVRzN6SXZveGR6OFlEclpOZVdLTEQKcFJ2V2VuMGxNYnBqSVJQNFhac25DNDVDOWdWWGRoM0x
SZTErd3lRcTZoOVFQaWxveG1ENk5wRTlpbVRPbjJBNQovYkozVktJekFNdWRlVTZrcHlZbEpCemRHMXVhSFRqUU9Xb3NHaXdlQ0tWVVhGNlV0aXNWZGRyeFF0aDZFTnlXCnZJRnFhWng4NCtEbFNDYzkzeWZrL0dsQnQrU0tHNDZ6RUhNQjlocVBiQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
defaultKey: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdHVKOW13dzlCYXA2SDROdUhYTFB6d1NVZFppNGJyYTFkN1ZiRUJaWWZDSStZNjRDCjJ1dThwdTNhVTVzYXVNYkQ5N2pRYW95VzZHOThPUHJlV284b3lmbmRJY3RFcmxueGpxelUyVVRWN3FEVHk0bkEKNU9aZW9SZUxmZXFSeGxsSjE0VmlhNVFkZ3l3R0xoRTlqZy9jN2U0WUp6bmg5S1dZMnFjVnhEdUdEM2llaHNEbgphTnpWNFdGOWNJZm1zOHp3UHZPTk5MZnNBbXc3dUhUKzNiSzEzSUloeDI3ZmV2cXVWcENzNDFQNnBzdStWTG4yCjVIRHk0MXRoQkN3T0wrTithbGJ0ZktTcXM3TEFzM25RTjFsdHpITHZ5MGE1RGhkakpUd2tQclQrVXhwb0tCOUgKNFpZazErRUR0N09QbGh5bzM3NDFRaE4vSkNZK2RKbkFMQnNValFJREFRQUJBb0lCQUhrTHhka0dxNmtCWWQxVAp6MkU4YWFENnhneGpyY2JSdGFCcTc3L2hHbVhuQUdaWGVWcE81MG1SYW8wbHZ2VUgwaE0zUnZNTzVKOHBrdzNmCnRhWTQxT1dDTk1PMlYxb1MvQmZUK3Zsblh6V1hTemVQa0pXd2lIZVZMdVdEaVVMQVBHaWl4emF2RFMyUnlQRmEKeGVRdVNhdE5pTDBGeWJGMG5Zd3pST3ZoL2VSa2NKVnJRZlZudU1melFkOGgyMzZlb1UxU3B6UnhSNklubCs5UApNc1R2Wm5OQmY5d0FWcFo5c1NMMnB1V1g3SGNSMlVnem5oMDNZWUZJdGtDZndtbitEbEdva09YWHBVM282aWY5ClRIenBleHdubVJWSmFnRG85bTlQd2t4QXowOW80cXExdHJoU1g1U2p1K0xyNFJvOHg5bytXdUF1VnVwb0lHd0wKMWVseERFRUNnWUVBNzVaWGp1enNJR09PMkY5TStyYVFQcXMrRHZ2REpzQ3gyZnRudk1WWVJKcVliaGt6YnpsVQowSHBCVnk3NmE3WmF6Umxhd3RGZ3ljMlpyQThpM0F3K3J6d1pQclNJeWNieC9nUVduRzZlbFF1Y0FFVWdXODRNCkdSbXhKUGlmOGRQNUxsZXdRalFjUFJwZVoxMzlYODJreGRSSEdma1pscHlXQnFLajBTWExRSEVDZ1lFQXcybkEKbUVXdWQzZFJvam5zbnFOYjBlYXdFUFQrbzBjZ2RyaENQOTZQK1pEekNhcURUblZKV21PeWVxRlk1eVdSSEZOLwpzbEhXU2lTRUFjRXRYZys5aGlMc0RXdHVPdzhUZzYyN2VrOEh1UUtMb2tWWEFUWG1NZG9xOWRyQW9INU5hV2lECmRSY3dEU2EvamhIN3RZV1hKZDA4VkpUNlJJdU8vMVZpbDBtbEk5MENnWUVBb2lsNkhnMFNUV0hWWDNJeG9raEwKSFgrK1ExbjRYcFJ5VEg0eldydWY0TjlhYUxxNTY0QThmZGNodnFiWGJHeEN6U3RxR1E2cW1peUU1TVpoNjlxRgoyd21zZEpxeE14RnEzV2xhL0lxSzM0cTZEaHk3cUNld1hKVGRKNDc0Z3kvY0twZkRmeXZTS1RGZDBFejNvQTZLCmhqUUY0L2lNYnpxUStQREFQR0YrVHFFQ2dZQmQ1YnZncjJMMURzV1FJU3M4MHh3MDBSZDdIbTRaQVAxdGJuNk8KK0IvUWVNRC92UXBaTWV4c1hZbU9lV2Noc3FCMnJ2eW1MOEs3WDY1NnRWdGFYay9nVzNsM3ZVNTdYSFF4Q3RNUwpJMVYvcGVSNHRiN24yd0ZncFFlTm1XNkQ4QXk4Z0xiaUZhRkdRSDg5QWhFa0dTd1d5cWJKc2NoTUZZOUJ5OEtUCkZaVWZsUUtCZ0V3VzJkVUpOZEJMeXNycDhOTE1V
bGt1ZnJxbllpUTNTQUhoNFZzWkg1TXU0MW55Yi95NUUyMW4KMk55d3ltWGRlb3VJcFZjcUlVTXl0L3FKRmhIcFJNeVEyWktPR0QyWG5YaENNVlRlL0FQNDJod294Nm02QkZpQgpvemZFa2wwak5uZmREcjZrL1p2MlQ1TnFzaWxaRXJBQlZGOTBKazdtUFBIa0Q2R1ZMUUJ4Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
# Basic auth to protect all the routes. Can use htpasswd to generate passwords
# > htpasswd -n -b testuser testpass
# > testuser:$apr1$JXRA7j2s$LpVns9vsme8FHN0r.aSt11
auth: {}
# basic:
# testuser: $apr1$JXRA7j2s$LpVns9vsme8FHN0r.aSt11
kvprovider:
## If you want to run Traefik in HA mode, you will need to setup a KV Provider. Therefore you can choose one of
## * etcd
## * consul
## * boltdb
## * zookeeper
##
## ref: https://docs.traefik.io/user-guide/cluster/
## storeAcme has to be enabled to support HA Support using acme, but at least one kvprovider is needed
storeAcme: false
importAcme: false
# etcd:
# endpoint: etcd-service:2379
# useAPIV3: false
# watch: true
# prefix: traefik
## Override default configuration template.
## For advanced users :)
##
## Optional
# filename: consul.tmpl
# username: foo
# password: bar
# tls:
# ca: "/etc/ssl/ca.crt"
# cert: "/etc/ssl/consul.crt"
# key: "/etc/ssl/consul.key"
# insecureSkipVerify: true
#
# consul:
# endpoint: consul-service:8500
# watch: true
# prefix: traefik
## Override default configuration template.
## For advanced users :)
##
## Optional
# filename: consul.tmpl
# username: foo
# password: bar
# tls:
# ca: "/etc/ssl/ca.crt"
# cert: "/etc/ssl/consul.crt"
# key: "/etc/ssl/consul.key"
# insecureSkipVerify: true
## only relevant for etcd
acme:
enabled: true
email: me@gmail.com
onHostRule: true
staging: true
logging: true
# Configure a Let's Encrypt certificate to be managed by default.
# This is the only way to request wildcard certificates (works only with dns challenge).
domains:
enabled: true
# List of sets of main and (optional) SANs to generate for
# for wildcard certificates see https://docs.traefik.io/configuration/acme/#wildcard-domains
domainsList:
- main: "*.k8s-test.hardstyletop40.com"
# - sans:
# - "k8s-test.hardstyletop40.com"
# - main: "*.example2.com"
# - sans:
# - "test1.example2.com"
# - "test2.example2.com"
## ACME challenge type: "tls-sni-01", "tls-alpn-01", "http-01" or "dns-01"
## Note the chart's default of tls-sni-01 has been DEPRECATED and (except in
## certain circumstances) DISABLED by Let's Encrypt. It remains as a default
## value in this chart to preserve legacy behavior and avoid a breaking
## change. Users of this chart should strongly consider making the switch to
## the recommended "tls-alpn-01" (available since v1.7), dns-01 or http-01
## (available since v1.5) challenge.
challengeType: tls-alpn-01
## Configure dnsProvider to perform domain verification using dns challenge
## Applicable only if using the dns-01 challenge type
delayBeforeCheck: 0
resolvers: []
# - 1.1.1.1:53
# - 8.8.8.8:53
dnsProvider:
name: nil
auroradns:
AURORA_USER_ID: ""
AURORA_KEY: ""
AURORA_ENDPOINT: ""
azure:
AZURE_CLIENT_ID: ""
AZURE_CLIENT_SECRET: ""
AZURE_SUBSCRIPTION_ID: ""
AZURE_TENANT_ID: ""
AZURE_RESOURCE_GROUP: ""
cloudflare:
CLOUDFLARE_EMAIL: ""
CLOUDFLARE_API_KEY: ""
digitalocean:
DO_AUTH_TOKEN: ""
dnsimple:
DNSIMPLE_OAUTH_TOKEN: ""
DNSIMPLE_BASE_URL: ""
dnsmadeeasy:
DNSMADEEASY_API_KEY: ""
DNSMADEEASY_API_SECRET: ""
DNSMADEEASY_SANDBOX: ""
dnspod:
DNSPOD_API_KEY: ""
dyn:
DYN_CUSTOMER_NAME: ""
DYN_USER_NAME: ""
DYN_PASSWORD: ""
exoscale:
EXOSCALE_API_KEY: ""
EXOSCALE_API_SECRET: ""
EXOSCALE_ENDPOINT: ""
gandi:
GANDI_API_KEY: ""
godaddy:
GODADDY_API_KEY: ""
GODADDY_API_SECRET: ""
gcloud:
GCE_PROJECT: ""
GCE_SERVICE_ACCOUNT_FILE: ""
linode:
LINODE_API_KEY: ""
namecheap:
NAMECHEAP_API_USER: ""
NAMECHEAP_API_KEY: ""
ns1:
NS1_API_KEY: ""
otc:
OTC_DOMAIN_NAME: ""
OTC_USER_NAME: ""
OTC_PASSWORD: ""
OTC_PROJECT_NAME: ""
OTC_IDENTITY_ENDPOINT: ""
ovh:
OVH_ENDPOINT: ""
OVH_APPLICATION_KEY: ""
OVH_APPLICATION_SECRET: ""
OVH_CONSUMER_KEY: ""
pdns:
PDNS_API_URL: ""
rackspace:
RACKSPACE_USER: ""
RACKSPACE_API_KEY: ""
rfc2136:
RFC2136_NAMESERVER: ""
RFC2136_TSIG_ALGORITHM: ""
RFC2136_TSIG_KEY: ""
RFC2136_TSIG_SECRET: ""
RFC2136_TIMEOUT: ""
route53:
AWS_REGION: ""
AWS_ACCESS_KEY_ID: ""
AWS_SECRET_ACCESS_KEY: ""
vultr:
VULTR_API_KEY: ""
## Save ACME certs to a persistent volume.
## WARNING: If you do not do this and have not configured
## a kvprovider, you will re-request certs every time a pod (re-)starts
## and you WILL be rate limited!
persistence:
enabled: true
annotations: {}
## acme data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "default"
accessMode: ReadWriteOnce
size: 1Gi
## A manually managed Persistent Volume Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
##
# existingClaim:
dashboard:
enabled: true
domain: traefik.k8s-test.hardstyletop40.com
# serviceType: ClusterIP
service: {}
# annotations:
# key: value
ingress: {}
# annotations:
# key: value
# labels:
# key: value
# tls:
# - hosts:
# - traefik.example.com
# secretName: traefik-default-cert
auth: {}
# basic:
# username: password
statistics: {}
## Number of recent errors to show in the ‘Health’ tab
# recentErrors:
service:
# annotations:
# key: value
# labels:
# key: value
## Further config for service of type NodePort
## Default config with empty string "" will assign a dynamic
## nodePort to http and https ports
nodePorts:
http: ""
https: ""
## If static nodePort configuration is required it can be enabled as below
## Configure ports in allowable range (eg. 30000 - 32767 on minikube)
# nodePorts:
# http: 30080
# https: 30443
gzip:
enabled: true
traefikLogFormat: json
accessLogs:
enabled: false
## Path to the access logs file. If not provided, Traefik defaults it to stdout.
# filePath: ""
format: common # choices are: common, json
## for JSON logging, finer-grained control over what is logged. Fields can be
## retained or dropped, and request headers can be retained, dropped or redacted
fields:
# choices are keep, drop
defaultMode: keep
names: {}
# ClientUsername: drop
headers:
# choices are keep, drop, redact
defaultMode: keep
names: {}
# Authorization: redact
rbac:
enabled: false
## Enable the /metrics endpoint, for now only supports prometheus
## set to true to enable metric collection by prometheus
metrics:
prometheus:
enabled: false
## If true, prevents exposing port 8080 on the main Traefik service, reserving
## it to the dashboard service only
restrictAccess: false
# buckets: [0.1,0.3,1.2,5]
datadog:
enabled: false
# address: localhost:8125
# pushinterval: 10s
statsd:
enabled: false
# address: localhost:8125
# pushinterval: 10s
deployment:
# labels to add to the pod container metadata
# podLabels:
# key: value
# podAnnotations:
# key: value
hostPort:
httpEnabled: false
httpsEnabled: false
dashboardEnabled: false
# httpPort: 80
# httpsPort: 443
# dashboardPort: 8080
sendAnonymousUsage: false
tracing:
enabled: false
serviceName: traefik
# backend: choices are jaeger, zipkin, datadog
# jaeger:
# localAgentHostPort: "127.0.0.1:6831"
# samplingServerURL: http://localhost:5778/sampling
# samplingType: const
# samplingParam: 1.0
# zipkin:
# httpEndpoint: http://localhost:9411/api/v1/spans
# debug: false
# sameSpan: false
# id128bit: true
# datadog:
# localAgentHostPort: "127.0.0.1:8126"
# debug: false
# globalTag: ""
## Create HorizontalPodAutoscaler object.
##
# autoscaling:
# minReplicas: 1
# maxReplicas: 10
# metrics:
# - type: Resource
# resource:
# name: cpu
# targetAverageUtilization: 60
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 60
## Timeouts
##
# timeouts:
# ## responding are timeouts for incoming requests to the Traefik instance
# responding:
# readTimeout: 0s
# writeTimeout: 0s
# idleTimeout: 180s
# ## forwarding are timeouts for requests forwarded to the backend servers
# forwarding:
# dialTimeout: 30s
# responseHeaderTimeout: 0s
The main issue is that attaching external Azure resources is slow, so the initial attempts fail and are retried. The pod therefore logs many errors saying it cannot mount the volume, since the volume is created dynamically. Thanks to the retries, it recovers after a few minutes.
In fact, the actual container crash was caused by an issue with ACME and Traefik itself, not directly by the volumes.
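To confirm it is Traefik/ACME crashing rather than the mount itself, check the container logs once the volume has attached (pod name as in the event trace; yours will differ):

```shell
# Current container logs, plus the logs of the previously crashed container
kubectl logs traefik-d65fcbc8b-lkzsh --namespace default
kubectl logs traefik-d65fcbc8b-lkzsh --namespace default --previous
```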
For your issue, it seems you misunderstand persistent volume claims. When you run the command:
kubectl get sc --all-namespaces
it only shows the storage classes, not the persistent volume claims. A storage class defines how a unit of storage is dynamically provisioned as a persistent volume. You need to create the persistent volume claims you need yourself, like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
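Apply the manifest and wait for the claim to bind (the file name here is just an assumption):

```shell
kubectl apply -f azure-managed-disk-pvc.yaml
# Watch until the STATUS column shows Bound
kubectl get pvc azure-managed-disk --watch
```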
You can then list the persistent volume claims with the command below:
kubectl get pvc --all-namespaces
This actually shows the persistent volume claims that you created. Take a look at Dynamically create and use a persistent volume with Azure disks in Azure Kubernetes Service (AKS), or at Use the special disk that you create.
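If you create the claim manually like this, you can point the chart at it instead of letting it provision a new one; a sketch of the relevant values.yaml snippet, with the claim name taken from the manifest above:

```yaml
acme:
  persistence:
    enabled: true
    # Use the manually created claim instead of dynamic provisioning
    existingClaim: azure-managed-disk
```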
Update
Also, I get the same error as you, but once the pod reaches the running state, I checked inside the pod and found all volumes mounted correctly. So I guess the error appears because the pod is not yet in the running state. Once the pod is running, the volumes are mounted as expected.
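One way to verify this from inside the pod once it is Running; /acme is the chart's default mount path for the ACME volume, and the pod name is an example:

```shell
# Confirm the ACME volume is mounted and inspect its contents
kubectl exec traefik-d65fcbc8b-lkzsh --namespace default -- mount | grep /acme
kubectl exec traefik-d65fcbc8b-lkzsh --namespace default -- ls -l /acme
```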