Update
I ran:
kubectl get ingressroute -A
NAMESPACE NAME AGE
example example-ingress 44h
example example-ingress-route 40h
and then I did
kubectl delete ingress example-ingress -n example
ingress.extensions "example-ingress" deleted
and now http://example.com gives a 404, but https://example.com works fine, with a valid certificate and all.
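In case it is useful for debugging, access logging is enabled in my Traefik args (see the deployment further down), so tailing the log should show which router, if any, a plain-HTTP request actually hits. Roughly:
# Tail the Traefik access log...
kubectl logs -n routing deployment/traefik -f
# ...and in another terminal repeat the request that now returns 404
curl -v http://example.com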
I have a cluster running a simple Dockerized PHP app that just displays "hello" on the page.
In the cluster I have installed Traefik and cert-manager via their Helm charts, since I am using cert-manager for Let's Encrypt:
https://hub.helm.sh/charts/traefik/traefik
https://hub.helm.sh/charts/jetstack/cert-manager
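The installs were essentially the default chart installs, something like this (chart versions and custom values omitted):
# Traefik into the "routing" namespace
helm repo add traefik https://helm.traefik.io/traefik
helm install traefik traefik/traefik --namespace routing --create-namespace
# cert-manager with its CRDs into the "cert-manager" namespace
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true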
When I visit my domain over HTTP it works and I can see "hello".
But when I visit my domain over HTTPS it just says "404 page not found".
Error in the traefik pod:
E0916 10:48:39.456348 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to list *v1alpha1.IngressRoute: v1alpha1.IngressRouteList.Items: []v1alpha1.IngressRoute: v1alpha1.IngressRoute.Spec: v1alpha1.IngressRouteSpec.TLS: readObjectStart: expect { or n, but found [, error found in #10 byte of ...|}],"tls":[{"hosts":[|..., bigger context ...|ices":[{"name":"example-app","port":80}]}],"tls":[{"hosts”:[“example.com"],"secretName|...
When I click the HTTPS padlock in the browser and open "More information", it shows:
Verified by: CN=TRAEFIK DEFAULT CERT
DNS Name 31047792e374617b441b6f82cacde627.1dc1fc2f960b83b2f533f2ff411e82bf.traefik.default
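A quick way to double-check which certificate Traefik is actually presenting for the domain (hostname swapped in, as everywhere else in this post):
# Print the subject and issuer of the certificate served on port 443
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer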
For setting up cert-manager I followed most of this guide: https://opensource.com/article/20/3/ssl-letsencrypt-k3s
When I do:
kubectl get issuers -n example
NAME READY AGE
example-issuer-staging True 15h
When I do:
kubectl get certificates -n example
NAME READY SECRET AGE
domain-com True domain-com-tls 15h
When I curl the domain over HTTP and HTTPS, here are the results:
curl -v http://example.com
* Trying domain-ip…
* TCP_NODELAY set
* Connected to example.com (domain-ip) port 80 (#0)
> GET / HTTP/1.1
> Host: example.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=UTF-8
< Date: Tue, 15 Sep 2020 15:41:45 GMT
< Server: nginx
< X-Powered-By: PHP/7.4.9
< Content-Length: 5
<
* Connection #0 to host example.com left intact
hello* Closing connection 0
curl -v https://example.com
* Trying domain-ip...
* TCP_NODELAY set
* Connected to example.com (domain-ip) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
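Skipping certificate verification at least lets the request reach Traefik, so the 404 can be reproduced with curl independently of the untrusted default certificate:
# -k / --insecure skips verification of the (default) certificate
curl -vk https://example.com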
When I do:
kubectl get secret -n example
NAME TYPE DATA AGE
domain-com-tls kubernetes.io/tls 2 19h
When I do:
kubectl get ing -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
example example-ingress <none> example.com 80, 443 13d
When I do:
kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager cert-manager ClusterIP 10.245.95.66 <none> 9402/TCP 16h
cert-manager cert-manager-webhook ClusterIP 10.245.86.7 <none> 443/TCP 16h
default kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 23d
example example-app ClusterIP 10.245.132.184 <none> 80/TCP,443/TCP 15m
kube-system kube-dns ClusterIP 10.245.0.10 <none> 53/UDP,53/TCP,9153/TCP 23d
routing traefik LoadBalancer 10.245.21.52 external-ip 80:31635/TCP,443:31142/TCP 2d1
When I do:
kubectl describe certificates domain-com -n example
Name: domain-com
Namespace: example
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: Certificate
Metadata:
Creation Timestamp: 2020-09-15T17:41:27Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:notAfter:
f:notBefore:
f:renewalTime:
Manager: controller
Operation: Update
Time: 2020-09-15T17:41:27Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:commonName:
f:dnsNames:
f:issuerRef:
.:
f:kind:
f:name:
f:secretName:
Manager: kubectl
Operation: Update
Time: 2020-09-15T17:41:27Z
Resource Version: 2018179
Self Link: /apis/cert-manager.io/v1/namespaces/example/certificates/domain-com
UID: 1ddb2c20-0fa5-414b-af4f-32c4e02cf41f
Spec:
Common Name: example.com
Dns Names:
example.com
Issuer Ref:
Kind: Issuer
Name: example-issuer
Secret Name: domain-com-tls
Status:
Conditions:
Last Transition Time: 2020-09-15T17:41:27Z
Message: Certificate is up to date and has not expired
Reason: Ready
Status: True
Type: Ready
Not After: 2020-12-14T12:11:24Z
Not Before: 2020-09-15T12:11:24Z
Renewal Time: 2020-11-14T12:11:24Z
Events: <none>
When I do:
kubectl describe pods -n example example-app-main-g9tzn
Name: example-app-main-g9tzn
Namespace: example
Priority: 0
Node: cluster-name-3gkmj/10.110.0.5
Start Time: Wed, 16 Sep 2020 11:16:06 +0200
Labels: app=example-app
Annotations: <none>
Status: Running
IP: 10.244.0.75
IPs:
IP: 10.244.0.75
Controlled By: ReplicaSet/example-app-main
Containers:
example-app-container:
Container ID: docker://bede3ad52bc2d54d343bd0c8ec36ad39854b65e97522f9e0153b6d33f18d05bf
Image: richarvey/nginx-php-fpm:1.10.3
Image ID: docker-pullable://richarvey/nginx-php-fpm@sha256:140e92581255ce5e19d144b883560fa891a632fedaf68910ba4b65550d5b12a5
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 16 Sep 2020 11:16:10 +0200
Ready: True
Restart Count: 0
Environment:
SSH_KEY: secret
GIT_REPO: login-details:project-name/source.git
GIT_EMAIL: user@example.com
GIT_NAME: user
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bphcm (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-bphcm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bphcm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21m default-scheduler Successfully assigned example/example-app-main-g9tzn to cluster-3gkmj
Normal Pulling 21m kubelet, cluster-3gkmj Pulling image "richarvey/nginx-php-fpm:1.10.3"
Normal Pulled 21m kubelet, cluster-3gkmj Successfully pulled image "richarvey/nginx-php-fpm:1.10.3"
Normal Created 21m kubelet, cluster-3gkmj Created container example-app-container
Normal Started 21m kubelet, cluster-3gkmj Started container example-app-container
When I do:
kubectl describe deployment traefik -n routing
Name: traefik
Namespace: routing
CreationTimestamp: Sun, 13 Sep 2020 18:14:53 +0200
Labels: app.kubernetes.io/instance=traefik
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=traefik
helm.sh/chart=traefik-9.1.1
Annotations: deployment.kubernetes.io/revision: 1
meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: routing
Selector: app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app.kubernetes.io/instance=traefik
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=traefik
helm.sh/chart=traefik-9.1.1
Service Account: traefik
Containers:
traefik:
Image: traefik:2.2.8
Ports: 9000/TCP, 8000/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
--global.checknewversion
--global.sendanonymoususage
--entryPoints.traefik.address=:9000/tcp
--entryPoints.web.address=:8000/tcp
--entryPoints.websecure.address=:8443/tcp
--api.dashboard=true
--ping=true
--providers.kubernetescrd
--providers.kubernetesingress
--accesslog=true
--accesslog.fields.defaultmode=keep
--accesslog.fields.headers.defaultmode=drop
Liveness: http-get http://:9000/ping delay=10s timeout=2s period=10s #success=1 #failure=3
Readiness: http-get http://:9000/ping delay=10s timeout=2s period=10s #success=1 #failure=1
Environment: <none>
Mounts:
/data from data (rw)
/tmp from tmp (rw)
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: traefik-7bfff8d8f6 (1/1 replicas created)
Events: <none>
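Since --api.dashboard=true is set in the args above, port-forwarding the internal traefik entrypoint and opening the dashboard should show which routers and TLS configurations were actually loaded (assuming the chart's default dashboard IngressRoute is still enabled):
# Forward the internal "traefik" entrypoint (port 9000) to localhost
kubectl port-forward -n routing deployment/traefik 9000:9000
# then open http://localhost:9000/dashboard/ in a browser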
I am trying to figure out what is going wrong, so any help would be great!
Here is my file structure for my php app:
example
- example-ingress-route.yml
- example-app.yml
- example-issuer.yml
- example-service.yml
- example-solver.yml
Content of: example-ingress-route.yml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: example
  name: example-ingress-route
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: example-issuer
    traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
    traefik.frontend.redirect.entryPoint: https
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`example.com`)
      kind: Rule
      services:
        - name: example-app
          namespace: example
          port: 443
  tls:
    hosts:
      - example.com
    options:
      namespace: example
    secretName: domain-com-tls
Content of: example-app.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  namespace: example
  name: 'example-app-main'
  labels:
    app: 'example-app'
    tier: 'frontend'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 'example-app'
  template:
    metadata:
      labels:
        app: 'example-app'
    spec:
      containers:
        - name: example-app-container
          image: richarvey/nginx-php-fpm:1.10.3
          imagePullPolicy: Always
          env:
            - name: SSH_KEY
              value: 'hidden'
            - name: GIT_REPO
              value: 'git@gitlab.example.com:project//source.git'
            - name: GIT_EMAIL
              value: 'hidden'
            - name: GIT_NAME
              value: 'hidden'
          ports:
            - containerPort: 80
Content of: example-issuer.yml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: example-issuer
  namespace: example
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: letsencrypt@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: domain-com-tls
    # Enable the HTTP-01 challenge provider
    solvers:
      # An empty 'selector' means that this solver matches all domains
      - http01:
          ingress:
            class: traefik
Content of: example-service.yml
apiVersion: v1
kind: Service
metadata:
  namespace: example
  name: 'example-app'
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 80
    - protocol: TCP
      name: https
      port: 443
      targetPort: 443
  selector:
    app: 'example-app'
Content of: example-solver.yml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: domain-com
  namespace: example
spec:
  secretName: domain-com-tls
  issuerRef:
    name: example-issuer
    kind: Issuer
  commonName: example.com
  dnsNames:
    - example.com
There are some errors in your YAMLs.
1. In example-ingress-route.yml you have "cert-manager.io/cluster-issuer: example-issuer":
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: example
  name: example-ingress-route
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: example-issuer
but what you have created in example-issuer.yml is a namespaced Issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: example-issuer
  namespace: example
You can change it to a ClusterIssuer in example-issuer.yml so it matches the cluster-issuer annotation; a sketch follows.
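A rough sketch of that change, reusing the ACME settings from your Issuer (double-check the values against your own file; a ClusterIssuer has no namespace, and its account-key secret is stored in the cert-manager namespace):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: example-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: letsencrypt@example.com
    privateKeySecretRef:
      # for a ClusterIssuer this secret lives in the cert-manager namespace
      name: domain-com-tls
    solvers:
      - http01:
          ingress:
            class: traefik
Your Certificate's issuerRef would then also need kind: ClusterIssuer instead of kind: Issuer.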