I have a service and an ingress set up on my minikube Kubernetes cluster, which exposes the domain name hello.life.com. Now I need to access this domain from inside another pod, i.e. curl http://hello.life.com should return proper HTML.
My service is as follows:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-svc
  namespace: abc
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    name: bulging-zorse-key
  type: ClusterIP
status:
  loadBalancer: {}
My ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-ingress
  namespace: dev
spec:
  rules:
  - host: hello.life.com
    http:
      paths:
      - backend:
          serviceName: bulging-zorse-key-svc
          servicePort: 80
        path: /
status:
  loadBalancer:
    ingress:
    - {}
Can someone please help me out with what changes I need to make to get this working?
Thanks in advance!!!
I found a good explanation of your problem and the solution in the Custom DNS Entries For Kubernetes article:
Suppose we have a service, foo.default.svc.cluster.local, that is available to outside clients as foo.example.com. That is, when looked up outside the cluster, foo.example.com will resolve to the load balancer VIP - the external IP address for the service. Inside the cluster, it will resolve to the same thing, and so using this name internally will cause traffic to hairpin - travel out of the cluster and then back in via the external IP.
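Before the rewrite is in place, you can confirm this from inside the cluster with a throwaway DNS utility pod: the external name resolves to the external VIP, while the service name resolves to the ClusterIP. A quick sketch, using the same infoblox/dnstools image as the test at the end of this answer (the addresses here are purely illustrative):

$ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
/ # host foo.example.com
foo.example.com has address 203.0.113.10
/ # host foo.default.svc.cluster.local
foo.default.svc.cluster.local has address 10.0.0.72

Here 203.0.113.10 stands in for the external VIP and 10.0.0.72 for the ClusterIP. In your minikube setup, hello.life.com is likely not in any DNS at all, so the lookup simply fails (NXDOMAIN) - which is why the curl from another pod doesn't work.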
The solution is:
Instead, we want foo.example.com to resolve to the internal ClusterIP, avoiding the hairpin. To do this in CoreDNS, we make use of the rewrite plugin. This plugin can modify a query before it is sent down the chain to whatever backend is going to answer it.
To get the behavior we want, we just need to add a rewrite rule mapping foo.example.com to foo.default.svc.cluster.local:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name foo.example.com foo.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-09T15:02:52Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "8309112"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: a2ef5ff1-141f-11e9-9043-42010a9c0003
Note: In your case, you have to put the ingress controller's service name as the destination for the alias, e.g.: rewrite name hello.life.com ingress-service-name.ingress-namespace.svc.cluster.local. Make sure you're using the correct service name and namespace.
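If you're unsure of the ingress controller's service name and namespace, you can list services across all namespaces and look for the nginx ingress controller (on minikube with the ingress addon it commonly sits in kube-system or ingress-nginx, but verify on your cluster):

$ kubectl get svc --all-namespaces | grep -i ingress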
Once we add that to the ConfigMap via kubectl edit configmap coredns -n kube-system or kubectl apply -f patched-coredns-deployment.yaml -n kube-system, we have to wait 10-15 minutes. Recent CoreDNS versions include the reload plugin:
reload - allows automatic reload of a changed Corefile.
This plugin periodically checks if the Corefile has changed by reading it and calculating its MD5 checksum. If the file has changed, it reloads CoreDNS with the new Corefile. This eliminates the need to send a SIGHUP or SIGUSR1 after changing the Corefile.
The reloads are graceful - you should not see any loss of service when the reload happens. Even if the new Corefile has an error, CoreDNS will continue to run the old config and an error message will be printed to the log. But see the Bugs section for failure modes.
In some environments (for example, Kubernetes), there may be many CoreDNS instances that started very near the same time and all share a common Corefile. To prevent these all from reloading at the same time, some jitter is added to the reload check interval. This is jitter from the perspective of multiple CoreDNS instances; each instance still checks on a regular interval, but all of these instances will have their reloads spread out across the jitter duration. This isn't strictly necessary given that the reloads are graceful, and can be disabled by setting the jitter to 0s.
Jitter is re-calculated whenever the Corefile is reloaded.
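You can also confirm that the reload happened by watching the CoreDNS logs for a "Reloading" message (assuming the conventional k8s-app=kube-dns label on the CoreDNS pods in kube-system; adjust the selector if your deployment's labels differ):

$ kubectl logs -n kube-system -l k8s-app=kube-dns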
Running our test pod, we can see this works:
$ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
If you don't see a command prompt, try pressing enter.
/ # host foo
foo.default.svc.cluster.local has address 10.0.0.72
/ # host foo.example.com
foo.example.com has address 10.0.0.72
/ # host bar.example.com
Host bar.example.com not found: 3(NXDOMAIN)
/ #
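Applied to your setup, the equivalent end-to-end check would be to curl the rewritten name from a throwaway pod (a sketch using the curlimages/curl image; any image with curl installed works):

$ kubectl run -it --rm --restart=Never --image=curlimages/curl:latest curltest -- curl -s http://hello.life.com

If the rewrite rule points at the right ingress controller service, this should print the HTML served behind bulging-zorse-key-svc.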