Connect to private ip address - graphql - kubernetes

9/26/2018

How do I connect to a GraphQL API that is on a private network and only accessible through a private IP address? My frontend server and the API are on the same VNET.

import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { createUploadLink } from 'apollo-upload-client'

const uploadLink = createUploadLink({
    // private cluster IP, only reachable from inside the VNET/cluster
    uri: 'http://10.0.0.10:3000/api'
})

const client = new ApolloClient({
    link: uploadLink,
    cache: new InMemoryCache()
})
export default client

Both applications run in the same Kubernetes cluster, in different pods. Private services are accessible within the cluster, and when I exec into the frontend pod I can reach the GraphQL endpoint via the private IP address.

But in the browser it doesn't connect, and I get this error: ERR_CONNECTION_REFUSED

frontend (public ip) --> graphql (private ip)

-- Ronak Patel
graphql
kubernetes
reactjs

2 Answers

9/26/2018

You seem to answer your own question: that IP address is private, so the browser, which runs outside the cluster, cannot reach it.
You'll want to create a Service definition in order to expose the API publicly.

-- samhain1138
Source: StackOverflow

9/27/2018

The three main methods for exposing an internal Kubernetes service to the outside world are NodePort, LoadBalancer, and Ingress.

You can read about some of the main differences between them here https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0


NodePort

Map a randomly or manually selected high port from a certain range to a service, one-to-one.

Either allow Kubernetes to randomly select a high port, or manually pick one from a predefined range (30000–32767 by default, though this can be changed), and map it to an internal service port on a one-to-one basis.

Warning: although it is possible to manually pick the NodePort number per service, it is generally not recommended because of possible issues such as port conflicts. So in most cases, you should let the cluster select a NodePort number for you.

From official docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport

If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service.
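Applied to the question's setup, a NodePort Service might look like the sketch below. The selector label and port numbers are assumptions; the GraphQL container is assumed to listen on port 3000 as in the question's URL.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: graphql-nodeport
spec:
  type: NodePort
  selector:
    app: graphql        # assumed pod label on the GraphQL deployment
  ports:
    - port: 3000        # service port inside the cluster
      targetPort: 3000  # container port the GraphQL app listens on
      # nodePort: 30080 # optional: pin a specific port; omit to let the cluster pick one
```

The API would then be reachable from the browser at http://&lt;any-node-ip&gt;:&lt;nodePort&gt;/api, assuming the nodes have routable addresses and the port is open in any firewall.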


LoadBalancer

Attach the service to an external IP supplied by an external provider, such as a cloud provider's public IP service.

The functionality of this service type depends on external drivers/plugins. Most modern clouds can supply public IPs for LoadBalancer definitions. But if you are spinning up a custom cluster with no means to assign public IPs (such as Rancher with no IP-provider plugins), the best you can probably do with this type is assign a host machine's IP to a single service.

From the official docs: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer

On cloud providers which support external load balancers, setting the type field to LoadBalancer will provision a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer will be published in the Service’s .status.loadBalancer field.
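For the question's scenario, a minimal LoadBalancer Service could be sketched as follows (again, the selector label and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: graphql-lb
spec:
  type: LoadBalancer
  selector:
    app: graphql     # assumed pod label on the GraphQL deployment
  ports:
    - port: 80         # external port on the provisioned load balancer
      targetPort: 3000 # assumed container port of the GraphQL app
```

Once the cloud provider finishes provisioning, `kubectl get service graphql-lb` shows the assigned external IP, and the Apollo client's uri can point at http://&lt;external-ip&gt;/api instead of the private address.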


Ingress

Run a central application router service which receives all traffic on a certain port (or ports) and routes it to services based on parameters like the requested domain and path.

To use it, you install an application-router service (such as nginx) that runs in your cluster and watches for every new resource of type Ingress. You then create Ingress resources that define the routing rules you want, such as which hostnames to listen for and which service to forward each request to.

Although multiple solutions exist for this purpose, I recommend the Nginx Ingress controller:

https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx

Official Docs:

What is Ingress? Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:

    internet
        |
  ------------
  [ Services ]

An Ingress is a collection of rules that allow inbound connections to reach the cluster services.

    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]

It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting, and more. Users request ingress by POSTing the Ingress resource to the API server. An Ingress controller is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
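For the question's API, an Ingress rule could be sketched as below. This uses the current networking.k8s.io/v1 API (newer than the one available when this answer was written); the hostname, Service name, and port are all hypothetical, and it assumes the Nginx Ingress controller is installed and that DNS for the hostname points at it.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graphql-ingress
spec:
  ingressClassName: nginx    # assumes the Nginx Ingress controller is installed
  rules:
    - host: api.example.com  # hypothetical DNS name pointing at the ingress controller
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: graphql    # assumed Service name; a plain ClusterIP Service is enough here
                port:
                  number: 3000   # assumed service port
```

The backend Service itself can stay private (ClusterIP); only the ingress controller needs a public address, and the Apollo client would then use http://api.example.com/api.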

-- yosefrow
Source: StackOverflow