ignite CommunicationSpi questions in PAAS environment

12/5/2019

My environment is this: the Ignite client runs on Kubernetes, and the Ignite server runs on a regular server. In this environment, TCP connections from the server to the client are not allowed, so the CommunicationSpi connection (server -> client) cannot be established. What I'm curious about is: what issues can occur when this CommunicationSpi connection is unavailable? And is there a way to make a CommunicationSpi (server -> client) connection in this environment?

-- Lee Changmyung
deployment
ignite
kubernetes

2 Answers

12/5/2019

In Kubernetes, a Service is used to communicate with pods.

The default Service type in Kubernetes is ClusterIP.

ClusterIP is an internal IP address reachable only from inside the Kubernetes cluster. A ClusterIP Service enables the applications running within the pods to reach each other.
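As a minimal sketch, a ClusterIP Service might look like this (the name, label, and port below are illustrative assumptions, not values from the question; 47100 is Ignite's default communication port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ignite-client
spec:
  type: ClusterIP          # the default; reachable only from inside the cluster
  selector:
    app: ignite-client     # matches pods labeled app=ignite-client (assumed label)
  ports:
    - port: 47100          # Ignite CommunicationSpi port (assumed)
      targetPort: 47100
```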

To expose the pods outside the Kubernetes cluster, you will need a Kubernetes Service of type NodePort or LoadBalancer.

  • NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

    Please note that one of the nodes in the cluster needs an external IP address assigned, plus a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the Service.

  • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
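For instance, a NodePort Service for the Ignite client pods might look like the following (the names, labels, and port numbers are assumptions for illustration; 47100 is Ignite's default CommunicationSpi port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ignite-client-comm
spec:
  type: NodePort
  selector:
    app: ignite-client        # matches the Ignite client pods (assumed label)
  ports:
    - port: 47100             # Ignite's default CommunicationSpi port
      targetPort: 47100
      nodePort: 30100         # static port opened on every node (30000-32767 range)
```

With this in place, the Ignite server could reach the client at <NodeIP>:30100, provided the firewall allows ingress traffic on that port.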

Alternatively, it is possible to use an Ingress.

There is a very good article on accessing Kubernetes Pods from outside of the cluster.

Hope that helps.

Edited on 09-Dec-2019

Upon your comment, I recall that it's possible to use the hostNetwork and hostPort approaches.

hostNetwork

The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine. Example:

apiVersion: v1 
kind: Pod 
metadata: 
  name: nginx 
spec: 
  hostNetwork: true 
  containers: 
    - name: nginx 
      image: nginx

You can check that the application is running with: curl -v http://kubenode01.example.com

Note that every time the pod is restarted, Kubernetes can reschedule it onto a different node, so the application will change its IP address. Besides that, two applications requiring the same port cannot run on the same node. This can lead to port conflicts as the number of applications running on the cluster grows.

What is host networking good for? For cases where direct access to the host's network is required.

hostPort

The hostPort setting applies to Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where hostIP is the IP address of the Kubernetes node where the container is running and hostPort is the port requested by the user.

apiVersion: v1 
kind: Pod 
metadata: 
  name: nginx 
spec: 
  containers: 
    - name: nginx 
      image: nginx 
      ports: 
        - containerPort: 8086 
          hostPort: 443

The hostPort feature allows you to expose a single container port on the host IP. Using hostPort to expose an application outside the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section: the host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.

What is hostPort used for? For example, the nginx-based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow inbound traffic on these ports from outside the Kubernetes cluster.

-- Nick
Source: StackOverflow

12/8/2019

To support such a deployment configuration you would need to dance a lot around the network configuration - setting up K8s Services, an Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and is working on an out-of-the-box solution.
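The AddressResolver part mentioned above can be sketched with Ignite's built-in BasicAddressResolver in a Spring XML configuration. This is only a sketch under assumptions: the IP addresses are placeholders, and the internal-to-external mapping must match whatever Service or NodePort setup you actually use.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- Map the pod-internal address to the externally reachable address,
       so that other nodes connect via the external one. -->
  <property name="addressResolver">
    <bean class="org.apache.ignite.configuration.BasicAddressResolver">
      <constructor-arg>
        <map>
          <!-- internal pod IP -> external (NodePort/LoadBalancer) address; placeholders -->
          <entry key="10.42.0.15" value="203.0.113.10"/>
        </map>
      </constructor-arg>
    </bean>
  </property>
</bean>
```

With such a mapping, the client advertises the external address to the rest of the cluster instead of its pod-internal IP.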

In the meantime, I would advise you to do one of these:

-- dmagda
Source: StackOverflow