I am new to Kubernetes. I have this scenario for multi-tenancy:

1) I have 3 namespaces:

default,
tenant1-namespace,
tenant2-namespace

2) Namespace default has two database pods:

tenant1-db - listening on port 5432
tenant2-db - listening on port 5432

Namespace tenant1-namespace has one app pod:

tenant1-app - listening on port 8085

Namespace tenant2-namespace has one app pod:

tenant2-app - listening on port 8085

3) I have applied 3 network policies in the default namespace:

a) To restrict access to both db pods from other namespaces:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
b) To allow access to the tenant1-db pod from the tenant1-app pod of tenant1-namespace only:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
    - podSelector:
        matchLabels:
          app: tenant1-app

c) To allow access to the tenant2-db pod from the tenant2-app pod of tenant2-namespace only:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
    - podSelector:
        matchLabels:
          app: tenant2-app

I want to restrict access to tenant1-db to tenant1-app only, and tenant2-db to tenant2-app only. But it seems tenant2-app can access tenant1-db, which should not happen.
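All three policies are applied in the default namespace; they can be listed for a quick sanity check with:

kubectl get networkpolicy -n default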
Below is the db-config.js for tenant2-app:
module.exports = {
  HOST: "tenant1-db",
  USER: "postgres",
  PASSWORD: "postgres",
  DB: "tenant1db",
  dialect: "postgres",
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
};
As you can see, I am pointing tenant2-app at tenant1-db. I want to restrict tenant1-db to tenant1-app only. What modifications need to be made in the network policies?
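I can reproduce the unwanted access from inside the cluster like this (assuming a recent kubectl that accepts deploy/<name> for exec, and a netcat with -z support in the app image; any TCP client such as psql works just as well):

# currently this connects, although it should be blocked
kubectl exec -n tenant2-namespace deploy/tenant2-app-deployment -- nc -zv -w 3 tenant1-db.default.svc.cluster.local 5432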
Update 1:

tenant1-app deployment & service YAMLs:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: tenant1-app-deployment
  namespace: tenant1-namespace
spec:
  selector:
    matchLabels:
      app: tenant1-app
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: tenant1-app
    spec:
      containers:
      - name: tenant1-app-container
        image: tenant1-app-dock-img:v1
        ports:
        - containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
  name: tenant1-app-service
  namespace: tenant1-namespace
spec:
  selector:
    app: tenant1-app
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 8085
    nodePort: 31005
  type: LoadBalancer

tenant2-app deployment & service YAMLs:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: tenant2-app-deployment
  namespace: tenant2-namespace
spec:
  selector:
    matchLabels:
      app: tenant2-app
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: tenant2-app
    spec:
      containers:
      - name: tenant2-app-container
        image: tenant2-app-dock-img:v1
        ports:
        - containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
  name: tenant2-app-service
  namespace: tenant2-namespace
spec:
  selector:
    app: tenant2-app
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 8085
    nodePort: 31006
  type: LoadBalancer

Update 2:
db-pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: tenant1-db
  name: tenant1-db
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: tenant1-db
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: tenant1-db
      name: tenant1-db
    spec:
      volumes:
      - name: tenant1-pv-storage
        persistentVolumeClaim:
          claimName: tenant1-pv-claim
      containers:
      - env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: tenant1db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:11.5-alpine
        imagePullPolicy: IfNotPresent
        name: tenant1-db
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: tenant1-pv-storage
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

db-pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: tenant2-db
  name: tenant2-db
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: tenant2-db
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: tenant2-db
      name: tenant2-db
    spec:
      volumes:
      - name: tenant2-pv-storage
        persistentVolumeClaim:
          claimName: tenant2-pv-claim
      containers:
      - env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: tenant2db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:11.5-alpine
        imagePullPolicy: IfNotPresent
        name: tenant2-db
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: tenant2-pv-storage
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

Update 3:
kubectl get svc -n default

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>           443/TCP          5d2h
nginx        ClusterIP      10.100.24.46     <none>           80/TCP           5d1h
tenant1-db   LoadBalancer   10.111.165.169   10.111.165.169   5432:30810/TCP   4d22h
tenant2-db   LoadBalancer   10.101.75.77     10.101.75.77     5432:30811/TCP   2d22h

kubectl get svc -n tenant1-namespace

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)          AGE
tenant1-app-service   LoadBalancer   10.111.200.49   10.111.200.49                          8085:31005/TCP   3d
tenant1-db            ExternalName   <none>          tenant1-db.default.svc.cluster.local   5432/TCP         2d23h

kubectl get svc -n tenant2-namespace

NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP                            PORT(S)          AGE
tenant1-db            ExternalName   <none>         tenant1-db.default.svc.cluster.local   5432/TCP         2d23h
tenant2-app-service   LoadBalancer   10.99.139.18   10.99.139.18                           8085:31006/TCP   2d23h

Referring to the docs, let's understand the policy below that you have for tenant2:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
    - podSelector:
        matchLabels:
          app: tenant2-app

The network policy you have defined above contains two elements in its from array, which says: allow connections from Pods in the local (default) namespace with the label app=tenant2-app, OR from any Pod in any namespace with the label name=tenant2-development.
If you merge the two elements into a single rule, as below, it should solve the issue.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
      podSelector:
        matchLabels:
          app: tenant2-app

The network policy above means: allow connections only from Pods with the label app=tenant2-app in namespaces with the label name=tenant2-development.
Add the label name=tenant2-development to the tenant2-namespace namespace.
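For example (the namespace name tenant2-namespace is taken from your deployment YAMLs):

kubectl label namespace tenant2-namespace name=tenant2-development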
Do the same exercise for tenant1 as well, as below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
      podSelector:
        matchLabels:
          app: tenant1-app

Add the label name=tenant1-development to the tenant1-namespace namespace.
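For example:

kubectl label namespace tenant1-namespace name=tenant1-development

After both policies are replaced and both namespaces are labeled, you can verify the isolation (same assumptions as the earlier test: a recent kubectl that accepts deploy/<name> for exec, and a netcat with -z support in the app images):

# should now time out instead of connecting
kubectl exec -n tenant2-namespace deploy/tenant2-app-deployment -- nc -zv -w 3 tenant1-db.default.svc.cluster.local 5432

# should still connect
kubectl exec -n tenant1-namespace deploy/tenant1-app-deployment -- nc -zv -w 3 tenant1-db.default.svc.cluster.local 5432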