I have a k8s template for deploying Pods and Services, which I use to deploy different services on AKS by varying a few parameters (names, labels). Some of the services get their External-IP, but for a few of them the External-IP stays in the pending state:
NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                       AGE
service/ca1st-orgc            LoadBalancer   10.0.25.227    <pending>     7054:30907/TCP                                                17m
service/ca1st-orgc-db-mysql   LoadBalancer   10.0.97.81     52.13.67.9    3306:31151/TCP                                                17m
service/kafka1st              ClusterIP      10.0.15.90     <none>        9092/TCP,9093/TCP                                             17m
service/kafka2nd              ClusterIP      10.0.17.22     <none>        9092/TCP,9093/TCP                                             17m
service/kafka3rd              ClusterIP      10.0.02.07     <none>        9092/TCP,9093/TCP                                             17m
service/kubernetes            ClusterIP      10.0.0.1       <none>        443/TCP                                                       20m
service/orderer1st-orgc       LoadBalancer   10.0.17.19     <pending>     7050:30971/TCP                                                17m
service/orderer2nd-orgc       LoadBalancer   10.0.02.15     13.06.27.31   7050:31830/TCP                                                17m
service/peer1st-orga          LoadBalancer   10.0.10.19     <pending>     7051:31402/TCP,7052:32368/TCP,7053:31786/TCP,5984:30721/TCP   17m
service/peer1st-orgb          LoadBalancer   10.0.218.48    13.06.25.13   7051:31892/TCP,7052:30326/TCP,7053:31419/TCP,5984:31882/TCP   17m
service/peer2nd-orga          LoadBalancer   10.0.86.64     <pending>     7051:30590/TCP,7052:31870/TCP,7053:30362/TCP,5984:30036/TCP   17m
service/peer2nd-orgb          LoadBalancer   10.0.195.212   52.13.58.3    7051:30476/TCP,7052:30091/TCP,7053:30099/TCP,5984:32614/TCP   17m
service/zookeeper1st          ClusterIP      10.0.57.192    <none>        2888/TCP,3888/TCP,2181/TCP                                    17m
service/zookeeper2nd          ClusterIP      10.0.174.25    <none>        2888/TCP,3888/TCP,2181/TCP                                    17m
service/zookeeper3rd          ClusterIP      10.0.210.166   <none>        2888/TCP,3888/TCP,2181/TCP                                    17m
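The first thing I checked was the Service events for one of the stuck services (service name taken from the output above):

kubectl describe service orderer1st-orgc
kubectl get events --all-namespaces --field-selector involvedObject.kind=Service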
The funny thing is that the same template is used to deploy all the related services; for instance, every service prefixed with peer is deployed from the same template. Has anyone faced this?
This is the template for an orderer Pod:
apiVersion: v1
kind: Pod
metadata:
  name: {{ orderer.name }}
  labels:
    k8s-app: {{ orderer.name }}
    type: orderer
{% if (project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %}
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /metrics
    prometheus.io/port: '8443'
    prometheus.io/scheme: 'http'
{% endif %}
spec:
{% if creds %}
  imagePullSecrets:
  - name: regcred
{% endif %}
  restartPolicy: OnFailure
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: fabriccerts
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: type
              operator: In
              values:
              - orderer
          topologyKey: kubernetes.io/hostname
  containers:
  - name: {{ orderer.name }}
    image: {{ fabric.repo.url }}fabric-orderer:{{ fabric.baseimage_tag }}
{% if 'latest' in project_version or 'stable' in project_version %}
    imagePullPolicy: Always
{% else %}
    imagePullPolicy: IfNotPresent
{% endif %}
    env:
{% if project_version is version('1.3.0','<') %}
    - { name: "ORDERER_GENERAL_LOGLEVEL", value: "{{ fabric.logging_level | default('ERROR') | lower }}" }
{% elif project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version %}
    - { name: "FABRIC_LOGGING_SPEC", value: "{{ fabric.logging_level | default('ERROR') | lower }}" }
{% endif %}
    - { name: "ORDERER_GENERAL_LISTENADDRESS", value: "0.0.0.0" }
    - { name: "ORDERER_GENERAL_GENESISMETHOD", value: "file" }
    - { name: "ORDERER_GENERAL_GENESISFILE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/genesis.block" }
    - { name: "ORDERER_GENERAL_LOCALMSPID", value: "{{ orderer.org }}" }
    - { name: "ORDERER_GENERAL_LOCALMSPDIR", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/msp" }
    - { name: "ORDERER_GENERAL_TLS_ENABLED", value: "{{ tls | lower }}" }
{% if tls %}
    - { name: "ORDERER_GENERAL_TLS_PRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" }
    - { name: "ORDERER_GENERAL_TLS_CERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" }
    - { name: "ORDERER_GENERAL_TLS_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" }
{% endif %}
{% if (project_version is version('2.0.0','>=') or ('stable' in project_version or 'latest' in project_version)) and fabric.consensus_type is defined and fabric.consensus_type == 'etcdraft' %}
    - { name: "ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" }
    - { name: "ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" }
    - { name: "ORDERER_GENERAL_CLUSTER_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" }
{% elif fabric.consensus_type | default('kafka') == 'kafka' %}
    - { name: "ORDERER_KAFKA_RETRY_SHORTINTERVAL", value: "1s" }
    - { name: "ORDERER_KAFKA_RETRY_SHORTTOTAL", value: "30s" }
    - { name: "ORDERER_KAFKA_VERBOSE", value: "true" }
{% endif %}
{% if mutualtls %}
{% if project_version is version('1.1.0','>=') or 'stable' in project_version or 'latest' in project_version %}
    - { name: "ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED", value: "true" }
{% else %}
    - { name: "ORDERER_GENERAL_TLS_CLIENTAUTHENABLED", value: "true" }
{% endif %}
    - { name: "ORDERER_GENERAL_TLS_CLIENTROOTCAS", value: "[{{ rootca | list | join(', ') }}]" }
{% endif %}
{% if (project_version is version('1.4.0','>=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %}
    - { name: "ORDERER_OPERATIONS_LISTENADDRESS", value: ":8443" }
    - { name: "ORDERER_OPERATIONS_TLS_ENABLED", value: "false" }
    - { name: "ORDERER_METRICS_PROVIDER", value: "prometheus" }
{% endif %}
{% if fabric.orderersettings is defined and fabric.orderersettings.ordererenv is defined %}
{% for pkey, pvalue in fabric.orderersettings.ordererenv.items() %}
    - { name: "{{ pkey }}", value: "{{ pvalue }}" }
{% endfor %}
{% endif %}
{% include './resource.j2' %}
    volumeMounts:
    - { mountPath: "/etc/hyperledger/fabric/artifacts", name: "task-pv-storage" }
    command: ["orderer"]
And this is the Service template for the LoadBalancer:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: {{ orderer.name }}
  name: {{ orderer.name }}
spec:
  selector:
    k8s-app: {{ orderer.name }}
{% if fabric.k8s.exposeserviceport %}
  type: LoadBalancer
{% endif %}
  ports:
  - name: port1
    port: 7050
{% if fabric.metrics is defined and fabric.metrics %}
  - name: scrapeport
    port: 8443
{% endif %}
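For a concrete picture, with fabric.k8s.exposeserviceport enabled, metrics disabled, and orderer.name set to orderer1st-orgc, this should render to roughly:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: orderer1st-orgc
  name: orderer1st-orgc
spec:
  selector:
    k8s-app: orderer1st-orgc
  type: LoadBalancer
  ports:
  - name: port1
    port: 7050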
The interesting thing is that running kubectl describe service orderer1st-orgc shows no Events at all for the services that haven't got their External-IP:
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
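Since the Service events are empty, the other place worth checking is the Azure side, e.g. whether the cloud provider is failing to allocate public IPs because the subscription's Public IP quota is exhausted. A sketch of how to check, assuming Azure CLI access; the MC_* node resource group name and the location below are placeholders:

az network public-ip list --resource-group MC_myResourceGroup_myAKSCluster_westus2 --output table
az network list-usages --location westus2 --output table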
Please share your thoughts.
There was an issue with my cluster. I am not sure what it was, but the same set of LoadBalancers never got their public IP, no matter how many times I cleaned up all the PVCs, services, and pods. I deleted the cluster and re-created it, and everything works as expected in the new cluster: all the LoadBalancers get their public IPs.
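The recreation itself was nothing special; roughly the following, with the resource group and cluster names as placeholders:

az aks delete --resource-group myResourceGroup --name myAKSCluster
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 3 --generate-ssh-keys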