One replica using twice as much memory as the other for a k8s pod

3/10/2019

I have the following Kubernetes deployment.yml for running bitcoind:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bitcoin
  namespace: prod
spec:
  serviceName: bitcoinrpc-service
  replicas: 2
  selector:
    matchLabels:
      app: bitcoin-node
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false" #because it needs to discover and connect to other peers
        sidecar.istio.io/proxyImage: docker.io/istio/proxyv2:0.8.0
      labels:
        app: bitcoin-node  
    spec:
      containers:
      - name: bitcoin-node-mainnet
        image: <my image>
        imagePullPolicy: Always
        ports:
        - containerPort: 8332 
        - containerPort: 38832
        env:
        - name: RPC_USER
          valueFrom:
            secretKeyRef:
              name: secrets
              key: bitcoind_rpc_username
        - name: RPC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secrets
              key: bitcoind_rpc_password
        - name: RPC_ALLOW_IP
          value: "0.0.0.0/0"
        - name: RPC_PORT
          value: "8332"
        - name: PORT
          value: "8333"  
        - name: RPC_THREADS
          value: "64"
        - name: RPC_TIMEOUT
          value: "300"
        - name: SERVER
          value: "1"  
        - name: TX_INDEX
          value: "1"
        - name: ADDR_INDEX
          value: "1"
        - name: MAX_MEMPOOL
          value: "10000"
        - name: DBCACHE
          value: "4096"
        - name: MEMPOOL_EXPIRY
          value: "336"
        - name: ZMQPUBHASHBLOCK
          value: "tcp://*:38832"
        - name: ZMQPUBHASHTX
          value: "tcp://*:38832"  
        - name: ZMQPUBRAWTX
          value: "tcp://*:38832"
        volumeMounts:
        - name: bitcoin-chaindata
          mountPath: /root/.bitcoin
        resources:
          requests:
            memory: "8Gi" # 8 GB
            cpu: "3000m"  # 3 CPUs
          limits:
            memory: "16Gi" # 16 GB
            cpu: "3000" #  3 CPUs  
        livenessProbe:
          httpGet:
              path: /rest/chaininfo.json
              port: 8332
          initialDelaySeconds: 120 # wait this long after the container first starts
          periodSeconds: 240       # polling interval
          timeoutSeconds: 60       # expect a response within this time
        readinessProbe: 
          httpGet:
              path: /rest/chaininfo.json
              port: 8332
          initialDelaySeconds: 120 # wait this long after the container first starts
          periodSeconds: 240       # polling interval
          timeoutSeconds: 60
        command: ["/bin/ash"]
        args: ["-c","/app/bitcoin/bin/bitcoind  -printtoconsole \
                  -pid=/root/.bitcoin/bitcoind.pid \
                  -rest \
                  -port=${PORT} \
                  -daemon=0 \
                  -rpcuser=${RPC_USER}  \
                  -rpcpassword=${RPC_PASSWORD} \
                  -rpcport=${RPC_PORT} \
                  -rpcallowip=${RPC_ALLOW_IP} \
                  -rpcthreads=${RPC_THREADS} \
                  -server=${SERVER} \
                  -txindex=${TX_INDEX} \
                  -maxmempool=${MAX_MEMPOOL} \
                  -dbcache=${DBCACHE} \
                  -mempoolexpiry=${MEMPOOL_EXPIRY} \
                  -rpcworkqueue=500 \
                  -zmqpubhashblock=${ZMQPUBHASHBLOCK} \
                  -zmqpubhashtx=${ZMQPUBHASHTX} \
                  -zmqpubrawtx=${ZMQPUBRAWTX} \
                  -addresstype=legacy"]

                  # -rpctimeout=${RPC_TIMEOUT} \
                  # -addrindex=${ADDR_INDEX} \

  volumeClaimTemplates:
  - metadata:
      name: bitcoin-chaindata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 500Gi   

Because I restrict the maximum memory usage to 16Gi, I would have expected both pods to use less than 16GB. However, I can see on Stackdriver that one of the pods uses about 12GB, while the other one goes up to 32GB. In what case would this happen?

I have a 2 x 35GB cluster.
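
To cross-check what Stackdriver shows, per-pod usage and the limits that were actually applied can be inspected with something like the following (assuming metrics-server is available in the cluster; bitcoin-0 and bitcoin-1 are the pod names the StatefulSet generates):

kubectl top pod -n prod -l app=bitcoin-node
kubectl describe pod bitcoin-0 -n prod | grep -A 2 Limits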

-- kosta
google-kubernetes-engine
kubernetes

1 Answer

3/10/2019

We can control resources at the container level, and through that at the pod level (a pod's CPU or memory limit is the sum of the limits of all containers in it).

I don't think we can control them across replicas.
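
As a minimal sketch of how the limits compose (the container names here are made up for illustration), a pod spec with two containers looks like this:

spec:
  containers:
  - name: app          # hypothetical container
    resources:
      limits:
        memory: "1Gi"
        cpu: "500m"
  - name: sidecar      # hypothetical container
    resources:
      limits:
        memory: "512Mi"
        cpu: "250m"
# effective pod limit: 1.5Gi memory and 750m CPU,
# enforced separately in each replica

So in the manifest above, each replica's bitcoin-node-mainnet container gets its own 16Gi limit; the limit is not shared or split across the two replicas.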

-- Dinesh Balasubramanian
Source: StackOverflow