Kubernetes VPA: issue with targetRef selector + minimal resources

3/19/2019

I have two issues:

1. My Vertical Pod Autoscaler doesn't follow my minimal resource policy:

Spec:
  Resource Policy:
    Container Policies:
      Min Allowed:
        Cpu:     50m        <==== minimum allowed for CPU
        Memory:  75Mi
      Mode:      auto
  Target Ref:
    API Version:  extensions/v1beta1
    Kind:         Deployment
    Name:         hello-world
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2019-03-19T19:11:36Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  hello-world
      Lower Bound:
        Cpu:     25m
        Memory:  262144k
      Target:
        Cpu:     25m       <==== actual CPU configured by the VPA
        Memory:  262144k
2. I configured my VPA to use the new targetRef-based label selector, but the recommender logs say I'm using the legacy one:

    Error while fetching legacy selector. Reason: v1beta1 selector not found
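
For reference, one way to pull these lines out of the recommender (a sketch, given the kube-system deployment shown at the end of this post):

kubectl -n kube-system logs deployment/vpa-recommender | grep -i selector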

Here is my deployment configuration:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello-world
  labels:
    name: hello-world
spec:
  selector:
    matchLabels:
      name: hello-world
  replicas: 2
  template:
    metadata:
      labels:
        name: hello-world
    spec:
      securityContext:
        fsGroup: 101
      containers:
        - name: hello-world
          image: xxx/hello-world:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 150Mi
          volumeMounts:
          - mountPath: /u/app/www/images
            name: nfs-volume
      volumes:
      - name: nfs-volume
        persistentVolumeClaim:
          claimName: hello-world

Here is my VPA configuration:

---
apiVersion: "autoscaling.k8s.io/v1beta2"
kind: VerticalPodAutoscaler
metadata:
  name: hello-world
  namespace: hello-world
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: hello-world
  resourcePolicy:
    containerPolicies:
    - minAllowed:
        cpu: 50m
        memory: 75Mi
      mode: auto
  updatePolicy:
    updateMode: "Auto"

I'm running Kubernetes v1.13.2 and VPA v0.4; here is its configuration:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vpa-recommender
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vpa-recommender
    spec:
      serviceAccountName: vpa-recommender
      containers:
      - name: recommender
        image: k8s.gcr.io/vpa-recommender:0.4.0
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 200m
            memory: 1000Mi
          requests:
            cpu: 50m
            memory: 500Mi
        ports:
        - containerPort: 8080
        command:
        - ./recommender
        - --alsologtostderr=false
        - --logtostderr=false
        - --prometheus-address=http://prometheus-service.monitoring:9090/
        - --prometheus-cadvisor-job-name=cadvisor
        - --v=10

Thanks

-- Skullone
autoscaling
kubernetes

2 Answers

12/8/2019

The following error:

Error while fetching legacy selector. Reason: v1beta1 selector not found

was fixed in VPA v0.7. See this commit for reference.
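
If upgrading your whole VPA install isn't convenient, a minimal sketch of picking up the fix is to bump the recommender image on the deployment from the question (the 0.7.0 tag is an assumption; check the autoscaler releases for the exact image):

kubectl -n kube-system set image deployment/vpa-recommender \
    recommender=k8s.gcr.io/vpa-recommender:0.7.0
kubectl -n kube-system rollout status deployment/vpa-recommender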

-- Peng Liu
Source: StackOverflow

3/19/2019

I don't think you are actually using the old fetcher.

Here is the relevant code:

// The feeder first tries the legacy (v1beta1) label selector and only
// logs an error when it is missing...
legacySelector, fetchLegacyErr := feeder.legacySelectorFetcher.Fetch(vpa)
if fetchLegacyErr != nil {
    glog.Errorf("Error while fetching legacy selector. Reason: %+v", fetchLegacyErr)
}
// ...then it fetches the selector derived from the VPA's targetRef.
selector, fetchErr := feeder.selectorFetcher.Fetch(vpa)
if fetchErr != nil {
    glog.Errorf("Cannot get target selector from VPA's targetRef. Reason: %+v", fetchErr)
}

The autoscaler simply tries to fetch the legacy selector first and then uses the new one, so the "legacy selector not found" error is logged even when you only use targetRef.

About the resource limits:

Here is a comment from the source code (PodResourcePolicy is the "resourcePolicy" block in the spec):

PodResourcePolicy controls how autoscaler computes the recommended resources for containers belonging to the pod. There can be at most one entry for every named container and optionally a single wildcard entry with containerName = '*', which handles all containers that don't have individual policies.

I think you should also set containerName in your spec, because you want one pod-wide policy:

apiVersion: "autoscaling.k8s.io/v1beta2"
kind: VerticalPodAutoscaler
metadata:
  name: hello-world
  namespace: hello-world
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: hello-world
  resourcePolicy:
    containerPolicies:
    - minAllowed:
        cpu: 50m
        memory: 75Mi
      mode: auto
      containerName: "*" # Added line
  updatePolicy:
    updateMode: "Auto"
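
After applying this you can check that the recommendation now respects minAllowed by describing the VPA object again (a sketch, assuming the manifest above is saved as vpa.yaml):

kubectl apply -f vpa.yaml
kubectl -n hello-world describe vpa hello-world
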
-- Anton Kostenko
Source: StackOverflow