Kubernetes — Why Is the HPA Scaling in the Wrong Direction?

  • Topic author: Litvinovvp
  • Updated 20 Oct 2024
  • #1

I have two horizontal pod autoscalers for different deployments with identical metric values and targets, and they are scaling in opposite directions. The overall picture:

 
$ kubectl get hpa
NAME         REFERENCE               TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
ows-dev      Deployment/ows-dev      399m/700m, 83m/750m   20        40        20         13d
ows-pyspy    Deployment/ows-pyspy    400m/700m, 83m/750m   1         2         2          2d16h
ows-stable   Deployment/ows-stable   399m/700m, 83m/750m   2         5         5          44h
other        Deployment/other        232m/150m             2         6         6          8d
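
For reference, an HPA spec consistent with the ows-dev row above would look roughly like the sketch below. The metric names, targets, and replica bounds are read off the TARGETS/MINPODS/MAXPODS columns; the actual manifest may of course differ:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ows-dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ows-dev
  minReplicas: 20
  maxReplicas: 40
  metrics:
  # Custom pods metric; the HPA averages it over the pods of this deployment
  - type: Pods
    pods:
      metric:
        name: flask_http_request
      target:
        type: AverageValue
        averageValue: 700m
  # CPU targeted as an absolute average value (750m), not a utilization percentage
  - type: Resource
    resource:
      name: cpu
      target:
        type: AverageValue
        averageValue: 750m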
The full condition of the "dev" one (the one pinned at its minimum) is:

$ kubectl describe hpa/ows-dev
...
Metrics:                         ( current / target )
  "flask_http_request" on pods:  401m / 700m
  resource cpu on pods:          88m / 750m
Min replicas:                    20
Max replicas:                    40
Deployment pods:                 20 current / 20 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from pods metric flask_http_request
  ScalingLimited  True    TooFewReplicas       the desired replica count is less than the minimum replica count
Events:
  Type    Reason             Age                From                       Message
  ----    ------             ----               ----                       -------
  Normal  SuccessfulRescale  21m (x2 over 20h)  horizontal-pod-autoscaler  New size: 21; reason: pods metric flask_http_request above target
  Normal  SuccessfulRescale  21m (x2 over 20h)  horizontal-pod-autoscaler  New size: 22; reason: pods metric flask_http_request above target
  Normal  SuccessfulRescale  20m                horizontal-pod-autoscaler  New size: 23; reason: pods metric flask_http_request above target
  Normal  SuccessfulRescale  19m                horizontal-pod-autoscaler  New size: 24; reason: pods metric flask_http_request above target
  Normal  SuccessfulRescale  14m (x2 over 24h)  horizontal-pod-autoscaler  New size: 22; reason: All metrics below target
  Normal  SuccessfulRescale  13m (x4 over 21h)  horizontal-pod-autoscaler  New size: 21; reason: All metrics below target
  Normal  SuccessfulRescale  11m (x5 over 22h)  horizontal-pod-autoscaler  New size: 20; reason: All metrics below target

The same autoscaler scales up ("above target") and back down ("below target") within a span of minutes!?
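
For context, the recommendation the HPA controller computes from a pods metric is desiredReplicas = ceil(currentReplicas * currentValue / targetValue). Plugging in the numbers above (my arithmetic, not controller output):

ows-dev:    ceil(20 * 401m / 700m) = ceil(11.46) = 12  ->  clamped up to minReplicas 20  (ScalingLimited: TooFewReplicas)
ows-stable: ceil(5 * 399m / 700m)  = ceil(2.85)  = 3   ->  yet it is sitting at maxReplicas 5

The AbleToScale message ("ScaleDownStabilized ... applying the highest recent recommendation") also matters here: on scale-down the controller applies the highest recommendation from the stabilization window (300 s by default), so replica counts trail the instantaneous metric value on the way down.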


How can we debug what the HPA is doing? (My guess is that we've unintentionally configured multiple deployments with a metric query that picks up all of those deployments collectively, so the API is reporting that value averaged over all of the resources, but perhaps internally the HPA is only averaging within each deployment, and hence working from a different value than the one reported? A way to test this is sketched below.)
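
One way to test that guess directly is to query the custom metrics API yourself and compare the per-pod values with what each autoscaler recorded in its status. A sketch; "default" is an assumed namespace, substitute your own:

# Per-pod values of the custom metric, as the HPA controller fetches them
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/flask_http_request" | jq '.items[] | {pod: .describedObject.name, value: .value}'

# What each HPA actually used on its last reconcile (status.currentMetrics)
$ kubectl get hpa ows-dev ows-stable -o yaml

If pods from the two deployments report different raw values while both HPAs show the same average, the averaging is indeed happening per deployment as suspected; if every pod reports the same value, the two autoscalers really are working from identical inputs.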

The ScalingLimited conditions don't give very much more detail: the "dev" autoscaler reports TooFewReplicas (pinned at its minimum of 20), while the "stable" one reports TooManyReplicas (pinned at its maximum of 5), even though both see the same metric values.
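
If the metric is served by prometheus-adapter (an assumption; the post does not say which metrics pipeline is in use), its rule for flask_http_request is the first thing to check: the <<.LabelMatchers>> and <<.GroupBy>> placeholders are what scope the PromQL query to the pods of the deployment being scaled, and a rule that aggregates without grouping by pod would feed every autoscaler the same cluster-wide number. A correctly scoped rule looks roughly like:

rules:
- seriesQuery: 'flask_http_request_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"
  # <<.LabelMatchers>> restricts the query to the pods the HPA asked about;
  # "by (<<.GroupBy>>)" keeps one series per pod, so each HPA averages only
  # over its own deployment's pods.
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'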

#kubernetes #autoscaling
