
Be able to configure HPA for each individual service #1390

@lihaif

Description


Is your feature request related to a problem? Please describe.

The current helm chart only allows configuring HPA at the global level, which means all Open Match services inherit the same global HPA configuration.
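For context, this is roughly what the single global HPA block in values.yaml looks like today; the exact key names are my reading of the chart plus the standard autoscaling/v1 HPA fields, so treat the layout as an assumption rather than a quote from the chart:

```yaml
# Sketch of the current situation (layout assumed): one HPA block under
# `global`, inherited by every Open Match service (frontend, backend,
# query, evaluator, ...), with no per-service override.
global:
  kubernetes:
    horizontalPodAutoscaler:
      minReplicas: 3
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60
```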

In our use case, we may generate a huge number of requests to the Open Match Query service in a short period of time, which overloads the existing Open Match Query service pods. Below is the output of `kubectl top pods`:

kepler-loadtest-open-match-query-cd55b976-c5jch            1016m        631Mi
kepler-loadtest-open-match-query-cd55b976-vz5x5            466m         696Mi
kepler-loadtest-open-match-query-cd55b976-w4kqk            993m         501Mi

The existing pods are OOM-killed. I can see that the HPA starts new pods, but it takes time for them to start up, and the longer it takes, the more workload our system generates. Eventually the pods can no longer start up at all, as shown below:

kepler-loadtest-open-match-query-cd55b976-455ld                   0/1     OOMKilled           1          3m10s
kepler-loadtest-open-match-query-cd55b976-bmgjw                   0/1     Running             1          3m10s
kepler-loadtest-open-match-query-cd55b976-c5jch                   1/1     Running             2          26m
kepler-loadtest-open-match-query-cd55b976-pbj5x                   0/1     CrashLoopBackOff    1          2m55s
kepler-loadtest-open-match-query-cd55b976-vhdxw                   1/1     Running             1          3m10s
kepler-loadtest-open-match-query-cd55b976-vz5x5                   0/1     CrashLoopBackOff    3          26m
kepler-loadtest-open-match-query-cd55b976-w4kqk                   0/1     OOMKilled           2          26m
kepler-loadtest-open-match-query-cd55b976-xrnkn                   0/1     Running             1          2m55s

We want to be able to configure HPA for each individual service, so that we can set the minimum pod count for the Open Match Query service to 10 while the other services keep the default of 3. That way we reserve enough pods for the Query service to absorb burst workloads, while the other services do not need to keep that many pods, which saves cost.

Describe the solution you'd like

Be able to override the default global HPA configuration for each individual Open Match service through values.yaml in the helm chart, as sketched below.
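A minimal sketch of the shape we have in mind (the `query:` key and the fallback behavior are hypothetical, not the chart's current API; the three numeric fields are the standard HPA spec fields):

```yaml
# Hypothetical values.yaml layout: each service falls back to the global
# HPA settings unless it defines its own block.
global:
  kubernetes:
    horizontalPodAutoscaler:
      minReplicas: 3                      # default for all services
      maxReplicas: 10
      targetCPUUtilizationPercentage: 60

query:
  horizontalPodAutoscaler:
    minReplicas: 10                       # keep capacity warm for query bursts
    maxReplicas: 20
```

The HPA templates could then render each service's autoscaler by merging the per-service block over the global defaults, e.g. with Helm's built-in `merge` or `default` template functions.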

Describe alternatives you've considered

Additional context
