Bug description
Istiod CPU usage is high when there is a large volume of WorkloadEntry registrations.
For example, the istiod pod was throttled by its CPU limit (8 CPU cores) when WorkloadEntry resources were created at 1k QPS, uniformly distributed across 1k namespaces.
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[x] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Expected behavior
Pilot can handle registration of a large number of WorkloadEntry instances and gracefully handles high churn of those instances.
Steps to reproduce the bug
Wrote a simple program to concurrently create WorkloadEntry resources at 1k QPS, uniformly distributed across 1k namespaces (a sketch of such a generator follows).
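The reporter's program isn't attached to the issue; a minimal sketch of such a load generator, using the Kubernetes dynamic client, assuming pre-created namespaces `load-test-0` through `load-test-999` and a kubeconfig at the default path (all names here are hypothetical), could look like this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// GroupVersionResource for WorkloadEntry in Istio 1.6.
var weGVR = schema.GroupVersionResource{
	Group:    "networking.istio.io",
	Version:  "v1alpha3",
	Resource: "workloadentries",
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One create per 1ms tick, i.e. roughly 1k QPS.
	ticker := time.NewTicker(time.Millisecond)
	defer ticker.Stop()

	for i := 0; ; i++ {
		<-ticker.C
		// Spread creates uniformly over 1k namespaces (assumed pre-created).
		ns := fmt.Sprintf("load-test-%d", i%1000)
		we := &unstructured.Unstructured{Object: map[string]interface{}{
			"apiVersion": "networking.istio.io/v1alpha3",
			"kind":       "WorkloadEntry",
			"metadata":   map[string]interface{}{"name": fmt.Sprintf("we-%d", i)},
			"spec": map[string]interface{}{
				"address": fmt.Sprintf("10.0.%d.%d", (i/250)%250, i%250),
				"labels":  map[string]interface{}{"app": "load-test"},
			},
		}}
		// Fire each create concurrently so slow API calls don't cap the rate.
		go func(ns string, we *unstructured.Unstructured) {
			if _, err := client.Resource(weGVR).Namespace(ns).Create(
				context.TODO(), we, metav1.CreateOptions{}); err != nil {
				fmt.Println("create failed:", err)
			}
		}(ns, we)
	}
}
```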
Have we done any perf testing per the test plan mentioned in the design doc?
I also followed the instructions here and did some CPU profiling; the resulting profile is attached below.
pprof.pilot-discovery.samples.cpu.003.pb.gz
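For reference, the profiling steps were roughly as follows (a sketch assuming pilot's HTTP debug port is 8080, as in 1.6, and the istiod deployment lives in istio-system; adjust names for your install):

```sh
kubectl -n istio-system port-forward deploy/istiod 8080:8080 &
go tool pprof http://localhost:8080/debug/pprof/profile
```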
Version (include the output of istioctl version --remote, kubectl version, and helm version if you used Helm)
istioctl version --remote
client version: 1.6.0
control plane version: 1.6.0
data plane version: none
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T23:35:15Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.12-airbnb0", GitCommit:"da578f6bba72b15d934dd1a9d53acbc2c61863e7", GitTreeState:"clean", BuildDate:"2020-05-27T13:58:10Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
How was Istio installed?
istioctl to generate the manifest
kubectl to apply the generated YAMLs
Istio operator to do the actual work
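For illustration, that install flow might look like the following (a sketch only; the actual profile and flags used aren't recorded in this report):

```sh
istioctl manifest generate > istio-manifest.yaml
kubectl apply -f istio-manifest.yaml
```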
Environment where bug was observed (cloud vendor, OS, etc)
AWS, AL2