Analyzing Istio Performance

Control Plane

Most control plane components are instrumented with pprof. This allows profiling of memory, CPU usage, etc.

To profile Pilot, follow these steps:

All of these steps assume Pilot is located at localhost:8080. For most situations, this can be achieved with kubectl port-forward -n istio-system PILOT-POD 8080.
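For example, with a default installation something like the following works; the istiod deployment name and namespace are assumptions, so adjust them for your environment:

kubectl port-forward -n istio-system deploy/istiod 8080:8080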

More details about pprof can be found in the upstream pprof documentation (https://github.com/google/pprof).

Memory

To profile memory usage:

$ go tool pprof -http=:8888 localhost:8080/debug/pprof/heap
Fetching profile over HTTP from http://localhost:8080/debug/pprof/heap
Saved profile in /home/pprof/pprof.pilot-discovery.alloc_objects.alloc_space.inuse_objects.inuse_space.032.pb.gz

Running this will fetch a memory profile from Pilot, open up a web UI, and save the profile to a gz file.

On the web UI (localhost:8888 in this example), you can view current memory and total allocated memory, in different formats. Generally the "Flame Graph" view is the easiest to understand.
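The same profile can also be explored from the terminal. As a sketch, the standard pprof sample indexes let you switch between live memory and lifetime allocations:

$ go tool pprof -sample_index=inuse_space localhost:8080/debug/pprof/heap
$ go tool pprof -sample_index=alloc_space localhost:8080/debug/pprof/heap

At the interactive (pprof) prompt, top and web are the most commonly useful commands.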

CPU

To profile CPU:

$ go tool pprof -http=:8888 localhost:8080/debug/pprof/profile

Note: whereas the heap endpoint captures a snapshot of current usage plus lifetime allocations, the CPU profile samples for 30s and captures CPU usage during that window.
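The sampling window can be adjusted with the standard seconds query parameter, for example to capture 60 seconds:

$ go tool pprof -http=:8888 'localhost:8080/debug/pprof/profile?seconds=60'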

Goroutines

To debug a goroutine leak or deadlock, it can be useful to dump all active goroutines. This can be done with

curl 'http://localhost:8080/debug/pprof/goroutine?debug=2'

Note: output may be very large
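For a more compact view, debug=1 groups identical stacks and prints a count per group, which is often enough to spot a leaking goroutine:

$ curl -s 'http://localhost:8080/debug/pprof/goroutine?debug=1' | head -n 30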

Data Plane

Profile

Istio 1.21 and older

On Istio 1.21 and older, Envoy ships with built-in heap and CPU profilers. The profiler is enabled, left running for some time, and then disabled; during that window CPU samples (for the CPU profile) or allocations (for the heap profile) are captured.

Note: the heap capture is only recording allocations during the time window (unlike Go's heap profile).

export POD=pod-name
export NS=istio-system
export PROFILER="cpu" # Can also be "heap", for a heap profile
kubectl exec -n "$NS" "$POD" -c istio-proxy -- curl -X POST -s "http://localhost:15000/${PROFILER}profiler?enable=y"
sleep 15
kubectl exec -n "$NS" "$POD" -c istio-proxy -- curl -X POST -s "http://localhost:15000/${PROFILER}profiler?enable=n"
rm -rf /tmp/envoy
kubectl cp -n "$NS" "$POD":/var/lib/istio/data /tmp/envoy -c istio-proxy
kubectl cp -n "$NS" "$POD":/lib/x86_64-linux-gnu /tmp/envoy/lib -c istio-proxy
kubectl cp -n "$NS" "$POD":/usr/local/bin/envoy /tmp/envoy/lib/envoy -c istio-proxy
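After the copy completes, /tmp/envoy should contain the profiler output along with the shared libraries and Envoy binary needed to symbolize it; the envoy.prof filename is an assumption taken from the visualization commands below:

ls /tmp/envoy        # profiler output, e.g. envoy.prof
ls /tmp/envoy/lib    # copied shared libraries plus the envoy binary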

Istio 1.22 and newer

A heap dump can be obtained by running curl -o heap http://localhost:15000/heap_dump from within a sidecar. Unlike in 1.21 and older, this is a snapshot of the current heap at a point in time.

Envoy no longer has a built-in CPU profiler. Instead, perf can be used. Find the process ID of Envoy, then run sudo perf record -p <PID> -g. This can be run from anywhere the PID is accessible (on the node, in the same container, in a pod on the same host with hostPID, etc.). The resulting profile can still be visualized with pprof, but you will also need perf_data_converter.
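A minimal sketch of that workflow, assuming perf and perf_data_converter's perf_to_profile binary are installed and the Envoy PID is visible from where you run it (the pgrep match below is an assumption, so verify you have the right process):

# Find the Envoy process and sample CPU with call graphs for ~30s
PID=$(pgrep -f envoy | head -n 1)
sudo perf record -p "$PID" -g -- sleep 30

# Convert perf.data to a pprof profile and open the web UI
perf_to_profile -i perf.data -o envoy-cpu.pb.gz
pprof -http=:8000 envoy-cpu.pb.gz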

Visualize profile

Install pprof, then run:

PPROF_BINARY_PATH=/tmp/envoy/lib/ pprof -pdf /tmp/envoy/lib/envoy /tmp/envoy/envoy.prof

Or, interactively

PPROF_BINARY_PATH=/tmp/envoy/lib/ pprof /tmp/envoy/lib/envoy /tmp/envoy/envoy.prof

Or, through the web UI

PPROF_BINARY_PATH=/tmp/envoy/lib/ pprof -http=localhost:8000 /tmp/envoy/lib/envoy /tmp/envoy/envoy.prof

Ztunnel

Memory profiling is currently not supported.

CPU Profiles can be captured from the pprof endpoint:

$ kubectl port-forward ZTUNNEL-POD -n istio-system 15000 &
$ go tool pprof localhost:15000/debug/pprof/profile

perf can also be used, though you will need to pass --call-graph dwarf, which perf_data_converter does not support (so the resulting profile cannot be visualized with pprof).
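A sketch of that, again assuming the ztunnel PID is visible from where perf runs (the pgrep match is an assumption):

PID=$(pgrep -x ztunnel | head -n 1)
sudo perf record -p "$PID" --call-graph dwarf -- sleep 30
sudo perf report   # inspect directly with perf, since pprof conversion is not supported here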
