Related issue: #1624
Dapr 1.5.1
I'm using Prometheus to monitor Dapr sidecars. One of my services is high-volume and serves many distinct paths, so the path label has very high cardinality and I end up with a huge amount of data (20 MB+ per scrape) and correspondingly high Prometheus memory usage.
We end up with a huge number of time series like the following:
dapr_http_client_completed_count{app_id="appx",dapr_io_enabled="true",instance="10.2.8.2:9090",job="kubernetes-service-endpoints",method="GET",namespace="appx-ns",node="aks-systempool-vmss000003",path="appx/xxxx-3d21-4ed1-9590-xxx/addresses/xxxx-6c74-4c9b-bf11-xxx",service="appx-dapr",status="200",cluster="x"} 1 1641981974242
dapr_http_client_completed_count{app_id="appx",dapr_io_enabled="true",instance="10.2.8.2:9090",job="kubernetes-service-endpoints",method="GET",namespace="appx-ns",node="aks-systempool-vmss000003",path="appx/xxxx-7e42-234d-af2d-xxx/addresses/xxxx-9c97-234d-bc19-xxx",service="appx-dapr",status="200",cluster="x"} 1 1641981974242
dapr_http_client_completed_count{app_id="appx",dapr_io_enabled="true",instance="10.2.8.2:9090",job="kubernetes-service-endpoints",method="GET",namespace="appx-ns",node="aks-systempool-vmss000003",path="appx/xxxx-3040-234d-b350-xxx",service="appx-dapr",status="200",cluster="x"} 1 1641981974242
I can set up a relabel config to drop the label, but Prometheus still ingests 20 MB+ from each pod on every scrape, which is a problem. To work around this we had to disable Dapr metrics completely for our high-volume services.
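For reference, a minimal sketch of that scrape-time workaround (written against standard Prometheus relabel syntax; the job and metric names are taken from the example series above). Either variant only discards data after Prometheus has already pulled the full payload from the sidecar:

```yaml
# Applied under the scrape job (e.g. kubernetes-service-endpoints).
metric_relabel_configs:
  # Option A: drop the high-cardinality path label itself.
  - action: labeldrop
    regex: path
  # Option B: drop the affected Dapr HTTP series entirely.
  # - source_labels: [__name__]
  #   regex: dapr_http_(server|client)_.*
  #   action: drop
```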
I'd suggest either:
- Dropping the path label, or
- Making it configurable (on/off/path depth); a rough sketch of path-depth truncation follows below
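To illustrate the path-depth idea, here is a hypothetical sketch in Go (not Dapr's actual code): the path is truncated to a configurable number of leading segments before being used as a label value, so the example series above would collapse into a single series per depth prefix.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePath keeps at most maxDepth leading segments of a URL path and
// replaces the remainder with a placeholder, bounding label cardinality.
// Purely illustrative; not Dapr's implementation.
func normalizePath(path string, maxDepth int) string {
	trimmed := strings.Trim(path, "/")
	if trimmed == "" {
		return "/"
	}
	segments := strings.Split(trimmed, "/")
	if len(segments) <= maxDepth {
		return "/" + strings.Join(segments, "/")
	}
	return "/" + strings.Join(segments[:maxDepth], "/") + "/_"
}

func main() {
	// With depth 1, both example paths map to the same label value "/appx/_".
	fmt.Println(normalizePath("appx/xxxx-3d21-4ed1-9590-xxx/addresses/xxxx-6c74-4c9b-bf11-xxx", 1))
	fmt.Println(normalizePath("appx/xxxx-3040-234d-b350-xxx", 1))
}
```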
Some interesting discussions in other projects regarding the same type of issue:
prometheus/client_golang#491
zsais/go-gin-prometheus#36