The batch size of a remote write request is one of the main levers for achieving the desired throughput without OOMing Prometheus. We should double-check that metadata appended to the queue manager does not count against the maximum number of samples (or exemplars, histograms, etc.) that can go into a batch/write request.
I don't think this is set up correctly at the moment. On the rw2.0 branch, when we append via queue.Append, metadata currently counts against the batch size, because we append it to the batch and then check whether the length equals the capacity. It may be that we never actually call Append with metadata; we should be calling AppendMetadata for 1.0 or StoreMetadata for 2.0. If so, the metadata field on the timeSeries struct in queue_manager.go could potentially be removed, or at least clarified with a comment.
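
For illustration, here is a minimal, self-contained sketch of the batching pattern described above. This is not the actual queue_manager.go code: the names (`timeSeriesEntry`, `entryMetadata`, `AppendExcludingMetadata`, `batchSize`, etc.) are hypothetical, and the real `timeSeries`/`queue` types carry more state. It only shows how appending metadata into the same batch and flushing when the batch is full makes metadata count against the sample budget, plus one possible shape of a check that excludes it.

```go
package main

import "fmt"

// entryType distinguishes what a queue entry carries. The names are
// illustrative, not the identifiers used in queue_manager.go.
type entryType int

const (
	entrySample entryType = iota
	entryExemplar
	entryHistogram
	entryMetadata
)

// timeSeriesEntry stands in for the timeSeries struct mentioned above;
// labels, values and the metadata payload are omitted.
type timeSeriesEntry struct {
	kind entryType
}

type queue struct {
	batchSize int
	batch     []timeSeriesEntry
	out       chan []timeSeriesEntry
}

func newQueue(batchSize, buffered int) *queue {
	return &queue{
		batchSize: batchSize,
		batch:     make([]timeSeriesEntry, 0, batchSize),
		out:       make(chan []timeSeriesEntry, buffered),
	}
}

// Append mirrors the behaviour described in the issue: every entry, metadata
// included, is appended and the batch flushes once it reaches batchSize, so
// metadata effectively shrinks the number of samples per write request.
func (q *queue) Append(e timeSeriesEntry) {
	q.batch = append(q.batch, e)
	if len(q.batch) == q.batchSize {
		q.flush()
	}
}

// AppendExcludingMetadata is one hypothetical alternative: only samples,
// exemplars and histograms count toward the flush threshold.
func (q *queue) AppendExcludingMetadata(e timeSeriesEntry) {
	q.batch = append(q.batch, e)
	counted := 0
	for _, b := range q.batch {
		if b.kind != entryMetadata {
			counted++
		}
	}
	if counted == q.batchSize {
		q.flush()
	}
}

func (q *queue) flush() {
	q.out <- q.batch
	q.batch = make([]timeSeriesEntry, 0, q.batchSize)
}

func main() {
	q := newQueue(3, 1)
	q.Append(timeSeriesEntry{kind: entrySample})
	q.Append(timeSeriesEntry{kind: entryMetadata}) // occupies a "sample" slot today
	q.Append(timeSeriesEntry{kind: entrySample})
	// The batch flushed after 3 entries but carries only 2 samples.
	fmt.Println("flushed batches:", len(q.out))
}
```

If metadata ends up only going through AppendMetadata/StoreMetadata as suggested above, the Append path never sees it and none of this extra accounting would be needed; that would be another argument for removing (or clearly documenting) the metadata field on timeSeries.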