New bufferer component #5330
Conversation
In general, does it make sense to move responsibility for timing and actions from the Bufferer to the individual instances? This would make the instances more self-contained and potentially avoid a good chunk of the locking. It would be harder to do global cuts, but we could do tenant-specific ones then? Or trigger via a channel with a timer in the Bufferer, as in the sketch below.
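A minimal Go sketch of the channel-plus-timer option, assuming instances are per-tenant; the `instance` and `bufferer` types and the cut logic are hypothetical placeholders, not the PR's actual code:

```go
package bufferer

import "time"

// instance is a hypothetical per-tenant buffer that owns its own cut
// logic and is triggered over a channel, keeping it self-contained.
type instance struct {
	tenantID string
	cutCh    chan struct{}
}

func (i *instance) run() {
	for range i.cutCh {
		// Placeholder: cut this tenant's live traces into its head block.
	}
}

// bufferer keeps a single global timer; tenant-specific cuts are still
// possible by signaling one instance's channel instead of all of them.
type bufferer struct {
	instances []*instance
}

func (b *bufferer) loop(interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		for _, inst := range b.instances {
			select {
			case inst.cutCh <- struct{}{}: // trigger a cut
			default: // instance is still cutting; skip this tick
			}
		}
	}
}
```

Under this design only the trigger is global; all buffer state stays inside each instance, so no shared lock is needed for timing.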
From chatting, would calling …
Can we document the failure modes and what happens in each? Do we need 100% data fidelity, or is 99.9% acceptable, since the data will be correctly handled by long-term storage regardless?
Assuming failures happen, is it better to return incorrect data or no data?
Development has been moved to a dev branch. See #5430
What this PR does:
The bufferer is a Kafka-based ingestion service that consumes trace data from Kafka topics instead of receiving it via gRPC like the traditional ingester. It reads serialized traces from Kafka and processes them through the same data pipeline as the ingester (live traces → head blocks → WAL → complete blocks).
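As a rough illustration of that flow, here is a minimal consume-loop sketch using franz-go (a common Go Kafka client); `pushToLiveTraces` and the record-key-carries-tenant convention are hypothetical stand-ins for the PR's real pipeline entry point:

```go
package bufferer

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

// pushToLiveTraces is a hypothetical hook into the shared pipeline:
// live traces -> head blocks -> WAL -> complete blocks.
func pushToLiveTraces(tenantID string, serializedTrace []byte) {}

func consume(ctx context.Context, brokers []string, topic string) error {
	client, err := kgo.NewClient(
		kgo.SeedBrokers(brokers...),
		kgo.ConsumeTopics(topic),
	)
	if err != nil {
		return err
	}
	defer client.Close()

	for ctx.Err() == nil {
		fetches := client.PollFetches(ctx)
		if fetches.IsClientClosed() {
			return nil
		}
		fetches.EachRecord(func(r *kgo.Record) {
			// Assumed layout: key identifies the tenant, value is the
			// serialized trace.
			pushToLiveTraces(string(r.Key), r.Value)
		})
	}
	return ctx.Err()
}
```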
Checklist
- [ ] CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]
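For illustration, a hypothetical entry for this PR in that style would read:

```
* [FEATURE] Add bufferer, a Kafka-based ingestion component. #5330
```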