
Releases: grafana/tempo

v2.8.2

06 Aug 20:29
6b92615
  • [CHANGE] Update Go to version 1.24.4. #5323 (@stoewer)
  • [BUGFIX] Add nil check to partitionAssignmentVar. #5198 (@mapno)
  • [BUGFIX] Correct instant query calculation. #5252 (@ruslan-mikhailov)
  • [BUGFIX] Fix tracing context propagation in distributor HTTP write requests. #5312 (@mapno)
  • [BUGFIX] Fix search by trace:id with short trace ID. #5331 (@ruslan-mikhailov)
  • [BUGFIX] Fix bug where most_recent=true wouldn't return the most recent results when the query overlapped ingesters and a few other blocks. #5438 (@joe-elliott)
  • [BUGFIX] Fix panic when counter series is missing during avg_over_time aggregation. #5300 (@ie-pham)

v2.8.1

18 Jun 13:46
dab5bb7
  • [BUGFIX] Fix ingester issue where a hash collision could lead to spans stored incorrectly #5276 (@carles-grafana)

v2.8.0

10 Jun 17:17
ab76780

Breaking Changes

  • [CHANGE] BREAKING CHANGE Change default http-listen-port from 80 to 3200 #4960 (@martialblog)
  • [CHANGE] BREAKING CHANGE Upgrade OTEL Collector to v0.122.1 #4893 (@javiermolinar)
    The name dimension of the tempo_receiver_accepted_spans and tempo_receiver_refused_spans metrics changes from tempo/jaeger_receiver to jaeger/jaeger_receiver.
  • [CHANGE] BREAKING CHANGE Convert SLO metric query_frontend_bytes_processed_per_second from a histogram to a counter as it's more performant. #4748 (@carles-grafana)
  • [CHANGE] BREAKING CHANGE Remove tempo serverless #4599 (@electron0zero)
    The following config options are no longer valid; remove them if they are present in your Tempo config:
    querier:
        search:
            prefer_self: <int>
            external_hedge_requests_at: <duration>
            external_hedge_requests_up_to:  <duration>
            external_backend: <string>
            google_cloud_run: <string>
            external_endpoints: <array>
    
    The Tempo serverless related metrics tempo_querier_external_endpoint_duration_seconds, tempo_querier_external_endpoint_hedged_roundtrips_total, and tempo_feature_enabled have been removed.
  • [CHANGE] BREAKING CHANGE Removed internal_error as a reason from tempo_discarded_spans_total. #4554 (@joe-elliott)
  • [CHANGE] BREAKING CHANGE Enforce max attribute size at event, link, and instrumentation scope. Make config per-tenant.
    Renamed max_span_attr_byte to max_attribute_bytes #4633 (@ie-pham)
  • [CHANGE] BREAKING CHANGE Removed otel jaeger exporter. #4926 (@javiermolinar)

Features

  • [FEATURE] Added most_recent=true query hint to TraceQL to return most recent results. #4238 (@joe-elliott)
  • [FEATURE] TraceQL metrics: sum_over_time #4786 (@javiermolinar)
  • [FEATURE] Add support for topk and bottomk functions for TraceQL metrics #4646 (@electron0zero)
  • [FEATURE] TraceQL: add support for querying by parent span id #4692 (@ie-pham)
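The features above can be sketched in TraceQL. The attribute names and literal values below are illustrative only, and the exact hint and intrinsic syntax should be checked against the TraceQL docs for your version:

```traceql
# most_recent hint: return the newest matching traces rather than the first found
{ resource.service.name = "checkout" } with (most_recent=true)

# sum_over_time: aggregate a numeric attribute into a time series
{ span.bytes_processed > 0 } | sum_over_time(span.bytes_processed)

# topk / bottomk: keep only the highest (or lowest) k series of a metrics query
{ } | rate() by (resource.service.name) | topk(5)

# query spans by their parent span id
{ span:parentID = "0000000000000001" }
```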

Enhancements

  • [ENHANCEMENT] Add throughput SLO and metrics for the TraceByID endpoint. #4668 (@carles-grafana)
    Configurable via the throughput_bytes_slo field; populates the op="traces" label in SLO and throughput metrics.
  • [ENHANCEMENT] Add ability to add artificial delay to push requests #4716 #4899 #5035 (@yvrhdn, @mapno)
  • [ENHANCEMENT] tempo-vulture now generates spans with a parent, instead of only root spans #5154 (@carles-grafana)
  • [ENHANCEMENT] Add default mutex and blocking values. #4979 (@mattdurham)
  • [ENHANCEMENT] Improve Tempo build options #4755 (@stoewer)
  • [ENHANCEMENT] Rewrite traces using rebatching #4690 (@stoewer @joe-elliott)
  • [ENHANCEMENT] Reorder span iterators #4754 (@stoewer)
  • [ENHANCEMENT] Update minio to a newer version #4341 (@javiermolinar)
  • [ENHANCEMENT] Prevent queries in the ingester from blocking flushing traces to disk and memory spikes. #4483 (@joe-elliott)
  • [ENHANCEMENT] Update tempo operational dashboard for new block-builder and v2 traces api #4559 (@mdisibio)
  • [ENHANCEMENT] Improve metrics-generator performance and stability by applying queue back pressure and concurrency #4721 (@mdisibio)
  • [ENHANCEMENT] Improve block-builder performance by flushing blocks concurrently #4565 (@mdisibio)
  • [ENHANCEMENT] Improve block-builder performance #4596 (@mdisibio)
  • [ENHANCEMENT] Improve block-builder performance by not using WAL stage #4647 #4671 (@mdisibio)
  • [ENHANCEMENT] Export new tempo_ingest_group_partition_lag metric from block-builders and metrics-generators #4571 (@mdisibio)
  • [ENHANCEMENT] Overall iterator performance improvement by using max definition level to ignore parts of the RowNumber while nexting. #4753 (@joe-elliott)
  • [ENHANCEMENT] Use distroless base container images for improved security #4556 (@carles-grafana)
  • [ENHANCEMENT] Rhythm: add block builder to resources dashboard #4556 (@javiermolinar)
  • [ENHANCEMENT] Upgrade prometheus to version 3.1.0 #4805 (@javiermolinar)
  • [ENHANCEMENT] Update dskit to latest version #4681 (@javiermolinar) #4865 (@javiermolinar)
  • [ENHANCEMENT] Increase query-frontend default batch size #4844 (@javiermolinar)
  • [ENHANCEMENT] Improve TraceQL perf by reverting EqualRowNumber to an inlineable function. #4705 (@joe-elliott)
  • [ENHANCEMENT] Rhythm: Implement MaxBytesPerCycle #4835 (@javiermolinar)
  • [ENHANCEMENT] Rhythm: fair partition consumption in blockbuilders #4655 (@javiermolinar)
  • [ENHANCEMENT] Rhythm: retry on commit error #4874 (@javiermolinar)
  • [ENHANCEMENT] Skip creating one-span traces for every pushed span in the metrics generator #4844 (@javiermolinar)
  • [ENHANCEMENT] Improve Tempo / Writes dashboard by adding a kafka panel #4947 (@javiermolinar)
  • [ENHANCEMENT] Improve memcached memory usage by pooling buffers #4970 (@joe-elliott)
  • [ENHANCEMENT] metrics-generator: allow skipping localblocks and consuming from a different source of data #4686 (@flxbk)
  • [ENHANCEMENT] compactor: restore dedicated columns logging for completed blocks #4832 (@edgarkz)
  • [ENHANCEMENT] Compactor: pooling changes to reduce memory usage #4985 (@mdisibio)
  • [ENHANCEMENT] distributor: add IPv6 support #4840 (@gjacquet)
  • [ENHANCEMENT] Support TraceQL Metrics checks in Vulture #4886 (@ruslan-mikhailov)
  • [ENHANCEMENT] Add memcached to the resources dashboard #5049 (@javiermolinar)
  • [ENHANCEMENT] Include partition owned metric for blockbuilders #5042 (@javiermolinar)
  • [ENHANCEMENT] Query-frontend: add msg to the log lines #4975 (@jmichalek132)
  • [ENHANCEMENT] Host Info Processor: track host identifying resource attribute in metric #5152 (@rlankfo)
  • [ENHANCEMENT] Vulture checks recent traces #5157 (@ruslan-mikhailov)
  • [ENHANCEMENT] Add jitter in backendworker to avoid thundering herd from workers [#515...

v2.8.0-rc.1

05 Jun 18:02
88ee115
Pre-release

Bugfixes

  • [BUGFIX] Excluded nestedSetParent and other values from compare() function #5196 (@mdisibio)
  • [BUGFIX] Fix distributor issue where a hash collision could lead to spans stored incorrectly #5186 (@mdisibio)
  • [BUGFIX] Fix structural metrics rate by aggregation #5204 (@zalegrala)
  • [BUGFIX] TraceQL Metrics: right exemplars for histogram and quantiles #5145 (@ruslan-mikhailov)

v2.8.0-rc.0

29 May 14:14
53b0541
Pre-release

Breaking Changes

  • [CHANGE] BREAKING CHANGE Change default http-listen-port from 80 to 3200 #4960 (@martialblog)
  • [CHANGE] BREAKING CHANGE Upgrade OTEL Collector to v0.122.1 #4893 (@javiermolinar)
    The name dimension of the tempo_receiver_accepted_spans and tempo_receiver_refused_spans metrics changes from tempo/jaeger_receiver to jaeger/jaeger_receiver.
  • [CHANGE] BREAKING CHANGE Convert SLO metric query_frontend_bytes_processed_per_second from a histogram to a counter as it's more performant. #4748 (@carles-grafana)
  • [CHANGE] BREAKING CHANGE Remove tempo serverless #4599 (@electron0zero)
    The following config options are no longer valid; remove them if they are present in your Tempo config:
    querier:
        search:
            prefer_self: <int>
            external_hedge_requests_at: <duration>
            external_hedge_requests_up_to:  <duration>
            external_backend: <string>
            google_cloud_run: <string>
            external_endpoints: <array>
    
    The Tempo serverless related metrics tempo_querier_external_endpoint_duration_seconds, tempo_querier_external_endpoint_hedged_roundtrips_total, and tempo_feature_enabled have been removed.
  • [CHANGE] BREAKING CHANGE Removed internal_error as a reason from tempo_discarded_spans_total. #4554 (@joe-elliott)
  • [CHANGE] BREAKING CHANGE Enforce max attribute size at event, link, and instrumentation scope. Make config per-tenant.
    Renamed max_span_attr_byte to max_attribute_bytes #4633 (@ie-pham)
  • [CHANGE] BREAKING CHANGE Removed otel jaeger exporter. #4926 (@javiermolinar)

Features

  • [FEATURE] Added most_recent=true query hint to TraceQL to return most recent results. #4238 (@joe-elliott)
  • [FEATURE] TraceQL metrics: sum_over_time #4786 (@javiermolinar)
  • [FEATURE] Add support for topk and bottomk functions for TraceQL metrics #4646 (@electron0zero)
  • [FEATURE] TraceQL: add support for querying by parent span id #4692 (@ie-pham)

Enhancements

  • [ENHANCEMENT] Add throughput SLO and metrics for the TraceByID endpoint. #4668 (@carles-grafana)
    Configurable via the throughput_bytes_slo field; populates the op="traces" label in SLO and throughput metrics.
  • [ENHANCEMENT] Add ability to add artificial delay to push requests #4716 #4899 #5035 (@yvrhdn, @mapno)
  • [ENHANCEMENT] tempo-vulture now generates spans with a parent, instead of only root spans #5154 (@carles-grafana)
  • [ENHANCEMENT] Add default mutex and blocking values. #4979 (@mattdurham)
  • [ENHANCEMENT] Improve Tempo build options #4755 (@stoewer)
  • [ENHANCEMENT] Rewrite traces using rebatching #4690 (@stoewer @joe-elliott)
  • [ENHANCEMENT] Reorder span iterators #4754 (@stoewer)
  • [ENHANCEMENT] Update minio to a newer version #4341 (@javiermolinar)
  • [ENHANCEMENT] Prevent queries in the ingester from blocking flushing traces to disk and memory spikes. #4483 (@joe-elliott)
  • [ENHANCEMENT] Update tempo operational dashboard for new block-builder and v2 traces api #4559 (@mdisibio)
  • [ENHANCEMENT] Improve metrics-generator performance and stability by applying queue back pressure and concurrency #4721 (@mdisibio)
  • [ENHANCEMENT] Improve block-builder performance by flushing blocks concurrently #4565 (@mdisibio)
  • [ENHANCEMENT] Improve block-builder performance #4596 (@mdisibio)
  • [ENHANCEMENT] Improve block-builder performance by not using WAL stage #4647 #4671 (@mdisibio)
  • [ENHANCEMENT] Export new tempo_ingest_group_partition_lag metric from block-builders and metrics-generators #4571 (@mdisibio)
  • [ENHANCEMENT] Overall iterator performance improvement by using max definition level to ignore parts of the RowNumber while nexting. #4753 (@joe-elliott)
  • [ENHANCEMENT] Use distroless base container images for improved security #4556 (@carles-grafana)
  • [ENHANCEMENT] Rhythm: add block builder to resources dashboard #4556 (@javiermolinar)
  • [ENHANCEMENT] Upgrade prometheus to version 3.1.0 #4805 (@javiermolinar)
  • [ENHANCEMENT] Update dskit to latest version #4681 (@javiermolinar) #4865 (@javiermolinar)
  • [ENHANCEMENT] Increase query-frontend default batch size #4844 (@javiermolinar)
  • [ENHANCEMENT] Improve TraceQL perf by reverting EqualRowNumber to an inlineable function. #4705 (@joe-elliott)
  • [ENHANCEMENT] Rhythm: Implement MaxBytesPerCycle #4835 (@javiermolinar)
  • [ENHANCEMENT] Rhythm: fair partition consumption in blockbuilders #4655 (@javiermolinar)
  • [ENHANCEMENT] Rhythm: retry on commit error #4874 (@javiermolinar)
  • [ENHANCEMENT] Skip creating one-span traces for every pushed span in the metrics generator #4844 (@javiermolinar)
  • [ENHANCEMENT] Improve Tempo / Writes dashboard by adding a kafka panel #4947 (@javiermolinar)
  • [ENHANCEMENT] Improve memcached memory usage by pooling buffers #4970 (@joe-elliott)
  • [ENHANCEMENT] metrics-generator: allow skipping localblocks and consuming from a different source of data #4686 (@flxbk)
  • [ENHANCEMENT] compactor: restore dedicated columns logging for completed blocks #4832 (@edgarkz)
  • [ENHANCEMENT] Compactor: pooling changes to reduce memory usage #4985 (@mdisibio)
  • [ENHANCEMENT] distributor: add IPv6 support #4840 (@gjacquet)
  • [ENHANCEMENT] Support TraceQL Metrics checks in Vulture #4886 (@ruslan-mikhailov)
  • [ENHANCEMENT] Add memcached to the resources dashboard #5049 (@javiermolinar)
  • [ENHANCEMENT] Include partition owned metric for blockbuilders #5042 (@javiermolinar)
  • [ENHANCEMENT] Query-frontend: add msg to the log lines #4975 (@jmichalek132)
  • [ENHANCEMENT] Host Info Processor: track host identifying resource attribute in metric #5152 (@rlankfo)
  • [ENHANCEMENT] Vulture checks recent traces #5157 (@ruslan-mikhailov)
  • [ENHANCEMENT] Add jitter in backendworker to avoid thundering herd from workers [#515...

v2.7.2

25 Mar 18:41
b33945a

Bugfixes

  • [BUGFIX] Fix rare panic that occurred when a querier modified results from ingesters/generators while they were being marshalled to proto. #4790 (@joe-elliott)
    This bug also impacted query correctness on recent trace data by returning incomplete results before they were ready.

Thanks to @pavleec for filing an issue bringing the query correctness element of this bug to our attention as well as testing a fix.

v2.7.1

14 Feb 15:04
35cf980

Changes

Thanks to @edgarkz for championing this issue and providing details of the various compression settings on their install.

v2.7.0

13 Jan 18:11
b0da6b4

Deprecations

  • Tempo serverless features are now deprecated and will be removed in an upcoming release #4017 @electron0zero

Breaking Changes

  • Added maximum spans per span set to prevent queries from overwhelming read path. Users can set max_spans_per_span_set to 0 to obtain the old behavior. #4275 (@carles-grafana)
  • The dynamic injection of X-Scope-OrgID header for metrics generator remote-writes is changed. If the header is already set in per-tenant overrides or global tempo configuration, then it is honored and not overwritten. #4021 (@mdisibio)
  • Migrate from OpenTracing to OpenTelemetry instrumentation. Removed the use_otel_tracer configuration option. Use the OpenTelemetry environment variables to configure the span exporter #4028,#3646 (@andreasgerstmayr)
    To continue using the Jaeger exporter, use the following environment variable: OTEL_TRACES_EXPORTER=jaeger
  • Update the Open-Telemetry dependencies to v0.116.0 #4466 (@yvrhdn)
    After this update the Open-Telemetry Collector receiver will bind to localhost instead of all interfaces (0.0.0.0).
    Because of this, Tempo installations running inside Docker have to update the address they listen on.
    For more details on this change, see #4465
    For more information about the security risk this change addresses, see https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
  • Removed querier_forget_delay setting from the frontend. This configuration option did nothing. #3996 (@joe-elliott)
  • Use Prometheus fast regexp for TraceQL regular expression matchers. #4329 (@joe-elliott)
    All regular expression matchers will now be fully anchored: span.foo =~ "bar" will be evaluated as span.foo =~ "^bar$".
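As a concrete consequence of the anchoring change, a matcher that previously behaved like a substring search now requires explicit wildcards (the attribute name here is illustrative):

```traceql
# Before 2.7 this could match values merely containing "prod"
{ span.environment =~ "prod" }

# From 2.7 on the matcher is anchored, i.e. equivalent to:
{ span.environment =~ "^prod$" }

# To keep substring semantics, spell the wildcards out:
{ span.environment =~ ".*prod.*" }
```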

Changes

  • [CHANGE] Disable gRPC compression in the querier and distributor for performance reasons #4429 (@carles-grafana)
    If you would like to re-enable it, we recommend 'snappy'. Use the following settings:
    ingester_client:
        grpc_client_config:
            grpc_compression: "snappy"
    metrics_generator_client:
        grpc_client_config:
            grpc_compression: "snappy"
    querier:
        frontend_worker:
            grpc_client_config:
                grpc_compression: "snappy"
  • [CHANGE] slo: include request cancellations within SLO #4355 (@electron0zero)
    Request cancellations are exposed under the result label in tempo_query_frontend_queries_total and tempo_query_frontend_queries_within_slo_total, with completed or canceled values to differentiate between completed and canceled requests.
  • [CHANGE] update default config values to better align with production workloads #4340 (@electron0zero)
  • [CHANGE] tempo-cli: add support for /api/v2/traces endpoint #4127 (@electron0zero)
    BREAKING CHANGE The tempo-cli now uses the /api/v2/traces endpoint by default;
    use the --v1 flag to fall back to the /api/traces endpoint, which was the default in previous versions.
  • [CHANGE] TraceByID: don't allow concurrent_shards greater than query_shards. #4074 (@electron0zero)
  • [CHANGE] BREAKING CHANGE The dynamic injection of X-Scope-OrgID header for metrics generator remote-writes is changed. If the header is already set in per-tenant overrides or global tempo configuration, then it is honored and not overwritten. #4021 (@mdisibio)
  • [CHANGE] BREAKING CHANGE Migrate from OpenTracing to OpenTelemetry instrumentation. Removed the use_otel_tracer configuration option. Use the OpenTelemetry environment variables to configure the span exporter #4028,#3646 (@andreasgerstmayr)
    To continue using the Jaeger exporter, use the following environment variable: OTEL_TRACES_EXPORTER=jaeger.
  • [CHANGE] No longer send the final diff in GRPC streaming. Instead we rely on the streamed intermediate results. #4062 (@joe-elliott)
  • [CHANGE] Update Go to 1.23.3 #4146 #4147 #4380 (@javiermolinar @mdisibio)
  • [CHANGE] Return 422 for TRACE_TOO_LARGE queries #4160 (@zalegrala)
  • [CHANGE] Tighten file permissions #4251 (@zalegrala)
  • [CHANGE] Drop max live traces log message and rate limit trace too large. #4418 (@joe-elliott)
  • [CHANGE] Update the Open-Telemetry dependencies to v0.116.0 #4466 (@yvrhdn)
    BREAKING CHANGE After this update the Open-Telemetry Collector receiver will bind to localhost instead of all interfaces (0.0.0.0).
    Because of this, Tempo installations running inside Docker have to update the address they listen on.
    For more details on this change, see #4465
    For more information about the security risk this change addresses, see https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks

Enhancements

  • [ENHANCEMENT] TraceQL: Add range condition for byte predicates #4198 (@ie-pham)
  • [ENHANCEMENT] Add throughput and SLO metrics in the tags and tag values endpoints #4148 (@electron0zero)
  • [ENHANCEMENT] BREAKING CHANGE Add maximum spans per span set. Users can set max_spans_per_span_set to 0 to obtain the old behavior. #4275 (@carles-grafana)
  • [ENHANCEMENT] Add query-frontend limit for max length of query expression #4397 (@electron0zero)
  • [ENHANCEMENT] distributor: return trace id length when it is invalid #4407 (@carles-grafana)
  • [ENHANCEMENT] Changed log level from INFO to DEBUG for the TempoDB Find operation using traceId to reduce excessive/unwanted logs in log search. #4179 (@Aki0x137)
  • [ENHANCEMENT] Pushdown collection of results from generators in the querier #4119 (@electron0zero)
  • [ENHANCEMENT] The span multiplier now also sources its value from the resource attributes. #4210 (@JunhoeKim)
  • [ENHANCEMENT] TraceQL: Attribute iterators collect matched array values #3867 (@electron0zero, @stoewer)
  • [ENHANCEMENT] Allow returning partial traces that exceed the MaxBytes limit for V2 #3941 (@javiermolinar)
  • [ENHANCEMENT] Added new middleware to validate request query values #3993 (@javiermolinar)
  • [ENHANCEMENT] Prevent massive allocations in the frontend if there is not sufficient pressure from the query pipeline. #3996 (@joe-elliott)
    BREAKING CHANGE Removed querier_forget_delay setting from the frontend. This configuration option did nothing.
  • [ENHANCEMENT] Update metrics-generator config in Tempo distributed docker compose example to serve TraceQL metrics #4003 (@javiermolinar)
  • [ENHANCEMENT] Reduce allocs related to marshalling dedicated columns repeatedly in the query frontend. #4007 (@joe-elliott)
  • [ENHANCEMENT] Improve performance of TraceQL queries #4114 (@mdisibio)
  • [ENHANCEMENT] Improve performance of TraceQL queries #4163 (@mdisibio)
  • [ENHANCEMENT] Improve performance of some TraceQL queries using select() operation #4438 (@mdisibio)
  • [ENHANCEMENT] Reduce memory usage of classic histograms in the span-metrics and service-graphs processors #4232 (@mdisibio)
  • [ENHANCEMENT] ...

v2.7.0-rc.0

03 Jan 17:19
9e8f582
Pre-release

Deprecations

  • Tempo serverless features are now deprecated and will be removed in an upcoming release #4017 @electron0zero

Breaking Changes

  • Added maximum spans per span set to prevent queries from overwhelming read path. Users can set max_spans_per_span_set to 0 to obtain the old behavior. #4275 (@carles-grafana)
  • The dynamic injection of X-Scope-OrgID header for metrics generator remote-writes is changed. If the header is already set in per-tenant overrides or global tempo configuration, then it is honored and not overwritten. #4021 (@mdisibio)
  • Migrate from OpenTracing to OpenTelemetry instrumentation. Removed the use_otel_tracer configuration option. Use the OpenTelemetry environment variables to configure the span exporter #4028,#3646 (@andreasgerstmayr)
    To continue using the Jaeger exporter, use the following environment variable: OTEL_TRACES_EXPORTER=jaeger
  • Update the Open-Telemetry dependencies to v0.116.0 #4466 (@yvrhdn)
    After this update the Open-Telemetry Collector receiver will bind to localhost instead of all interfaces (0.0.0.0).
    Because of this, Tempo installations running inside Docker have to update the address they listen on.
    For more details on this change, see #4465
    For more information about the security risk this change addresses, see https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
  • Removed querier_forget_delay setting from the frontend. This configuration option did nothing. #3996 (@joe-elliott)
  • Use Prometheus fast regexp for TraceQL regular expression matchers. #4329 (@joe-elliott)
    All regular expression matchers will now be fully anchored: span.foo =~ "bar" will be evaluated as span.foo =~ "^bar$".

Changes

  • [CHANGE] Disable gRPC compression in the querier and distributor for performance reasons #4429 (@carles-grafana)
    If you would like to re-enable it, we recommend 'snappy'. Use the following settings:
    ingester_client:
        grpc_client_config:
            grpc_compression: "snappy"
    metrics_generator_client:
        grpc_client_config:
            grpc_compression: "snappy"
    querier:
        frontend_worker:
            grpc_client_config:
                grpc_compression: "snappy"
  • [CHANGE] slo: include request cancellations within SLO #4355 (@electron0zero)
    Request cancellations are exposed under the result label in tempo_query_frontend_queries_total and tempo_query_frontend_queries_within_slo_total, with completed or canceled values to differentiate between completed and canceled requests.
  • [CHANGE] update default config values to better align with production workloads #4340 (@electron0zero)
  • [CHANGE] tempo-cli: add support for /api/v2/traces endpoint #4127 (@electron0zero)
    BREAKING CHANGE The tempo-cli now uses the /api/v2/traces endpoint by default;
    use the --v1 flag to fall back to the /api/traces endpoint, which was the default in previous versions.
  • [CHANGE] TraceByID: don't allow concurrent_shards greater than query_shards. #4074 (@electron0zero)
  • [CHANGE] BREAKING CHANGE The dynamic injection of X-Scope-OrgID header for metrics generator remote-writes is changed. If the header is already set in per-tenant overrides or global tempo configuration, then it is honored and not overwritten. #4021 (@mdisibio)
  • [CHANGE] BREAKING CHANGE Migrate from OpenTracing to OpenTelemetry instrumentation. Removed the use_otel_tracer configuration option. Use the OpenTelemetry environment variables to configure the span exporter #4028,#3646 (@andreasgerstmayr)
    To continue using the Jaeger exporter, use the following environment variable: OTEL_TRACES_EXPORTER=jaeger.
  • [CHANGE] No longer send the final diff in GRPC streaming. Instead we rely on the streamed intermediate results. #4062 (@joe-elliott)
  • [CHANGE] Update Go to 1.23.3 #4146 #4147 #4380 (@javiermolinar @mdisibio)
  • [CHANGE] Return 422 for TRACE_TOO_LARGE queries #4160 (@zalegrala)
  • [CHANGE] Tighten file permissions #4251 (@zalegrala)
  • [CHANGE] Drop max live traces log message and rate limit trace too large. #4418 (@joe-elliott)
  • [CHANGE] Update the Open-Telemetry dependencies to v0.116.0 #4466 (@yvrhdn)
    BREAKING CHANGE After this update the Open-Telemetry Collector receiver will bind to localhost instead of all interfaces (0.0.0.0).
    Because of this, Tempo installations running inside Docker have to update the address they listen on.
    For more details on this change, see #4465
    For more information about the security risk this change addresses, see https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks

Enhancements

  • [ENHANCEMENT] TraceQL: Add range condition for byte predicates #4198 (@ie-pham)
  • [ENHANCEMENT] Add throughput and SLO metrics in the tags and tag values endpoints #4148 (@electron0zero)
  • [ENHANCEMENT] BREAKING CHANGE Add maximum spans per span set. Users can set max_spans_per_span_set to 0 to obtain the old behavior. #4275 (@carles-grafana)
  • [ENHANCEMENT] Add query-frontend limit for max length of query expression #4397 (@electron0zero)
  • [ENHANCEMENT] distributor: return trace id length when it is invalid #4407 (@carles-grafana)
  • [ENHANCEMENT] Changed log level from INFO to DEBUG for the TempoDB Find operation using traceId to reduce excessive/unwanted logs in log search. #4179 (@Aki0x137)
  • [ENHANCEMENT] Pushdown collection of results from generators in the querier #4119 (@electron0zero)
  • [ENHANCEMENT] The span multiplier now also sources its value from the resource attributes. #4210 (@JunhoeKim)
  • [ENHANCEMENT] TraceQL: Attribute iterators collect matched array values #3867 (@electron0zero, @stoewer)
  • [ENHANCEMENT] Allow returning partial traces that exceed the MaxBytes limit for V2 #3941 (@javiermolinar)
  • [ENHANCEMENT] Added new middleware to validate request query values #3993 (@javiermolinar)
  • [ENHANCEMENT] Prevent massive allocations in the frontend if there is not sufficient pressure from the query pipeline. #3996 (@joe-elliott)
    BREAKING CHANGE Removed querier_forget_delay setting from the frontend. This configuration option did nothing.
  • [ENHANCEMENT] Update metrics-generator config in Tempo distributed docker compose example to serve TraceQL metrics #4003 (@javiermolinar)
  • [ENHANCEMENT] Reduce allocs related to marshalling dedicated columns repeatedly in the query frontend. #4007 (@joe-elliott)
  • [ENHANCEMENT] Improve performance of TraceQL queries #4114 (@mdisibio)
  • [ENHANCEMENT] Improve performance of TraceQL queries #4163 (@mdisibio)
  • [ENHANCEMENT] Improve performance of some TraceQL queries using select() operation #4438 (@mdisibio)
  • [ENHANCEMENT] Reduce memory usage of classic histograms in the span-metrics and service-graphs processors #4232 (@mdisibio)
  • [ENHANCEMENT] ...

v2.6.1

22 Oct 18:34
f97027a

Changes

  • [CHANGE] BREAKING CHANGE tempo-query is no longer a Jaeger instance with grpcPlugin. It's now a standalone server, serving a gRPC API for Jaeger on 0.0.0.0:7777 by default. #3840 (@frzifus)

Enhancements

  • [ENHANCEMENT] Register gRPC health server to tempo-query #4178 (@frzifus)
  • [ENHANCEMENT] Support Tempo on IBM s390x #4175 (@pavolloffay)
  • [ENHANCEMENT] tempo-query: separate tls settings for server and client #4177 (@frzifus)
  • [ENHANCEMENT] Speedup tempo-query trace search by allowing parallel queries #4159 (@pavolloffay)