Comparing changes

base repository: redpanda-data/redpanda
base: v24.3.15
head repository: redpanda-data/redpanda
compare: v24.3.16
  • 8 commits
  • 9 files changed
  • 5 contributors

Commits on Jun 12, 2025

  1. storage: call reserve() in storage::range()

    For some reason, this `reserve()` call was dropped in commit [1] when the
    container type was switched from `std::vector<>` to `chunked_vector<>`.
    
    Fix the regression by adding the call to `reserve()` back to the function
    before calling `push_back()` in a loop.
    
    [1]: a59cdd8
    
    (cherry picked from commit 75e2239)
    WillemKauf authored and vbotbuildovich committed Jun 12, 2025
    commit cf2a032
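
    To make the pattern concrete, here is a minimal sketch of reserving
    capacity before a `push_back()` loop, as this commit restores. It uses
    `std::vector<>` as a stand-in for Redpanda's `chunked_vector<>`, and the
    function name and offset types are hypothetical, not the actual
    `storage::range()` signature.

    ```cpp
    #include <cstdint>
    #include <vector>

    // Hypothetical stand-in for storage::range(): builds a container of offsets.
    // In Redpanda the container is chunked_vector<>, shown here as std::vector<>.
    // Precondition: first <= last.
    std::vector<int64_t> make_range(int64_t first, int64_t last) {
        std::vector<int64_t> offsets;
        // Reserving up front avoids repeated reallocations while the loop
        // below push_back()s each element; this is the call the commit restores.
        offsets.reserve(static_cast<size_t>(last - first + 1));
        for (int64_t o = first; o <= last; ++o) {
            offsets.push_back(o);
        }
        return offsets;
    }
    ```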

Commits on Jun 17, 2025

  1. Merge pull request #26444 from vbotbuildovich/backport-pr-26439-v24.3.x-183

    [v24.3.x] `storage`: call `reserve()` in `storage::range()`
    piyushredpanda authored Jun 17, 2025
    commit 3a13b4d
  2. m/record: added very basic defensive check to prevent crashes

    Redpanda stores `model::record_batch` records as opaque bytes. The bytes
    are lazily materialized into individual `model::record` instances when
    iterating over the batch records. During record materialization the
    record header vector is parsed. The vector size is stored as a
    variable-length encoded integer. If the data in the record buffer is
    invalid, the size may be incorrectly deserialized, leading to a very
    large allocation. Added a basic defensive check that compares the
    requested vector size with the number of bytes left in the buffer. If
    the size is greater, an exception is thrown, as we know that a header is
    never smaller than one byte.
    
    Signed-off-by: Michał Maślanka <michal@redpanda.com>
    (cherry picked from commit 7f109db)
    mmaslankaprv authored and vbotbuildovich committed Jun 17, 2025
    commit 98bbc12
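
    A minimal sketch of this kind of defensive check: after decoding the
    variable-length header count, compare it against the bytes remaining in
    the buffer, since a serialized header can never be smaller than one byte.
    The varint decoder and buffer type below are simplified assumptions, not
    Redpanda's actual `model::record` parsing code.

    ```cpp
    #include <cstdint>
    #include <stdexcept>
    #include <string_view>

    // Simplified unsigned varint (LEB128-style) decoder; Redpanda's record
    // parser uses its own variable-length integer implementation.
    uint64_t read_varint(std::string_view& buf) {
        uint64_t value = 0;
        int shift = 0;
        while (!buf.empty()) {
            auto byte = static_cast<uint8_t>(buf.front());
            buf.remove_prefix(1);
            value |= static_cast<uint64_t>(byte & 0x7f) << shift;
            if ((byte & 0x80) == 0) {
                return value;
            }
            shift += 7;
            if (shift > 63) {
                throw std::runtime_error("varint too long");
            }
        }
        throw std::runtime_error("truncated varint");
    }

    // Defensive check: the decoded header count cannot exceed the number of
    // bytes left in the buffer, because each header occupies at least one byte.
    uint64_t read_header_count(std::string_view& buf) {
        uint64_t count = read_varint(buf);
        if (count > buf.size()) {
            throw std::out_of_range("header count exceeds remaining buffer size");
        }
        return count;
    }
    ```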

Commits on Jun 18, 2025

  1. Merge pull request #26494 from vbotbuildovich/backport-pr-26482-v24.3.x-914

    [v24.3.x] m/record: added very basic defensive check to prevent crashes
    piyushredpanda authored Jun 18, 2025
    commit 5da7317

Commits on Jun 19, 2025

  1. rpk: Add --upload-url for remote-bundle

    (cherry picked from commit 223f63c)
    JFlath authored and vbotbuildovich committed Jun 19, 2025
    commit ab00dc3
  2. Merge pull request #26514 from vbotbuildovich/backport-pr-26399-v24.3.x-860

    [v24.3.x] rpk: Remote bundle upload
    r-vasquez authored Jun 19, 2025
    commit bfada7b

Commits on Jun 21, 2025

  1. kafka: use chunked_vector in describe_groups_handler::handle

    We saw an occurrence of an oversized allocation in the wild here.
    
    Confusingly enough, the underlying data type of `describe_groups_response_data`
    is a `chunked_vector` already, which means we were just moving entries from
    an `std::vector` into a `chunked_vector` here anyway.
    
    Use a `chunked_vector<ss::future<described_group>>` for `described`
    and `std::move()` assign it to `response.data.groups` to avoid
    future potential for an oversized allocation here.
    
    `ssx::when_all_succeed<>()` must be used, as the type is no longer
    `std::vector<>`.
    
    (cherry picked from commit 2082d12)
    WillemKauf authored and vbotbuildovich committed Jun 21, 2025
    commit d524967
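
    For context on why this avoids the oversized allocation: a contiguous
    `std::vector` needs a single backing allocation that grows with the total
    element count, while a chunked container allocates fixed-size chunks. The
    sketch below uses a toy chunked container and plain values to show the
    move-assignment pattern the commit describes; Redpanda's real
    `chunked_vector`, `ss::future`, and `ssx::when_all_succeed<>()` are
    Seastar/Redpanda types with different interfaces.

    ```cpp
    #include <cstddef>
    #include <memory>
    #include <utility>
    #include <vector>

    // Toy chunked container: elements live in fixed-size chunks, so no single
    // allocation grows with the total element count (unlike std::vector, whose
    // backing array is one contiguous, potentially oversized, allocation).
    template <typename T, size_t chunk_size = 128>
    class toy_chunked_vector {
    public:
        void push_back(T value) {
            if (chunks_.empty() || chunks_.back()->size() == chunk_size) {
                chunks_.push_back(std::make_unique<std::vector<T>>());
                chunks_.back()->reserve(chunk_size);
            }
            chunks_.back()->push_back(std::move(value));
        }
        size_t size() const {
            size_t n = 0;
            for (const auto& c : chunks_) {
                n += c->size();
            }
            return n;
        }

    private:
        std::vector<std::unique_ptr<std::vector<T>>> chunks_;
    };

    // Hypothetical response shape mirroring the commit's description: the
    // groups field is already chunked, so building `described` in the same
    // chunked type lets it be move-assigned without a large contiguous copy.
    struct described_group {
        int state = 0;
    };
    struct describe_groups_response {
        toy_chunked_vector<described_group> groups;
    };

    int main() {
        toy_chunked_vector<described_group> described;
        for (int i = 0; i < 1000; ++i) {
            described.push_back(described_group{i % 5});
        }
        describe_groups_response response;
        response.groups = std::move(described); // move, not an element-wise copy
        return response.groups.size() == 1000 ? 0 : 1;
    }
    ```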

Commits on Jun 23, 2025

  1. Merge pull request #26531 from vbotbuildovich/backport-pr-26530-v24.3.x-406

    [v24.3.x] `kafka`: use `chunked_vector` in `describe_groups_handler::handle`
    piyushredpanda authored Jun 23, 2025
    commit 88ea42b