
Conversation

lnkuiper
Contributor

This PR proudly presents a full rewrite of DuckDB's sorting code. It is currently integrated into the ORDER BY operator, and should be integrated into other operators over a series of PRs, such that the current sort code can be removed from the code base at some point.

Current Implementation

DuckDB's current sorting implementation was written a few years ago by an inexperienced 1st year PhD student (me).
It offered many improvements over the very basic sorting implementation that we had at the time, which had to fit in memory, was single-threaded, and was generally very slow for larger data sizes. The current implementation can sort larger-than-memory data, is fully parallel, and quite efficient. However, it also has a few major downsides:

  1. It uses the old `RowDataCollection` for materializing and spilling data, which requires explicit pointer swizzling. This is inefficient and error-prone compared to the lazy pointer recomputation that the new `TupleDataCollection` uses.
  2. It uses a cascaded 2-way merge sort after the initial thread-local sorting phase, which merges the payload (non-sorting columns) throughout the sort, causing a lot of data movement (and spilling, if larger-than-memory).
  3. It has a non-standard API that makes it difficult to parallelize properly in operators other than ORDER BY, such as WINDOW.
  4. It uses variable-size allocations to sort large arrays of data, instead of the default 256KiB block size.
  5. It sorts using code made specifically to sort dynamically-sized data, instead of having statically compiled sorting code.
  6. It does not adapt to pre-sorted data.

New Implementation

The new sorting code is written by a much more experienced software developer (me again).
It tackles all of the downsides of the current implementation:

  1. It is built using the `TupleDataCollection`, making spilling more efficient and less error-prone.
  2. It uses a k-way merge sort, reducing data movement (and spilling), and allowing a LIMIT on top, if present, to short-circuit the sort; the progress bar also works properly throughout. (The k-way merge is parallelized by generalizing Merge Path, which will be explained in more detail in a blog post; a two-way sketch of the idea follows this list.)
  3. It uses the default Sink/Source API that all of our operators have.
  4. It uses the default 256KiB block size. I've implemented a C++ iterator that iterates over data spread across many blocks, using FastMod for efficient random access (a sketch of the addressing trick also follows this list).
  5. It is fully templated (i.e., fully static), and can therefore use out-of-the-box sorting algorithms (a combination of ska_sort, vergesort and pdqsort). If a new and better one comes out, we can easily switch (as long as it's written for C++ iterators). This is much nicer for us, as we can focus on the data management side of things, and we can benefit from whatever algorithm experts come up with.
  6. It is highly adaptive to pre-sorted data.
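
To give some intuition for point 2: Merge Path observes that any output position `diag` of a merge corresponds to a unique split of the two sorted inputs, findable by binary search, so each thread can merge its own slice independently. Below is a minimal two-way sketch (my own illustration with a hypothetical `MergePathSplit` name, not the PR's code, which generalizes this to k runs):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Merge Path split search (two-way illustration). Given sorted runs a and b,
// find (i, j) with i + j == diag such that merging a[0..i) and b[0..j)
// produces exactly the first `diag` elements of the merged output. Each
// thread computes its own splits and merges an independent slice, so the
// merge itself needs no synchronization.
template <typename T>
std::pair<size_t, size_t> MergePathSplit(const std::vector<T> &a,
                                         const std::vector<T> &b, size_t diag) {
	size_t lo = diag > b.size() ? diag - b.size() : 0;
	size_t hi = std::min(diag, a.size());
	while (lo < hi) {
		size_t i = lo + (hi - lo) / 2; // candidate: take i elements from a
		size_t j = diag - i;           // ... and j elements from b
		if (a[i] < b[j - 1]) {
			lo = i + 1; // a[i] must still be within the first diag elements
		} else {
			hi = i;
		}
	}
	return {lo, diag - lo};
}
```

With `P` threads and `n` total elements, thread `p` computes the splits at diagonals `p * n / P` and `(p + 1) * n / P` and merges its slice with, e.g., `std::merge`; no thread ever waits on another.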

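For point 4, the addressing trick is also easy to illustrate. A minimal sketch, assuming a Lemire-style fastmod and GCC/Clang's `unsigned __int128`; `FastMod` and `BlockedVector` are illustrative names, not DuckDB's internals:

```cpp
#include <cstdint>
#include <vector>

// Lemire-style fastmod: division and modulo by a runtime-constant divisor
// (here, tuples per block) via a precomputed 64-bit magic number and a wide
// multiply instead of a hardware divide. Assumes divisor >= 2 and 32-bit n.
struct FastMod {
	explicit FastMod(uint32_t d) : magic(~uint64_t(0) / d + 1), divisor(d) {}
	uint32_t Div(uint32_t n) const { // n / divisor
		return uint32_t(((unsigned __int128)magic * n) >> 64);
	}
	uint32_t Mod(uint32_t n) const { // n % divisor
		// magic * n intentionally wraps, keeping only the low 64 bits
		return uint32_t(((unsigned __int128)(magic * n) * divisor) >> 64);
	}
	uint64_t magic;
	uint32_t divisor;
};

// Random access to tuples spread across many fixed-size blocks:
// tuple i lives in block i / per_block, at offset i % per_block.
template <typename T>
struct BlockedVector {
	BlockedVector(std::vector<std::vector<T>> &blocks, uint32_t per_block)
	    : blocks(blocks), fm(per_block) {}
	T &operator[](uint32_t i) {
		return blocks[fm.Div(i)][fm.Mod(i)];
	}
	std::vector<std::vector<T>> &blocks;
	FastMod fm;
};
```

Wrapping this addressing in a proper random-access iterator is what lets the off-the-shelf algorithms from point 5 run directly on data spread across blocks.
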
Sorting Performance

This PR description doesn't mean much without some hard numbers to back it up, so here's a little benchmark I ran on my M1 Max MacBook Pro with 10 cores and 64 GiB of RAM. I've set DuckDB's memory limit to 30 GB; otherwise macOS may decide to use swap space, which slows down the query significantly. I ran each query 5 times and took the median, unless the query ran for longer than a minute (I wasn't patient enough to repeat those), and I did not wait at all for anything that took longer than 5 minutes.
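
The exact benchmark queries aren't shown here, so the following is only a hypothetical reproduction of one row of the table below (the 100 million random integers), using DuckDB's C++ API; the table and column names are my own:

```cpp
#include <chrono>
#include <iostream>
#include "duckdb.hpp"

int main() {
	duckdb::DuckDB db(nullptr); // in-memory database
	duckdb::Connection con(db);
	con.Query("SET memory_limit = '30GB'"); // keep macOS from swapping
	// 100 million random UBIGINTs, as in the "Random / 100" row below
	con.Query("CREATE TABLE t AS "
	          "SELECT (random() * 1e18)::UBIGINT AS i FROM range(100000000)");
	auto begin = std::chrono::steady_clock::now();
	auto result = con.Query("SELECT * FROM t ORDER BY i");
	auto end = std::chrono::steady_clock::now();
	std::cout << std::chrono::duration<double>(end - begin).count() << " s\n";
	return 0;
}
```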

| Table | Columns | Type(s) | Rows [Millions] | Current [s] | New [s] | Speedup [x] |
|---|---|---|---|---|---|---|
| Ascending | 1 | UBIGINT | 10 | 0.110 | 0.033 | 3.333 |
| Ascending | 1 | UBIGINT | 100 | 0.912 | 0.181 | 5.038 |
| Ascending | 1 | UBIGINT | 1000 | 15.302 | 1.475 | 10.374 |
| Descending | 1 | UBIGINT | 10 | 0.121 | 0.034 | 3.558 |
| Descending | 1 | UBIGINT | 100 | 0.908 | 0.207 | 4.386 |
| Descending | 1 | UBIGINT | 1000 | 15.789 | 1.712 | 9.222 |
| Random | 1 | UBIGINT | 10 | 0.120 | 0.094 | 1.276 |
| Random | 1 | UBIGINT | 100 | 1.028 | 0.587 | 1.751 |
| Random | 1 | UBIGINT | 1000 | 17.554 | 6.493 | 2.703 |
| TPC-H SF 1 l_comment | 1 | VARCHAR | ~6 | 0.848 | 0.296 | 2.864 |
| TPC-H SF 10 l_comment | 1 | VARCHAR | ~60 | 8.465 | 3.090 | 2.739 |
| TPC-H SF 100 l_comment | 1 | VARCHAR | ~600 | 300+ | 35.187 | 8.525+ |
| TPC-H SF 1 lineitem by l_shipdate | 15 | Mixed | ~6 | 0.328 | 0.189 | 1.735 |
| TPC-H SF 10 lineitem by l_shipdate | 15 | Mixed | ~60 | 3.353 | 1.520 | 2.205 |
| TPC-H SF 100 lineitem by l_shipdate | 15 | Mixed | ~600 | 273.982 | 80.919 | 3.385 |

The first 12 results are "thin" sorting queries, which sort a single column. When the input data has a pattern, the new implementation is more than 3x faster than the current implementation. For random integers, it becomes relatively faster the more data is being sorted. For strings (l_comment), this is especially the case (at SF 100), as the new implementation uses less memory, and can therefore sort in memory, making it much faster than the current implementation.

The last 3 results are "wide" sorting queries, which sort an entire table by a single column. The new implementation is ~2x faster in main memory (at SF 1 and SF 10), and becomes relatively even faster when the data exceeds the memory limit (at SF 100).

Thread Scaling Performance

The new implementation should also scale much better with more threads, as it performs the partitioning computation entirely in parallel. The cost of merging goes up with the number of threads, but it is much lower in the new implementation, which should also work in its favor. For this benchmark, I sorted the 100 million random integers from the previous benchmark with 1, 2, 4, and 8 threads.
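
Hypothetically, continuing the benchmark sketch from above, the sweep just varies DuckDB's `threads` setting:

```cpp
#include <string>
#include "duckdb.hpp"

// Re-run the same sort from the earlier sketch while varying DuckDB's
// worker count (time each run as before).
void RunThreadScaling(duckdb::Connection &con) {
	for (int threads : {1, 2, 4, 8}) {
		con.Query("SET threads TO " + std::to_string(threads));
		con.Query("SELECT * FROM t ORDER BY i");
	}
}
```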

| Threads | Current [s] | New [s] | Current Speedup vs. 1 Thread [x] | New Speedup vs. 1 Thread [x] |
|---|---|---|---|---|
| 1 | 3.240 | 4.234 | 1.000 | 1.000 |
| 2 | 2.121 | 2.193 | 1.527 | 1.930 |
| 4 | 1.401 | 1.216 | 2.312 | 3.481 |
| 8 | 0.920 | 0.654 | 3.521 | 6.474 |

As we can see, the current implementation is faster than the new one at 1 and 2 threads. This may be explained by the new implementation using an in-place radix sort, which uses less memory, but is slower as a result. Although the performance is worse at 1 and 2 threads, the story is very different with more threads: the new implementation gets much closer to linear scaling as threads are added, scaling almost 2x better than the current implementation.

Next Steps

Besides integrating this into more operators, I want to do more profiling to speed up sorting further, and enable "approximate sorting" for index creation. When building the ART index, we sort the data to speed up the process, but the data only needs to be "kind of" sorted, not perfectly sorted, to reap the benefits. I already attempted this in this PR, but ran into some issues, so I left it for a future PR.

@Mytherin
Collaborator

Thanks! Awesome work

@Mytherin Mytherin merged commit 4759904 into duckdb:main May 28, 2025
@ericemc3

Is this beautiful PR scheduled for 1.4, or already in production?

@Mytherin
Collaborator

This is scheduled for 1.4
