Top-N Rework using new sort code #2172
Merged
Mytherin added a commit that referenced this pull request on Oct 18, 2024:
This PR reworks the Top-N implementation to use a heap of sort keys. Previously, we leaned on our sort implementation and would "sort of" make a heap by re-sorting and discarding entries, in combination with some early filtering. See #2172. The main reason we implemented it this way is that we had to implement the Top-N operator for many types, including nested types, and it was easier to lean on the existing sort implementation - which was itself an improvement over the `Value`-based implementation we had before.

Now that we have sort keys, it is much easier to implement the Top-N algorithm using an actual heap. This PR does exactly that: it maintains a heap of sort keys using the standard library (`std::push_heap` and `std::pop_heap` over a vector). This allows some clean-up, as we can remove specialized code (`VectorOperations::DistinctLessThanNullsFirst`/`VectorOperations::DistinctGreaterThanNullsFirst`).

In addition, we improve performance in many cases. In particular, sort keys also allow us to easily keep track of a "global boundary value" across all threads, which enables much more skipping in the adversarial case where the data is reverse-sorted on the order key. This makes performance much more stable.
Below are some performance numbers running on TPC-H SF10:

```sql
-- natural sort order, small limit, large payload
SELECT * FROM lineitem ORDER BY l_orderkey LIMIT 5;
-- old: 0.18s, new: 0.22s

-- inverse natural sort order, small limit, large payload
SELECT * FROM lineitem ORDER BY l_orderkey DESC LIMIT 5;
-- old: 0.76s, new: 0.24s

-- inverse natural sort order, large limit, large payload
SELECT * FROM lineitem ORDER BY l_orderkey DESC LIMIT 10000;
-- old: 1.59s, new: 0.34s

-- natural sort order, small limit, small payload
SELECT l_orderkey FROM lineitem ORDER BY l_orderkey LIMIT 5;
-- old: 0.03s, new: 0.06s

-- inverse natural sort order, small limit, small payload
SELECT l_orderkey FROM lineitem ORDER BY l_orderkey DESC LIMIT 5;
-- old: 0.16s, new: 0.07s

-- inverse natural sort order, large limit, small payload
SELECT l_orderkey FROM lineitem ORDER BY l_orderkey DESC LIMIT 10000;
-- old: 0.32s, new: 0.14s
```

In general, performance is much more stable and greatly improved in several cases. There are a number of small regressions: in particular, when sorting on individual integer keys in natural sort order, the old algorithm is sometimes faster. That is mostly because in those cases we can filter out values immediately; the old implementation would figure this out directly from the sort values, whereas the new implementation still spends time constructing the sort keys. We could remedy this in the future by adding templated heaps for primitive types.
This PR reworks the top-n implementation (i.e. `ORDER BY x LIMIT 5`) to use the new sort code instead of the old Value API-based heap code. How it works is that we stream the input values into the Top-N operator and buffer them. When we exceed either 5000 tuples or `offset + limit * 2`, whichever is bigger, we perform a sort and throw away all values except the top `offset + limit`. We then extract the very last element and use it as a pre-filter for subsequent appends.

For example, suppose we do `ORDER BY x LIMIT 5` on a table whose `x` values are `[0..9999]` repeating several times (`0, 1, 2, ..., 9999, 0, 1, 2, ..., 9999`). We will first gather the first 5000 values, do a sort, and figure out that the top 5 values are `[0, 1, 2, 3, 4]`. We will then reject any values of `x` that are `>= 4`, since we know they cannot be part of the result anymore.

In the usual case, the pre-filter allows us to do very few sorts, and the sorts we do perform are very optimized, so we get quite good performance here. The approach is also easily executed in parallel and has a very low memory footprint, since we only need to keep the current top tuples.
The main problem with this approach is the adversarial case in which the input is sorted in the exact opposite order from the one in which we are extracting the top-n; in that case we constantly need to re-sort small chunks of the input. This is particularly problematic with large payloads, since we currently re-shuffle the payloads on every iteration. See also the `OrderKey DESC Large Payload` query below. This is something we can hopefully address in the future.

Comparison vs Old Version