
Conversation

IvanPleshkov
Contributor

@IvanPleshkov IvanPleshkov commented Jun 19, 2025

Currently, we do quantization encoding each time we create RawScorer. In the case of HNSW construction, it is not required because we already have a quantized storage. We want to reuse already quantized data in HNSW construction and avoid unnecessary access to original vector data, which can be on disk.

This PR fixes this behaviour and uses already constructed quantized storage in raw scorer creation.

To achieve this, this PR introduces a new method in the quantization storage: fn encode_internal_vector(&self, id: u32) -> Option<Self::EncodedQuery>. It takes a point id and returns an encoded query, which is used in the query scorer. The return type is optional because not every quantization can work this way: for PQ we still want to encode the original vector, because building the query from the already quantized one would produce accuracy loss in the LUT. SQ and BQ are fine with this approach.

We don't call score_internal directly; instead we use this query-getter approach, because quantized data can be stored on disk and we want the query vector to always be in RAM.

The quantized vector scorer has a new construction method which either returns a scorer or gives ownership of the hardware counter back. FilteredScorer also gets a new constructor.
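
In rough pseudocode, the new flow during HNSW construction looks like this (the variant and helper names follow the snippets below; build_original_query_scorer is a hypothetical stand-in for the existing original-vector path, not the actual API):

match QuantizedQueryScorer::new_internal(point_id, quantized_storage, hardware_counter) {
    // The storage could encode its own internal vector (SQ, BQ): score
    // directly against quantized data without touching the original vectors.
    QuantizedInternalScorerResult::Scorer(scorer) => raw_scorer_from_query_scorer(scorer),
    // Not supported (PQ): take the hardware counter back and fall back to
    // encoding a query from the original vector, as before.
    QuantizedInternalScorerResult::NotSupported(hardware_counter) => {
        build_original_query_scorer(point_id, hardware_counter)
    }
}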

@@ -45,6 +45,8 @@ pub trait EncodedVectors: Sized {
fn score_internal(&self, i: u32, j: u32, hw_counter: &HardwareCounterCell) -> f32;

fn quantized_vector_size(&self) -> usize;

fn encode_internal_vector(&self, id: u32) -> Option<Self::EncodedQuery>;
Contributor Author

Creates a query from the internal vector. The query is always in RAM.

@@ -574,4 +574,8 @@ impl<TStorage: EncodedStorage> EncodedVectors for EncodedVectorsPQ<TStorage> {
fn quantized_vector_size(&self) -> usize {
self.metadata.vector_division.len()
}

fn encode_internal_vector(&self, _id: u32) -> Option<EncodedQueryPQ> {
    None
}
Contributor Author

We cannot create a query in PQ from the quantized vector without LUT accuracy loss: the LUT holds distances between the query's sub-vectors and the codebook centroids, so building it from the query's own reconstructed centroids would add a second layer of quantization error.

hardware_counter: HardwareCounterCell,
}

impl<'a, TElement, TMetric, TEncodedVectors>
QuantizedQueryScorer<'a, TElement, TMetric, TEncodedVectors>
pub enum QuantizedInternalScorerResult<'a, TEncodedVectors>
Contributor Author

Result of internal scorer creation. It is either a constructed scorer, or nothing, in which case we want ownership of the hardware counter back.
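
For illustration, the enum boils down to something like the following (the payload carried by the Scorer variant is an assumption, not the exact source):

pub enum QuantizedInternalScorerResult<'a, TEncodedVectors> {
    // Internal encoding succeeded: a ready-to-use quantized query scorer.
    Scorer(QuantizedQueryScorer<'a, TEncodedVectors>),
    // Internal encoding is not supported for this quantization (e.g. PQ);
    // the hardware counter is handed back so the caller can reuse it on the
    // fallback path that encodes the original vector.
    NotSupported(HardwareCounterCell),
}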

@IvanPleshkov IvanPleshkov marked this pull request as ready for review June 20, 2025 16:03
@IvanPleshkov IvanPleshkov requested review from generall and xzfc June 20, 2025 16:03
Contributor

coderabbitai bot commented Jun 20, 2025

📝 Walkthrough

This change introduces a new method, encode_internal_vector, to the EncodedVectors trait and implements it across several concrete types, including EncodedVectorsBin, EncodedVectorsU8, EncodedVectorsPQ, and QuantizedMultivectorStorage. The method enables encoding of internal vectors by their identifier. The FilteredScorer struct receives a new constructor, new_internal, allowing scorer creation from a point ID. The QuantizedQueryScorer struct is refactored to remove element and metric generics, with these generics now applied at the constructor level. A new enum, QuantizedInternalScorerResult, is added to represent scorer creation results. The QuantizedVectors struct gains a raw_internal_scorer method to support scorer instantiation via internal vector encoding. Associated code is updated to use these new methods.

Suggested labels

chore

Suggested reviewers

  • IvanPleshkov
  • xzfc

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d3a3598 and 7932210.

📒 Files selected for processing (4)
  • lib/quantization/src/encoded_vectors.rs (1 hunks)
  • lib/quantization/src/encoded_vectors_pq.rs (1 hunks)
  • lib/segment/src/index/hnsw_index/point_scorer.rs (1 hunks)
  • lib/segment/src/vector_storage/quantized/quantized_query_scorer.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • lib/quantization/src/encoded_vectors_pq.rs
  • lib/quantization/src/encoded_vectors.rs
  • lib/segment/src/index/hnsw_index/point_scorer.rs
  • lib/segment/src/vector_storage/quantized/quantized_query_scorer.rs
⏰ Context from checks skipped due to timeout of 90000ms (14)
  • GitHub Check: test-consistency
  • GitHub Check: Basic TLS/HTTPS tests
  • GitHub Check: test-snapshot-operations-s3-minio
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: test-consensus-compose
  • GitHub Check: test-low-resources
  • GitHub Check: integration-tests-consensus
  • GitHub Check: integration-tests
  • GitHub Check: rust-tests (macos-latest)
  • GitHub Check: rust-tests (windows-latest)
  • GitHub Check: rust-tests-no-rocksdb (ubuntu-latest)
  • GitHub Check: storage-compat-test
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: lint

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
lib/segment/src/vector_storage/quantized/quantized_vectors.rs (1)

172-348: Consider using a macro to reduce code repetition.

The implementation correctly handles all storage variants with a consistent pattern. While the repetition is necessary due to Rust's type system, consider extracting the common pattern into a macro to improve maintainability:

macro_rules! handle_internal_scorer {
    ($quantized_data:expr, $point_id:expr, $hardware_counter:expr, $original_query:expr) => {
        match QuantizedQueryScorer::new_internal($point_id, $quantized_data, $hardware_counter) {
            QuantizedInternalScorerResult::Scorer(scorer) => raw_scorer_from_query_scorer(scorer),
            QuantizedInternalScorerResult::NotSupported(hw) => $original_query(hw),
        }
    };
}

However, if you prefer explicit code for better readability, the current implementation is acceptable.
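
With that macro, one arm of the storage match would collapse to roughly the following (the storage variant and the fallback helper are illustrative, not the actual code):

QuantizedVectorStorage::ScalarRam(storage) => handle_internal_scorer!(
    storage,
    point_id,
    hardware_counter,
    // Fallback closure: receives the hardware counter back and rebuilds the
    // scorer from the original vector.
    |hw| original_query_scorer(point_id, hw)
),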

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 912c1d7 and d3a3598.

📒 Files selected for processing (10)
  • lib/quantization/src/encoded_vectors.rs (1 hunks)
  • lib/quantization/src/encoded_vectors_binary.rs (1 hunks)
  • lib/quantization/src/encoded_vectors_pq.rs (1 hunks)
  • lib/quantization/src/encoded_vectors_u8.rs (1 hunks)
  • lib/segment/src/index/hnsw_index/hnsw.rs (4 hunks)
  • lib/segment/src/index/hnsw_index/point_scorer.rs (1 hunks)
  • lib/segment/src/vector_storage/quantized/quantized_multivector_storage.rs (1 hunks)
  • lib/segment/src/vector_storage/quantized/quantized_query_scorer.rs (3 hunks)
  • lib/segment/src/vector_storage/quantized/quantized_scorer_builder.rs (1 hunks)
  • lib/segment/src/vector_storage/quantized/quantized_vectors.rs (2 hunks)
🔇 Additional comments (11)
lib/quantization/src/encoded_vectors.rs (1)

49-49: Well-designed trait method addition.

The encode_internal_vector method is properly designed with an Option return type, allowing implementations to opt out when the optimization isn't feasible (e.g., PQ quantization). This aligns perfectly with the PR objective to reuse quantized data during HNSW construction.

lib/segment/src/vector_storage/quantized/quantized_scorer_builder.rs (1)

152-157: LGTM - Clean generic parameter refactoring.

Moving the generic parameters from the struct to the method call is a clean refactoring that maintains the same functionality while supporting the new internal vector encoding capabilities.

lib/quantization/src/encoded_vectors_pq.rs (1)

578-580: Correct implementation for PQ limitations.

Returning None is the appropriate behavior for PQ quantization, as noted in the existing comment that creating a query from quantized vector would result in LUT accuracy loss. This maintains the optimization's optional nature.

lib/quantization/src/encoded_vectors_binary.rs (1)

573-581: Solid implementation for binary quantization.

The implementation correctly retrieves the encoded vector data, performs the necessary type transmutation, and creates an owned EncodedBinVector. The use of to_vec() ensures proper ownership semantics for the returned data.

lib/quantization/src/encoded_vectors_u8.rs (1)

511-519: Correct implementation for U8 quantization.

The implementation properly extracts both the vector offset and encoded data using get_vec_ptr, creates a safe slice from the raw pointer, and constructs an owned EncodedQueryU8. This maintains all necessary information for accurate scoring.
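
Based on this description and the diff hunk shown further down, the implementation has roughly this shape (the field names of EncodedQueryU8 are assumptions, not the exact source):

fn encode_internal_vector(&self, id: u32) -> Option<EncodedQueryU8> {
    let (query_offset, q_ptr) = self.get_vec_ptr(id);
    let count = self.metadata.vector_parameters.dim;
    // Copy the quantized bytes into an owned Vec so the query always lives
    // in RAM, even when the storage itself is memory-mapped from disk.
    let encoded_query = unsafe { std::slice::from_raw_parts(q_ptr, count) }.to_vec();
    Some(EncodedQueryU8 {
        offset: query_offset,
        encoded_query,
    })
}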

lib/segment/src/index/hnsw_index/hnsw.rs (1)

420-427: Good optimization for quantized vector scoring.

The refactoring to use FilteredScorer::new_internal avoids unnecessary vector loading and allows the scorer to work directly with quantized data when available, improving performance.

lib/segment/src/vector_storage/quantized/quantized_multivector_storage.rs (1)

296-304: Correct implementation of internal vector encoding for multivectors.

The method properly handles the multivector case by encoding each internal vector and collecting the results. The early return pattern ensures consistent error handling.
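
A sketch of that pattern (the inner-id lookup helper, storage field, and query type are illustrative):

fn encode_internal_vector(&self, id: u32) -> Option<Vec<TEncodedQuery>> {
    // Hypothetical helper returning the ids of the point's sub-vectors.
    let inner_ids = self.inner_vector_ids(id);
    let mut queries = Vec::with_capacity(inner_ids.len());
    for inner_id in inner_ids {
        // Early return: if any sub-vector cannot be encoded (the PQ case),
        // the whole multivector falls back to the original-vector path.
        queries.push(self.quantized_storage.encode_internal_vector(inner_id)?);
    }
    Some(queries)
}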

lib/segment/src/index/hnsw_index/point_scorer.rs (1)

77-108: Well-designed internal scorer constructor with lazy vector loading.

The implementation efficiently handles both quantized and non-quantized cases. The closure pattern for original_query_fn ensures the vector is only loaded when necessary, avoiding unnecessary memory operations for quantized vectors.
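
In simplified form, the lazy fallback looks like this (all names and the exact call shape are illustrative, not the actual constructor body):

// Closure that reads the (possibly on-disk) original vector only on demand.
let original_query_fn = || vector_storage.get_vector(point_id).to_owned();

let raw_scorer = match quantized_vectors {
    // Quantized path: the scorer may be built directly from quantized data;
    // the closure is only invoked when internal encoding is unsupported (PQ).
    Some(quantized) => quantized.raw_internal_scorer(point_id, original_query_fn, hardware_counter)?,
    // No quantization: encode the original vector as before.
    None => new_raw_scorer(original_query_fn().into(), vector_storage, hardware_counter)?,
};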

lib/segment/src/vector_storage/quantized/quantized_query_scorer.rs (3)

21-27: Clean result type for internal scorer creation.

The QuantizedInternalScorerResult enum provides a clear way to handle cases where internal vector encoding is not supported, while preserving hardware counter ownership as noted in the existing comment.


33-59: Good refactoring: method-level generics improve flexibility.

Moving the generic parameters from the struct to the method level allows for more flexible usage patterns while maintaining type safety.


61-76: Efficient internal scorer construction.

The method correctly handles the optional encoding case and properly configures the hardware counter's IO multiplier based on storage location.

@@ -507,6 +507,16 @@ impl<TStorage: EncodedStorage> EncodedVectors for EncodedVectorsU8<TStorage> {
fn quantized_vector_size(&self) -> usize {
self.metadata.vector_parameters.dim
}

fn encode_internal_vector(&self, id: u32) -> Option<EncodedQueryU8> {
let (query_offset, q_ptr) = self.get_vec_ptr(id);
Member

ToDo for another PR: avoid usage of pointers. Pretty sure zero-copy can handle it just fine.

Member

@generall generall left a comment

These changes are not exactly pretty, but I don't have any better suggestions.

@IvanPleshkov IvanPleshkov merged commit de54049 into dev Jun 23, 2025
18 checks passed
@IvanPleshkov IvanPleshkov deleted the dont-encode-quantization-query-while-hnsw-build branch June 23, 2025 08:34
generall added a commit that referenced this pull request Jul 17, 2025
* dont encode quantization query while hnsw build

* add comments

---------

Co-authored-by: generall <andrey@vasnetsov.com>