
Conversation

n0x29a
Contributor

@n0x29a n0x29a commented May 27, 2025

All Submissions:

  • Contributions should target the dev branch. Did you create your branch from dev?
  • Have you followed the guidelines in our Contributing document?
  • Have you checked to ensure there aren't other open Pull Requests for the same update/change?

New Feature Submissions:

  1. Does your submission pass tests?
  2. Have you formatted your code locally using the cargo +nightly fmt --all command prior to submission?
  3. Have you checked your code using the cargo clippy --all --all-features command?

Changes to Core Features:

  • Have you added an explanation of what your changes do and why you'd like us to include them?
  • Have you written new tests for your core changes, as applicable?
  • Have you successfully run tests with your changes locally?

n0x29a and others added 2 commits May 28, 2025 13:55
Co-authored-by: Tim Visée <tim+github@visee.me>
@n0x29a n0x29a marked this pull request as ready for review May 28, 2025 12:48

coderabbitai bot commented May 28, 2025

📝 Walkthrough

This change set updates the gRPC API and related Rust code to improve and standardize usage reporting, especially for inference usage. It renames the model field in the InferenceUsage message to models and updates the associated type accordingly. The UpdateBatchResponse message is extended with an optional usage field of type Usage. Multiple Rust modules are updated to propagate and merge InferenceUsage and hardware usage data, with new constructors, merge methods, and emptiness checks for usage-related structs. Function signatures are updated to return or process usage data, and documentation is revised to reflect these changes.

Possibly related PRs

Suggested reviewers

  • timvisee
  • agourlay

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b39ab36 and ea68408.

📒 Files selected for processing (9)
  • lib/api/src/grpc/conversions.rs (3 hunks)
  • lib/api/src/grpc/ops.rs (2 hunks)
  • lib/api/src/grpc/proto/points_internal_service.proto (2 hunks)
  • lib/api/src/grpc/qdrant.rs (4 hunks)
  • lib/collection/src/shards/remote_shard.rs (3 hunks)
  • src/tonic/api/points_api.rs (6 hunks)
  • src/tonic/api/points_internal_api.rs (5 hunks)
  • src/tonic/api/query_common.rs (19 hunks)
  • src/tonic/api/update_common.rs (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/tonic/api/points_internal_api.rs
  • src/tonic/api/points_api.rs
  • lib/api/src/grpc/qdrant.rs
  • src/tonic/api/query_common.rs
  • src/tonic/api/update_common.rs
⏰ Context from checks skipped due to timeout of 90000ms (13)
  • GitHub Check: test-low-resources
  • GitHub Check: integration-tests-consensus
  • GitHub Check: integration-tests
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: test-consistency
  • GitHub Check: Basic TLS/HTTPS tests
  • GitHub Check: test-snapshot-operations-s3-minio
  • GitHub Check: test-consensus-compose
  • GitHub Check: lint
  • GitHub Check: storage-compat-test
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: rust-tests (macos-latest)
  • GitHub Check: rust-tests (windows-latest)
🔇 Additional comments (12)
lib/api/src/grpc/proto/points_internal_service.proto (2)

318-319: Consistent usage tracking separation.

The changes here mirror the PointsOperationResponseInternal structure, providing consistent usage tracking separation across batch operations.


126-127:

❓ Verification inconclusive

Good architectural improvement with backward compatibility consideration needed.

Splitting usage tracking into separate hardware_usage and inference_usage fields provides better granularity for usage metrics. The optional nature of both fields helps maintain backward compatibility.

Verify that the conversion logic between PointsOperationResponseInternal and PointsOperationResponse properly handles the transition from a single usage field to split fields:


🏁 Script executed:

#!/bin/bash
# Description: Check conversion logic for usage field splitting
# Expected: Find conversion code that merges hardware_usage and inference_usage back to usage field

# Search for conversion implementations
ast-grep --pattern 'impl $_ for PointsOperationResponse$_Internal {
  $$$
}'

# Look for usage field conversions
rg -A 10 -B 5 "PointsOperationResponse.*usage" --type rust

Length of output: 139


Verify conversion logic for split usage fields

Splitting usage tracking into separate hardware_usage and inference_usage fields is a solid architectural improvement, and marking both fields as optional maintains backward compatibility. However, I couldn’t locate any conversion code that merges these two fields back into the original usage field on PointsOperationResponse.

Please manually verify that:

  • The From<PointsOperationResponseInternal> or equivalent converter correctly combines hardware_usage and inference_usage into the legacy usage field.
  • All downstream consumers and tests for PointsOperationResponse have been updated to handle the split fields.
lib/collection/src/shards/remote_shard.rs (3)

430-432: Correct field access update.

The update from usage to hardware_usage field access correctly aligns with the protobuf changes and maintains proper null safety.


727-729: Consistent field access pattern.

The field access update maintains consistency with other methods and properly handles the optional nature of the hardware usage field.


1200-1206: Excellent code clarity with helpful documentation.

The destructuring explicitly handles both usage fields with a clear comment explaining why inference_usage is ignored for remote shards. This makes the code intent transparent and maintainable.

lib/api/src/grpc/conversions.rs (3)

38-38: LGTM: Clean import addition.

The use crate::grpc; import is properly scoped and supports the explicit conversion call on line 2246.


2237-2253: LGTM: Proper handling of usage field separation.

The conversion correctly handles the architectural change from a single usage field to separate hardware_usage and inference_usage fields. The explicit grpc::UpdateResult::from call is appropriate and the Usage struct creation properly encapsulates both usage types.


2257-2274: LGTM: Clean reverse conversion implementation.

The reverse conversion properly unpacks the Usage struct using unwrap_or_default() for safety and assigns the separate components to the internal representation fields.

lib/api/src/grpc/ops.rs (4)

1-3: LGTM: Proper import additions.

The additional imports for HashMap and ModelUsage are necessary for the new functionality and properly scoped.


26-48: LGTM: Comprehensive emptiness check and conversion.

The is_empty() method correctly checks all fields for zero values, and into_non_empty() follows the established pattern using then_some(). The implementation is thorough and handles all hardware usage metrics.
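For readers outside the diff, that pattern looks roughly like this. The field names below are illustrative stand-ins, not the full set of hardware counters in the actual struct:

```rust
#[derive(Debug, Default, PartialEq)]
struct HardwareUsage {
    cpu: usize,
    payload_io_read: usize,
    payload_io_write: usize,
}

impl HardwareUsage {
    /// Destructuring forces this check to be revisited whenever a field is added.
    fn is_empty(&self) -> bool {
        let Self {
            cpu,
            payload_io_read,
            payload_io_write,
        } = self;
        *cpu == 0 && *payload_io_read == 0 && *payload_io_write == 0
    }

    /// Consume self, keeping it only when at least one counter is non-zero.
    fn into_non_empty(self) -> Option<Self> {
        (!self.is_empty()).then_some(self)
    }
}

fn main() {
    assert!(HardwareUsage::default().into_non_empty().is_none());
    let used = HardwareUsage { cpu: 1, ..Default::default() };
    assert!(used.into_non_empty().is_some());
}
```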


52-87: LGTM: Well-designed Usage struct utilities.

The constructor and utility methods provide a clean API:

  • new() constructor allows flexible creation with optional components
  • is_empty() properly uses is_none_or() for concise null and empty checks
  • into_non_empty() maintains consistency with other structs
  • Factory methods from_hardware_usage() and from_inference_usage() provide convenient creation patterns
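The utilities listed above can be sketched as follows. The component types are simplified, hypothetical versions of the real structs (which carry more fields), and on Rust 1.82+ `is_none_or` would replace the `map_or(true, …)` used here:

```rust
#[derive(Debug, Default, PartialEq)]
struct HardwareUsage {
    cpu: usize,
}

impl HardwareUsage {
    fn is_empty(&self) -> bool {
        self.cpu == 0
    }
}

#[derive(Debug, Default, PartialEq)]
struct InferenceUsage {
    tokens: u64,
}

impl InferenceUsage {
    fn is_empty(&self) -> bool {
        self.tokens == 0
    }
}

#[derive(Debug, Default, PartialEq)]
struct Usage {
    hardware: Option<HardwareUsage>,
    inference: Option<InferenceUsage>,
}

impl Usage {
    fn new(hardware: Option<HardwareUsage>, inference: Option<InferenceUsage>) -> Self {
        Self { hardware, inference }
    }

    fn from_hardware_usage(hw: HardwareUsage) -> Self {
        Self::new(Some(hw), None)
    }

    fn from_inference_usage(inf: InferenceUsage) -> Self {
        Self::new(None, Some(inf))
    }

    /// Empty when each component is absent or itself empty.
    fn is_empty(&self) -> bool {
        self.hardware.as_ref().map_or(true, |h| h.is_empty())
            && self.inference.as_ref().map_or(true, |i| i.is_empty())
    }

    fn into_non_empty(self) -> Option<Self> {
        (!self.is_empty()).then_some(self)
    }
}

fn main() {
    assert!(Usage::default().into_non_empty().is_none());
    let u = Usage::from_hardware_usage(HardwareUsage { cpu: 4 });
    assert!(!u.is_empty());
    assert!(Usage::from_inference_usage(InferenceUsage { tokens: 0 }).is_empty());
}
```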

90-122: LGTM: Robust InferenceUsage implementation.

The implementation provides a complete set of utilities:

  • Constructor initializes with empty HashMap
  • is_empty() correctly checks HashMap emptiness
  • merge() properly accumulates token counts for existing models and inserts new ones
  • merge_opt() provides convenient handling of optional merging

The merge logic correctly handles both cases: updating existing model usage and inserting new models.
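A self-contained sketch of that merge behavior, using a simplified ModelUsage that tracks only a single token counter (the real type may carry more metrics):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Default, PartialEq)]
struct ModelUsage {
    tokens: u64,
}

#[derive(Debug, Default)]
struct InferenceUsage {
    models: HashMap<String, ModelUsage>,
}

impl InferenceUsage {
    fn is_empty(&self) -> bool {
        self.models.is_empty()
    }

    /// Accumulate token counts for models seen before; insert new models as-is.
    fn merge(&mut self, other: InferenceUsage) {
        for (model, usage) in other.models {
            let tokens = usage.tokens;
            self.models
                .entry(model)
                .and_modify(|existing| existing.tokens += tokens)
                .or_insert(usage);
        }
    }

    fn merge_opt(&mut self, other: Option<InferenceUsage>) {
        if let Some(other) = other {
            self.merge(other);
        }
    }
}

fn main() {
    let mut total = InferenceUsage::default();
    assert!(total.is_empty());

    let mut a = InferenceUsage::default();
    a.models.insert("model-a".to_string(), ModelUsage { tokens: 10 });
    total.merge(a);

    let mut b = InferenceUsage::default();
    b.models.insert("model-a".to_string(), ModelUsage { tokens: 5 });
    b.models.insert("model-b".to_string(), ModelUsage { tokens: 7 });
    total.merge_opt(Some(b));

    assert_eq!(total.models["model-a"].tokens, 15);
    assert_eq!(total.models["model-b"].tokens, 7);
}
```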


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
lib/api/src/grpc/ops.rs (1)

26-48: LGTM! Good implementation of emptiness check and conversion.

The implementation correctly checks all hardware usage fields and follows the suggested pattern from the past review comment using then_some.

🧹 Nitpick comments (3)
docs/grpc/docs.md (1)

182-182: Fix list indentation in Table of Contents.

The new entry uses 4 spaces for the unordered list, but the rest of the document uses 2 spaces. This inconsistency triggers markdownlint (MD007).

Apply this diff to align indentation:

-    - [InferenceUsage.ModelsEntry](#qdrant-InferenceUsage-ModelsEntry)
+  - [InferenceUsage.ModelsEntry](#qdrant-InferenceUsage-ModelsEntry)
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

182-182: Unordered list indentation
Expected: 2; Actual: 4

(MD007, ul-indent)

lib/api/src/grpc/qdrant.rs (1)

6199-6200: Add documentation and serde attribute for the optional usage field

To improve clarity and ensure we skip serializing usage when it's None, consider adding a doc comment and a serde attribute:

 #[prost(double, tag = "2")]
 pub time: f64,
+/// Resource usage metrics for this batch update, when available.
 #[prost(message, optional, tag = "3")]
+#[serde(default, skip_serializing_if = "Option::is_none")]
 pub usage: ::core::option::Option<Usage>,
src/tonic/api/points_api.rs (1)

75-92: Consider the necessity of cloning hw_metrics.

The pattern of cloning hw_metrics before the async call to capture hardware usage is functional but may be unnecessary. Consider if the hardware usage can be captured after the call to avoid the clone operation.

However, the explicit construction of Usage with both hardware and inference components is correct and provides clear control over usage reporting.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1dfebb5 and 5386f5c.

📒 Files selected for processing (12)
  • docs/grpc/docs.md (3 hunks)
  • lib/api/src/grpc/ops.rs (2 hunks)
  • lib/api/src/grpc/proto/points.proto (2 hunks)
  • lib/api/src/grpc/qdrant.rs (2 hunks)
  • lib/api/src/rest/conversions.rs (1 hunks)
  • src/actix/api/update_api.rs (1 hunks)
  • src/common/inference/query_requests_grpc.rs (6 hunks)
  • src/common/update.rs (0 hunks)
  • src/tonic/api/points_api.rs (5 hunks)
  • src/tonic/api/points_internal_api.rs (5 hunks)
  • src/tonic/api/query_common.rs (19 hunks)
  • src/tonic/api/update_common.rs (14 hunks)
💤 Files with no reviewable changes (1)
  • src/common/update.rs
🧰 Additional context used
🧬 Code Graph Analysis (1)
lib/api/src/grpc/ops.rs (1)
lib/api/src/rest/models.rs (5)
  • is_empty (63-70)
  • is_empty (98-100)
  • into_non_empty (102-104)
  • merge (106-116)
  • merge_opt (118-122)
🪛 markdownlint-cli2 (0.17.2)
docs/grpc/docs.md

182-182: Unordered list indentation
Expected: 2; Actual: 4

(MD007, ul-indent)

⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: test-snapshot-operations-s3-minio
  • GitHub Check: test-consistency
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: rust-tests (windows-latest)
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: integration-tests-consensus
  • GitHub Check: rust-tests (macos-latest)
🔇 Additional comments (25)
lib/api/src/rest/conversions.rs (1)

28-28: LGTM! Field rename aligns with protobuf updates.

The field name change from model to models correctly aligns with the updated gRPC protobuf definitions, ensuring the REST to gRPC conversion remains consistent.

src/actix/api/update_api.rs (1)

109-117: Function call correctly updated to match new signature.

The removal of the inference_token argument from the do_delete_points call aligns with the standardized usage reporting changes mentioned in the PR summary.

docs/grpc/docs.md (2)

3179-3188: Approve field rename in InferenceUsage.

The model map field is correctly renamed to models, and the corresponding anchor and heading for InferenceUsage.ModelsEntry have been updated to match the protobuf changes. The documentation now aligns with the code.


4734-4734: Approve addition of usage field in UpdateBatchResponse.

The optional usage field of type Usage is correctly documented for UpdateBatchResponse, completing the usage reporting enhancements across the API.

lib/api/src/grpc/qdrant.rs (1)

6586-6586: Rename model to models in InferenceUsage is correct

The pluralized field name better reflects the map semantics. The prost tag remains 1, so backwards compatibility is preserved. Looks good to me.

src/tonic/api/points_internal_api.rs (5)

69-81: LGTM! Intentional exclusion of inference usage for internal calls.

The destructuring pattern correctly extracts the response while intentionally ignoring the inference usage, as documented by the comment. This aligns with the design of keeping inference usage tracking separate for internal vs external API calls.


113-137: Approve removal of inference token from delete operations.

The simplification of the delete_internal method signature by removing the inference_token parameter is consistent with the broader changes across the codebase that remove inference token handling from delete operations.


156-169: LGTM! Consistent pattern for excluding inference usage.

The implementation follows the same pattern as other internal methods, correctly destructuring the response tuple and ignoring the inference usage component for internal API calls.


475-475: Approve simplified delete handler.

The removal of inference token extraction and passing to delete_internal is consistent with the updated method signature and broader design changes.


582-582: Consistent with delete operation simplification.

The update to the Update::Delete variant correctly removes the inference token parameter, aligning with the changes to the delete_internal method signature.

src/tonic/api/points_api.rs (2)

152-168: LGTM! Consistent usage reporting pattern.

The implementation follows the same pattern as the upsert method, correctly combining hardware and inference usage. The explicit construction provides better visibility into the usage reporting mechanism.


716-716: Approve hardware-only usage reporting for search matrix operations.

The use of Usage::from_hardware_usage() is appropriate for operations that don't involve inference processing. The .into_non_empty() call ensures empty usage data is filtered out appropriately.

Also applies to: 742-742

src/common/inference/query_requests_grpc.rs (3)

29-29: LGTM! Enhanced function signatures for usage tracking.

The updated function signatures correctly return tuples containing both the converted query request and inference usage, enabling proper usage tracking in the calling code.

Also applies to: 110-110


62-66: Approve proper extraction of inference usage.

The destructuring of BatchAccumInferred::from_objects correctly extracts both the inferred data and usage information. The usage is properly defaulted if absent and converted to the appropriate type.

Also applies to: 141-145


155-179: Well-structured tuple return for query conversion.

The return statement correctly packages both the converted CollectionQueryRequest and the inference usage, maintaining the same pattern as the groups variant while handling the different request structure appropriately.

src/tonic/api/query_common.rs (4)

135-135: LGTM! Systematic replacement of helper function with explicit usage construction.

The consistent replacement of the usage_or_none helper function with explicit Usage::from_hardware_usage().into_non_empty() calls provides better visibility and control over usage reporting. The pattern is applied consistently across all query operations that only involve hardware usage.

Also applies to: 183-183, 241-241, 296-296, 349-349, 406-406, 460-460, 506-506, 562-562, 641-641, 699-699, 764-764


787-788: Approve enhanced query usage tracking.

The destructuring of the conversion function to extract both the request and inference usage, followed by the proper construction of Usage with both hardware and inference components, correctly implements the enhanced usage tracking for query operations.

Also applies to: 820-820


839-847: Excellent batch usage accumulation pattern.

The pre-allocation of InferenceUsage with capacity and the subsequent merging of individual query usage is well-optimized. The final construction of Usage with accumulated inference usage demonstrates proper resource management and usage tracking for batch operations.

Also applies to: 880-885
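The accumulate-then-attach flow described above can be sketched like this. The single-counter usage type is a simplification; the pre-allocation via with_capacity and the optional-merge handling are the points being illustrated:

```rust
use std::collections::HashMap;

#[derive(Debug, Default)]
struct InferenceUsage {
    // Simplified: model name -> token count.
    models: HashMap<String, u64>,
}

impl InferenceUsage {
    /// Pre-allocate for the expected number of entries in the batch.
    fn with_capacity(n: usize) -> Self {
        Self {
            models: HashMap::with_capacity(n),
        }
    }

    fn is_empty(&self) -> bool {
        self.models.is_empty()
    }

    fn merge_opt(&mut self, other: Option<InferenceUsage>) {
        if let Some(other) = other {
            for (model, tokens) in other.models {
                *self.models.entry(model).or_default() += tokens;
            }
        }
    }

    fn into_non_empty(self) -> Option<Self> {
        (!self.is_empty()).then_some(self)
    }
}

fn main() {
    // Simulated per-query results: two queries used inference, one did not.
    let per_query = vec![
        Some(InferenceUsage { models: HashMap::from([("m1".to_string(), 10)]) }),
        None,
        Some(InferenceUsage { models: HashMap::from([("m1".to_string(), 5), ("m2".to_string(), 7)]) }),
    ];

    let mut total = InferenceUsage::with_capacity(per_query.len());
    for usage in per_query {
        total.merge_opt(usage);
    }

    assert_eq!(total.models["m1"], 15);
    assert_eq!(total.models["m2"], 7);
    assert!(total.into_non_empty().is_some());
}
```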


907-908: Consistent usage tracking for query groups.

The implementation follows the same pattern as single query operations, correctly extracting inference usage from the conversion function and combining it with hardware usage in the response.

Also applies to: 940-940

lib/api/src/grpc/ops.rs (2)

90-130: Well-designed InferenceUsage implementation with efficient merging.

The implementation provides:

  • Clear type alias for better readability
  • Optimized with_capacity constructor for batch operations
  • Correct merge logic that accumulates token counts for existing models
  • Convenient merge_opt wrapper for optional values

65-67:

✅ Verification successful

Verify Rust version compatibility for is_none_or.

The is_none_or method was stabilized in Rust 1.82.0. Please ensure the project's minimum supported Rust version (MSRV) is compatible with this feature.


🏁 Script executed:

#!/bin/bash
# Description: Check the project's Rust version requirements

# Check for rust-toolchain file
fd -t f "rust-toolchain" -x cat {}

# Check Cargo.toml for rust-version field
rg "rust-version" --type toml -A 2

# Check for MSRV in README or documentation
rg -i "msrv|minimum.*rust" README.md || true

Length of output: 196


No changes needed: is_none_or is supported by Rust 1.87
The project’s Cargo.toml specifies rust-version = "1.87" (≥ 1.82), so is_none_or is fully supported.

src/tonic/api/update_common.rs (4)

46-82: Correct implementation of inference usage propagation in upsert.

The function properly captures and returns the inference usage from do_upsert_points, with safe handling of the optional usage value.


132-180: Consistent inference usage handling in update_vectors.

The implementation follows the same pattern as upsert, properly propagating inference usage from the internal update operation.


806-848: Good documentation and consistent API design for sync operation.

The comment clearly explains that sync operations don't generate actual inference usage, maintaining API consistency. The implementation correctly propagates any usage from point conversion.


432-433: Excellent implementation of usage accumulation in batch operations.

The code efficiently accumulates inference usage across all batch operations with:

  • Pre-allocated capacity for performance
  • Consistent handling of operations that don't generate inference usage
  • Proper usage accumulation and inclusion in the response

Also applies to: 646-661

@@ -1180,7 +1181,7 @@ message Usage {
// ---------------------------------------------

 message InferenceUsage {
-  map<string, ModelUsage> model = 1;
+  map<string, ModelUsage> models = 1;

💡 Verification agent

🧩 Analysis chain

Field rename improves semantic clarity.

Renaming model to models better represents that this is a map containing multiple model usage entries. The change aligns with similar renaming across the codebase.

Verify that all references to this field have been updated consistently:


🏁 Script executed:

#!/bin/bash
# Description: Check for any remaining references to the old 'model' field in InferenceUsage

# Search for potential references to the old field name
rg -A 3 -B 3 "InferenceUsage.*model[^s]" --type rust
rg -A 3 -B 3 "\.model\s*=" --type rust | grep -i inference

Length of output: 1399


Remaining model references need to be updated to the new models map

The rename in points.proto was successful, but the following Rust files still assign to the old model field and must be updated to use the models map instead:

  • src/common/inference/batch_processing_grpc.rs
    doc1.model = "model1".to_string();
    doc2.model = "model2".to_string();
  • src/common/inference/batch_processing.rs
    doc1.model = "model1".to_string();
    doc2.model = "model2".to_string();

Please change these to insert into the models map, for example:

doc1.models.insert("model1".to_string(), /* Usage value */);
doc2.models.insert("model2".to_string(), /* Usage value */);

Ensure all code and tests referring to .model now reference .models.

🤖 Prompt for AI Agents
In lib/api/src/grpc/proto/points.proto at line 1184, the field was renamed from
'model' to 'models' to reflect it being a map. You need to update all Rust code
that still assigns to the old 'model' field, specifically in
src/common/inference/batch_processing_grpc.rs and
src/common/inference/batch_processing.rs. Replace assignments like 'doc1.model =
"model1".to_string();' with inserting into the map:
'doc1.models.insert("model1".to_string(), /* Usage value */);'. Make sure all
references to '.model' are changed to '.models' and use map insertion
accordingly.

@@ -989,6 +989,7 @@ message RecommendGroupsResponse {
 message UpdateBatchResponse {
   repeated UpdateResult result = 1;
   double time = 2; // Time spent to process
+  optional Usage usage = 3;

💡 Verification agent

🧩 Analysis chain

Good addition for usage reporting consistency.

Adding the optional usage field to UpdateBatchResponse aligns with other response messages and supports the standardized usage reporting mentioned in the PR objectives.

Consider documenting this as a breaking change for gRPC clients, though the optional nature minimizes impact:


🌐 Web query:

Are there any compatibility guidelines for gRPC protobuf changes in the Qdrant project documentation?

💡 Result:

Qdrant utilizes gRPC as one of its main interfaces alongside REST, but the documentation doesn't provide specific Qdrant-focused compatibility guidelines for protobuf changes. However, we can understand the compatibility aspects from both Qdrant's implementation and general Protocol Buffers practices.

gRPC in Qdrant

Qdrant supports both REST and gRPC interfaces for client-server communication. The gRPC interface is available on port 6334 by default and follows the same principles as the REST API, with corresponding gRPC methods for each REST endpoint[1]. When deciding between REST and gRPC, Qdrant documentation suggests it's a trade-off between convenience and speed - gRPC is recommended for performance optimization when you're already familiar with Qdrant[1].

Protocol Buffers Compatibility Guidelines

While Qdrant-specific guidelines aren't explicitly detailed, Protocol Buffers (which gRPC uses) maintains important compatibility guarantees:

  1. Binary Wire Format Stability: The binary wire format doesn't change even in major version updates. This ensures backward and forward compatibility, allowing newer versions to read old binary wire format data and older binaries to parse newly generated protobuf data[7].

  2. Descriptor Schema Changes: In minor or patch releases, new messages, fields, enums, or values may be added, and existing elements may be marked as deprecated. Major releases may remove deprecated elements[7].

  3. Language Grammar Changes: Minor or patch releases may introduce new language constructs or alternative syntax, while major releases might remove obsolete features[7].

  4. API Compatibility: Minor or patch releases contain additive or source-compatible updates, while major releases may include breaking changes requiring client code updates[7].

Client-Server Version Compatibility in Qdrant

Qdrant does implement a version compatibility check between client and server. The Python client, for instance, verifies compatibility by comparing client and server versions. The check ensures that major versions match and minor version differences don't exceed 1[5]. This check can be skipped by setting check_compatibility=False in the client initialization.

Qdrant Client Implementation

Qdrant offers official client libraries for multiple programming languages including Python, JavaScript/TypeScript, Rust, Go, .NET, and Java[1]. These clients handle the underlying gRPC communication:

  • The Python client provides wrappers around generated gRPC client code[3]
  • The Rust client uses gRPC via the Tonic library[4]
  • Various code examples show how client implementations interact with the gRPC interface across Java, .NET, and Go[9]

If you need to work with protocol buffer definitions directly, you would need the protoc Protocol Buffers compiler, as mentioned in the Rust client documentation[4].

For languages without official client support, you can use the REST API directly or generate a client using OpenAPI or protobuf definitions[1].

Citations:


Document gRPC protobuf changes for client compatibility

Both the new optional Usage usage = 3; in UpdateBatchResponse (line 992) and the rename from

map<string, ModelUsage> model = 1;

to

map<string, ModelUsage> models = 1;

(line 1184) are wire-compatible under standard Protocol Buffers rules (added optional field, unchanged tag numbers). However, clients will need to regenerate their gRPC stubs to pick up the new usage field and the renamed models property.

Please update your release notes or CHANGELOG to call out:

  • Addition of usage field to UpdateBatchResponse (tag 3)
  • Rename of modelmodels in InferenceUsage (tag 1)

so that consumers know to regenerate their client code for full compatibility.

🤖 Prompt for AI Agents
In lib/api/src/grpc/proto/points.proto at line 992, you added an optional field
`usage` to `UpdateBatchResponse`. This is a wire-compatible change but requires
clients to regenerate their gRPC stubs to access the new field. Update the
release notes or CHANGELOG to document the addition of the `usage` field and the
rename of `model` to `models` in `InferenceUsage` at line 1184, informing
consumers to regenerate their client code for full compatibility.

@n0x29a n0x29a requested review from generall, timvisee and JojiiOfficial and removed request for JojiiOfficial May 28, 2025 14:12
&& *vector_io_write == 0
}

pub fn into_non_empty(self) -> Option<Self> {
Contributor


Minor nit, but I find this name a bit confusing. To me it suggests that values get inserted into the counter.
What about check_not_empty or some_if_not_empty?

Member


If we consume self, it should be called into_*.
check_not_empty would require a bool return type.
And into_some_if_not_empty is just too long 🤷‍♂️

@generall generall merged commit 01e5fab into dev May 29, 2025
17 checks passed
@generall generall deleted the inference-usage-grpc branch May 29, 2025 19:48
generall added a commit that referenced this pull request Jul 17, 2025
* InferenceUsage in grpc

* Update documentation for InferenceUsage

* Update lib/api/src/grpc/ops.rs

Co-authored-by: Tim Visée <tim+github@visee.me>

* clearer inference usage tracking

* remove unused inference token from delete_points

* review

---------

Co-authored-by: Tim Visée <tim+github@visee.me>
Co-authored-by: generall <andrey@vasnetsov.com>