
Conversation

@generall (Member) commented Apr 20, 2025

The current implementation of payload index building had a problem: if a payload index already existed and we tried to overwrite it with a new one, and the service was stopped during the process, we could end up in a situation where one index was created in storage while another index was configured. That caused a panic on start:

2025-04-20T09:21:25.054723Z  INFO storage::content_manager::toc: Loading collection: XXXX
2025-04-20T09:21:25.439576Z ERROR qdrant::startup: Panic backtrace:
   0: std::backtrace::Backtrace::create
   1: qdrant::startup::setup_panic_hook::{{closure}}
   2: std::panicking::rust_panic_with_hook
   3: std::panicking::begin_panic_handler::{{closure}}
   4: std::sys::backtrace::__rust_end_short_backtrace
   5: rust_begin_unwind
   6: core::panicking::panic_fmt
   7: core::result::unwrap_failed
   8: segment::index::field_index::numeric_index::mutable_numeric_index::MutableNumericIndex<T>::load
   9: segment::index::field_index::numeric_index::NumericIndexInner<T>::load
  10: segment::index::struct_payload_index::StructPayloadIndex::open
  11: segment::segment_constructor::segment_constructor_base::create_segment
  12: segment::segment_constructor::segment_constructor_base::load_segment
  13: std::sys::backtrace::__rust_begin_short_backtrace
  14: core::ops::function::FnOnce::call_once{{vtable.shim}}
  15: std::sys::pal::unix::thread::Thread::new::thread_start
  16: <unknown>
  17: <unknown>
2025-04-20T09:21:25.439604Z ERROR qdrant::startup: Panic occurred in file lib/segment/src/index/key_encoding.rs at line 120: called `Result::unwrap()` on an `Err` value: TryFromSliceError(())

To prevent this error, this PR changes the index creation function so that we always drop an incompatible index before trying to build a new one.

Additionally, this PR removes the obsolete auto-inference of the index type, which was not used anyway.
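The fix can be sketched as follows. This is a minimal, self-contained illustration with simplified stand-in types (`PayloadSchema` as a string wrapper, a toy `Segment` keeping schemas in a map), not the actual qdrant code:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the real payload schema type.
#[derive(Debug, Clone, PartialEq)]
struct PayloadSchema(String);

#[derive(Default)]
struct Segment {
    // field name -> schema of the index that exists "in storage"
    indexes: HashMap<String, PayloadSchema>,
}

impl Segment {
    /// Drop an existing index if its schema differs from the requested one.
    /// Returns true if an index was removed.
    fn delete_field_index_if_incompatible(&mut self, field: &str, new_schema: &PayloadSchema) -> bool {
        match self.indexes.get(field) {
            Some(existing) if existing != new_schema => {
                self.indexes.remove(field);
                true
            }
            _ => false,
        }
    }

    /// Build (or keep) an index. If the drop above was called first,
    /// the incompatible-schema case can no longer occur here.
    fn build_field_index(&mut self, field: &str, schema: &PayloadSchema) -> &'static str {
        if let Some(existing) = self.indexes.get(field) {
            if existing == schema {
                return "already_built";
            }
            return "incompatible_schema"; // unreachable if the drop ran first
        }
        self.indexes.insert(field.to_string(), schema.clone());
        "built"
    }
}

fn main() {
    let mut seg = Segment::default();
    let kw = PayloadSchema("keyword".into());
    let int = PayloadSchema("integer".into());

    assert_eq!(seg.build_field_index("city", &kw), "built");
    assert_eq!(seg.build_field_index("city", &kw), "already_built");

    // Re-indexing with a different schema: always drop the incompatible
    // index first, then build. This is the protocol the PR enforces.
    assert!(seg.delete_field_index_if_incompatible("city", &int));
    assert_eq!(seg.build_field_index("city", &int), "built");
    println!("ok");
}
```

Because the drop and the build are separate, durable steps, a crash between them leaves only a missing index (rebuilt on the next attempt), never a storage/config mismatch.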

use crate::index::field_index::FieldIndex;
use crate::types::PayloadFieldSchema;

pub enum BuildFieldIndexResult {
    /// Index build was skipped because of the operation version
    SkippedByVersion,
    /// A compatible index already exists
    AlreadyExists,
    /// An index exists, but with an incompatible schema;
    /// the caller must remove the old index first
    IncompatibleSchema,
    /// Index was built successfully
    Built {
        indexes: Vec<FieldIndex>,
        schema: PayloadFieldSchema,
    },
}
@generall (Member, Author) commented:

This result structure forces the caller to handle each case, including IncompatibleSchema. It makes sure we don't forget to follow the protocol of removing the existing index (otherwise it is impossible to handle the IncompatibleSchema case).

Comment on lines +327 to +333
BuildFieldIndexResult::IncompatibleSchema => {
    // This is a service error, as we should have just removed the old index
    // So it should not be possible to get this error
    return Err(OperationError::service_error(format!(
        "Incompatible schema for field index on field {key}",
    )));
}
@generall (Member, Author) commented:

This handling only makes sense because we called delete_field_index_if_incompatible beforehand.


coderabbitai bot commented Apr 20, 2025

📝 Walkthrough

This change introduces new mechanisms for managing field indexes in segments, with a focus on schema compatibility. A new enum, BuildFieldIndexResult, represents the result of building a field index, distinguishing between cases such as already existing indexes, incompatible schemas, and successful builds. The PayloadIndex trait and its implementations are refactored to use this enum, and new methods are added for conditionally deleting indexes whose schema is incompatible (delete_field_index_if_incompatible and drop_index_if_incompatible).

The ProxyIndexChange enum gains a DeleteIfIncompatible variant, and related logic is updated to process this new operation throughout the segment and optimizer layers. Schema inference functions and logic are removed, so index operations now require explicit schemas. The changes also improve error handling and control flow in index creation and management, making schema compatibility checks explicit and robust across the codebase.

Suggested reviewers

  • timvisee
  • coszio

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0d9afc6 and 37aace2.

📒 Files selected for processing (1)
  • lib/segment/src/index/plain_payload_index.rs (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/segment/src/index/plain_payload_index.rs


@coderabbitai coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 4

🔭 Outside diff range comments (3)
lib/segment/src/types.rs (1)

1858-1883: ⚠️ Potential issue

Remove auto-inference of field index type.

This code segment should be removed as it's no longer required with the new explicit schema compatibility approach. The automatic type inference mechanism has been replaced by explicit schema compatibility checks throughout the system.

When inferring types from JSON values, inconsistencies could arise if the index was interrupted during creation, leading to the startup panics described in the PR objectives.

The function value_type appears to still be in the codebase but should be removed as part of this PR. Removing this auto-inference mechanism is critical to resolving the startup panic issue described in the PR objectives.

-pub fn value_type(value: &Value) -> Option<PayloadSchemaType> {
-    match value {
-        Value::Null => None,
-        Value::Bool(_) => None,
-        Value::Number(num) => {
-            if num.is_i64() {
-                Some(PayloadSchemaType::Integer)
-            } else if num.is_f64() {
-                Some(PayloadSchemaType::Float)
-            } else {
-                None
-            }
-        }
-        Value::String(_) => Some(PayloadSchemaType::Keyword),
-        Value::Array(_) => None,
-        Value::Object(obj) => {
-            let lon_op = obj.get("lon").and_then(|x| x.as_f64());
-            let lat_op = obj.get("lat").and_then(|x| x.as_f64());
-
-            if let (Some(_), Some(_)) = (lon_op, lat_op) {
-                return Some(PayloadSchemaType::Geo);
-            }
-            None
-        }
-    }
-}
lib/segment/src/index/plain_payload_index.rs (1)

76-83: ⚠️ Potential issue

build_index never marks the new schema as applied

build_index always returns AlreadyBuilt, which short‑cuts the apply_field_index call in the upper layers.
For PlainPayloadIndex this means the segment’s indexed_fields map is never updated after
drop_index_if_incompatible removed an old (incompatible) schema, leaving the segment
without any record of the new (compatible) schema.

This breaks consistency guarantees – other segments will report the field as indexed while
plain segments silently ignore it, and SegmentEntry::get_indexed_fields() will give
conflicting answers.

A minimal fix is to return Built with an empty vector so the normal apply_field_index
path persists the schema:

-    ) -> OperationResult<BuildIndexResult> {
-        Ok(BuildIndexResult::AlreadyBuilt) // No index to build
+    ) -> OperationResult<BuildIndexResult> {
+        // no on‑disk index, but we must notify the caller that the schema needs
+        // to be persisted in `indexed_fields`
+        Ok(BuildIndexResult::Built(Vec::new()))
     }
lib/collection/src/collection_manager/holders/proxy_segment.rs (1)

1199-1214: ⚠️ Potential issue

Index creation is not recorded → wrapped segment will never learn about the new index

build_field_index successfully builds the index inside write_segment, but does not add a ProxyIndexChange::Create entry to changed_indexes.
Consequently propagate_to_wrapped() (lines 270‑301) will skip the CREATE step and the wrapped segment will continue to run without the index, defeating the whole purpose of building it in the proxy.

@@ fn build_field_index(&self, op_num: SeqNumberType, key: PayloadKeyTypeRef, field_type: &PayloadFieldSchema, hw_counter: &HardwareCounterCell) -> OperationResult<BuildFieldIndexResult> {
-        let field_index = match self
+        let field_index = match self
             .payload_index
             .borrow()
             .build_index(key, field_type, hw_counter)?
         {
             BuildIndexResult::Built(indexes) => indexes,
@@
-        Ok(BuildFieldIndexResult::Built {
-            indexes: field_index,
-            schema: field_type.clone(),
-        })
+        // Record the change so it gets propagated later
+        self.changed_indexes
+            .write()
+            .insert(key.clone(), ProxyIndexChange::Create(field_type.clone(), op_num));
+
+        Ok(BuildFieldIndexResult::Built {
+            indexes: field_index,
+            schema: field_type.clone(),
+        })
 }

Without this, index schemas will diverge again after optimisation or restart.

🧹 Nitpick comments (10)
lib/segment/src/index/plain_payload_index.rs (1)

121-129: Over‑aggressive drop in drop_index_if_incompatible

Always removing the entry even when it is compatible is safe but loses the optimisation of
avoiding an unnecessary save_config().
(You already do this optimisation in StructPayloadIndex.)

-        // Just always drop the index, as we don't have any indexes
-        self.config.indexed_fields.remove(field);
-        self.save_config()
+        if let Some(current) = self.config.indexed_fields.get(field) {
+            if current == _new_payload_schema {
+                return Ok(()); // schema matches – nothing to do
+            }
+        }
+        self.config.indexed_fields.remove(field);
+        self.save_config()
lib/collection/src/collection_manager/segments_updater.rs (1)

347-358: Consider propagating the “index was dropped” signal

delete_field_index_if_incompatible returns bool, but the result is ignored.
Forwarding it would let the caller know whether a rebuild was triggered by an actual
schema change, which could be useful for logging or telemetry:

-            write_segment.with_upgraded(|segment| {
-                segment.delete_field_index_if_incompatible(op_num, field_name, field_schema)
-            })?;
+            let index_dropped = write_segment.with_upgraded(|segment| {
+                segment.delete_field_index_if_incompatible(op_num, field_name, field_schema)
+            })?;

and later include index_dropped in the return value if desired.

lib/collection/src/collection_manager/holders/proxy_segment.rs (1)

1255-1261: get_indexed_fields removes entries on schema mismatch but never adds updated schema

When an existing schema differs, the field is simply removed from the map.
Down‑stream components cannot tell whether the field is un‑indexed or indexed with the new schema.

Return the new schema instead:

-    if let Some(existing_schema) = indexed_fields.get(field_name) {
-        if existing_schema != schema {
-            indexed_fields.remove(field_name);
-        }
-    }
+    match indexed_fields.get_mut(field_name) {
+        Some(existing) if existing != schema => *existing = schema.clone(),
+        None => { /* ignore – index was never there */ }
+        _ => {}
+    }

This keeps the caller’s view consistent with what delete_field_index_if_incompatible achieved.

lib/segment/src/index/payload_index_base.rs (2)

18-26: Derive common traits for BuildIndexResult

The enum is passed around (and sometimes compared) but does not implement Debug, Clone, or PartialEq.
Deriving them eases logging, testing, and pattern matching.

-pub enum BuildIndexResult {
+#[derive(Debug, Clone, PartialEq)]
+pub enum BuildIndexResult {

No run‑time impact, just better ergonomics.


59-65: Provide a default no‑op implementation for drop_index_if_incompatible

All index types are now forced to implement this even if they never create per‑field indexes (e.g. the simple payload index).
Adding a default implementation keeps the trait backwards‑compatible and reduces boilerplate:

fn drop_index_if_incompatible(
    &mut self,
    _field: PayloadKeyTypeRef,
    _new_payload_schema: &PayloadFieldSchema,
) -> OperationResult<()> {
    Ok(())
}

Implementers that need special logic can still override it.

lib/segment/src/segment/entry.rs (2)

21-22: Two similarly‑named result types are imported – consider an alias to avoid confusion

use crate::data_types::build_index_result::BuildFieldIndexResult;
use crate::index::{BuildIndexResult, …};

Developers reading the file must constantly remember which one is which. A simple alias clarifies intent:

use crate::index::{BuildIndexResult as PayloadBuildResult, PayloadIndex, VectorIndex};

and update the references accordingly.


768-787: Return path on IncompatibleSchema lacks guidance

The method bubbles up BuildFieldIndexResult::IncompatibleSchema but gives no hint on what the caller should do next.
Consider extending the doc‑comment (or the enum variant) to state that the caller is expected to invoke
delete_field_index_if_incompatible and retry the build.

Small, but improves DX and makes the control‑flow obvious.

lib/segment/src/index/struct_payload_index.rs (3)

41-41: PayloadContainer appears to be unused – consider removing

The extra import generates a compiler warning (unused import: 'PayloadContainer') in strict #![deny(warnings)] builds.

-    PayloadContainer, PayloadFieldSchema, PayloadKeyType, PayloadKeyTypeRef, VectorNameBuf,
+    PayloadFieldSchema, PayloadKeyType, PayloadKeyTypeRef, VectorNameBuf,

514-542: set_indexed writes configuration twice and contains an unreachable branch

  1. drop_index_if_incompatible → drop_index already persists PayloadConfig.
  2. The happy‑path (BuildIndexResult::Built) calls apply_index, which saves the config again.
    That is two disk writes for one logical operation.
  3. After drop_index_if_incompatible, the IncompatibleSchema variant can no longer occur, so the error branch is effectively dead code.

You could streamline the method and avoid redundant I/O:

-        self.drop_index_if_incompatible(field, &payload_schema)?;
+        let schema_was_dropped =
+            self.drop_index_if_incompatible(field, &payload_schema)?;

-        let field_index = match self.build_index(field, &payload_schema, hw_counter)? {
+        let field_index = match self.build_index(field, &payload_schema, hw_counter)? {
             BuildIndexResult::Built(field_index) => field_index,
             BuildIndexResult::AlreadyBuilt => {
-                // Index already built, no need to do anything
+                if schema_was_dropped {
+                    // Should be impossible – defensive check
+                    return Err(OperationError::service_error(
+                        "index dropped but BuildIndex reported AlreadyBuilt",
+                    ));
+                }
                 return Ok(());
             }
             BuildIndexResult::IncompatibleSchema => unreachable!(),
         };

Where drop_index_if_incompatible returns a bool indicating whether it actually removed anything.
This eliminates double‑flushes and clarifies control‑flow.


558-571: drop_index_if_incompatible triggers a full config flush even when unnecessary

self.drop_index(field)? persists the updated config every time schemas differ.
Immediately afterwards, set_indexed will rebuild the index and again call apply_index, causing a second save_config().

Consider:

  • Return a flag (bool) to indicate that an index was removed, allowing the caller to decide when to persist.
  • Or move save_config() out of drop_index and call it once at a higher level.

Reducing superfluous RocksDB flushes will shorten index‑rebuild latency under heavy write load.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between d2cf584 and 1e83c5c.

📒 Files selected for processing (13)
  • lib/collection/src/collection_manager/holders/proxy_segment.rs (5 hunks)
  • lib/collection/src/collection_manager/optimizers/segment_optimizer.rs (3 hunks)
  • lib/collection/src/collection_manager/segments_updater.rs (2 hunks)
  • lib/segment/src/data_types/build_index_result.rs (1 hunks)
  • lib/segment/src/data_types/mod.rs (1 hunks)
  • lib/segment/src/entry/entry_point.rs (3 hunks)
  • lib/segment/src/index/payload_index_base.rs (3 hunks)
  • lib/segment/src/index/plain_payload_index.rs (3 hunks)
  • lib/segment/src/index/struct_payload_index.rs (4 hunks)
  • lib/segment/src/segment/entry.rs (2 hunks)
  • lib/segment/src/segment/segment_ops.rs (1 hunks)
  • lib/segment/src/segment_constructor/segment_builder.rs (1 hunks)
  • lib/segment/src/types.rs (1 hunks)
  • GitHub Check: integration-tests-consensus
🔇 Additional comments (9)
lib/segment/src/data_types/mod.rs (1)

1-1: Added new module for build index results.

This change adds a new module to handle build index results, which is a good way to organize the code and make the new data types available throughout the codebase.

lib/segment/src/segment/segment_ops.rs (1)

27-28: Removed unused import types.

The removal of PayloadKeyTypeRef and PayloadSchemaType from imports aligns with the PR's goal of removing auto-inference of index types. This cleanup helps prevent using the obsolete inference functionality.

lib/segment/src/segment_constructor/segment_builder.rs (1)

167-177: Added key method to handle incompatible schema cases.

This new method implements a critical part of the PR's objective - checking for schema compatibility and conditionally removing incompatible field indexes. This helps prevent the inconsistent state that was causing panics when rebuilding indexes.

The implementation correctly:

  1. Checks if the field exists in the indexed fields
  2. Compares the schemas for compatibility
  3. Removes the field if schemas are incompatible
lib/segment/src/data_types/build_index_result.rs (1)

1-16: Well-designed enum for robust field index building results.

This new enum provides a comprehensive way to represent the different outcomes of building a field index, which is central to addressing the index inconsistency issues in the PR.

The enum variants effectively cover all necessary scenarios:

  • SkippedByVersion: For handling version-based operation skipping
  • AlreadyExists: For preventing duplicate work
  • IncompatibleSchema: For detecting schema mismatches (critical for the bug fix)
  • Built: For successful outcomes with detailed result information
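The exhaustive handling this enum enforces can be sketched as follows (payload types reduced to plain strings for a self-contained example; these are not the real qdrant signatures):

```rust
// Simplified mirror of the enum described above; the real variants carry
// FieldIndex and PayloadFieldSchema values instead of strings.
enum BuildFieldIndexResult {
    SkippedByVersion,
    AlreadyExists,
    IncompatibleSchema,
    Built { indexes: Vec<String>, schema: String },
}

fn handle(result: BuildFieldIndexResult) -> Result<Option<Vec<String>>, String> {
    match result {
        // Nothing to apply in these cases.
        BuildFieldIndexResult::SkippedByVersion => Ok(None),
        BuildFieldIndexResult::AlreadyExists => Ok(None),
        // The compiler forces this arm to exist: the caller cannot silently
        // forget the drop-before-build protocol.
        BuildFieldIndexResult::IncompatibleSchema => {
            Err("incompatible schema: old index should have been dropped first".to_string())
        }
        BuildFieldIndexResult::Built { indexes, .. } => Ok(Some(indexes)),
    }
}

fn main() {
    assert!(handle(BuildFieldIndexResult::IncompatibleSchema).is_err());
    assert_eq!(handle(BuildFieldIndexResult::SkippedByVersion), Ok(None));
    println!("ok");
}
```

Because Rust matches must be exhaustive, adding a variant like IncompatibleSchema is itself the mechanism that propagates the new protocol to every call site.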
lib/collection/src/collection_manager/optimizers/segment_optimizer.rs (3)

457-459: Add support for DeleteIfIncompatible variant in index changes processing.

This change implements handling for the new ProxyIndexChange::DeleteIfIncompatible variant, which is used to conditionally delete a field index if it has an incompatible schema. It adds a call to remove_index_field_if_incompatible on the segment builder.


538-541: Enhance segment optimization with compatibility-aware deletion logic.

This change adds handling for the DeleteIfIncompatible operation during segment optimization by calling delete_field_index_if_incompatible on the optimized segment. This ensures that existing field indexes with incompatible schemas are properly removed before creating new ones.


724-727: Support schema compatibility checks during index changes processing.

This change integrates the schema compatibility check logic into the segment optimization phase when processing index changes. It ensures that any incompatible field index is properly deleted before a new one is created, preventing potential panics during startup if an incompatible index structure exists.

lib/collection/src/collection_manager/holders/proxy_segment.rs (1)

291-295: Order of operations may recreate the incompatible index

DeleteIfIncompatible is processed after Create/Delete (loop keeps the original order of HashMap).
If the optimizer inserted both DeleteIfIncompatible and a following Create, the latter would rebuild the index that has just been conditionally dropped.

Either:

  1. Sort changed_indexes so that DeleteIfIncompatible entries are applied before Create, or
  2. Perform an explicit second‑pass check for newly incompatible indexes.

Otherwise a corrupted schema can slip through.
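The first suggested fix (applying DeleteIfIncompatible entries before Create) could be sketched roughly like this, with the enum payloads omitted for brevity (the real variants carry schemas and sequence numbers):

```rust
// Hypothetical, payload-free version of the change variants, ordered so
// that a stable sort applies conditional deletes before creates.
#[derive(Debug, Clone, PartialEq)]
enum ProxyIndexChange {
    DeleteIfIncompatible,
    Delete,
    Create,
}

fn apply_order(changes: &mut Vec<ProxyIndexChange>) {
    // Stable sort by priority: conditional deletes first, then plain
    // deletes, then creates; ties keep their original relative order.
    changes.sort_by_key(|c| match c {
        ProxyIndexChange::DeleteIfIncompatible => 0,
        ProxyIndexChange::Delete => 1,
        ProxyIndexChange::Create => 2,
    });
}

fn main() {
    let mut changes = vec![
        ProxyIndexChange::Create,
        ProxyIndexChange::DeleteIfIncompatible,
    ];
    apply_order(&mut changes);
    // The conditional delete now runs before the create, so a just-built
    // compatible index is not dropped and then rebuilt.
    assert_eq!(changes[0], ProxyIndexChange::DeleteIfIncompatible);
    println!("ok");
}
```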

lib/segment/src/index/struct_payload_index.rs (1)

34-34: Import looks good

Adding BuildIndexResult to the prelude is required for the refactor – no concerns here.

return if prev_schema == payload_schema {
    Ok(BuildIndexResult::AlreadyBuilt)
} else {
    Ok(BuildIndexResult::IncompatibleSchema)
};
@generall (Member, Author) commented:

We can't handle IncompatibleSchema inside the build_index function, because this function only has read-only access to the segment. This is important, since we want to split index building and index registration: index building is a long process, and we don't want to run it while holding a write lock. Therefore we have to process IncompatibleSchema externally.
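The build/registration split described here can be sketched with std::sync::RwLock standing in for the actual segment locking (a simplified illustration, not the real proxy-segment code):

```rust
use std::sync::RwLock;

struct Segment {
    registered: Vec<String>, // names of registered field indexes
}

// Phase 1: build the index while holding only a read lock. Building is the
// slow part, and concurrent readers stay unblocked for its duration.
fn build_index(seg: &RwLock<Segment>, field: &str) -> Vec<u8> {
    let _guard = seg.read().unwrap();
    // ...an expensive scan over the segment's payload would happen here...
    field.bytes().collect()
}

// Phase 2: register the already-built index under a short-lived write lock.
// This is the only step that needs exclusive access, and it is cheap.
fn apply_index(seg: &RwLock<Segment>, field: &str, _built: Vec<u8>) {
    let mut guard = seg.write().unwrap();
    guard.registered.push(field.to_string());
}

fn main() {
    let seg = RwLock::new(Segment { registered: Vec::new() });
    let built = build_index(&seg, "city");
    apply_index(&seg, "city", built);
    assert_eq!(seg.read().unwrap().registered, vec!["city".to_string()]);
    println!("ok");
}
```

Since phase 1 is read-only, it cannot drop an incompatible index itself; that is why the drop has to happen in a separate, externally driven step before the build.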

Comment on lines +18 to +26
pub enum BuildIndexResult {
    /// Index was built
    Built(Vec<FieldIndex>),
    /// Index was already built
    AlreadyBuilt,
    /// Field Index already exists, but incompatible schema
    /// Requires extra actions to remove the old index.
    IncompatibleSchema,
}
A contributor commented:

This looks too similar to BuildFieldIndexResult. Is it possible to keep just one, or what is the reason for a separate enum?

@generall (Member, Author) replied:

Similar, but the other one has one extra field.

@generall generall requested a review from agourlay April 23, 2025 09:15
) -> OperationResult<()> {
    let payload_schema = payload_schema.into();

    self.drop_index_if_incompatible(field, &payload_schema)?;
A member commented:

Did you manually test that the index is properly created on restart if we crash just after deleting the incompatible index?

@generall (Member, Author) replied:

I did

@generall generall merged commit 2a7c931 into dev Apr 23, 2025
17 checks passed
@generall generall deleted the careful-index-creation branch April 23, 2025 13:36
generall added a commit that referenced this pull request May 22, 2025
* make sure to delete existing index, if it is not compatible with a new one

* fmt

* fix tests

* review fixes