Traditional vector search relies on single, fixed-size embeddings (dense vectors) for documents and queries. While powerful, this approach can lose nuanced, token-level details.
Multi-vector search, used in models like ColBERT or ColPali, replaces a single document or image vector with a set of per-token vectors. This enables a "late interaction" mechanism, where fine-grained similarity is calculated term-by-term to boost retrieval accuracy.
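For intuition, late interaction is commonly implemented as a MaxSim operator: every query token is compared against every document token, each query token keeps its best match, and the maxima are summed. Here is a minimal sketch in plain PyTorch, as an illustration of the general technique rather than FastPlaid's internal code:

```python
import torch

def maxsim(query: torch.Tensor, document: torch.Tensor) -> torch.Tensor:
    """Score one query against one document with late interaction (MaxSim).

    query:    (num_query_tokens, dim) token embeddings.
    document: (num_doc_tokens, dim) token embeddings.
    """
    # Pairwise similarities between every query token and every document token.
    similarities = query @ document.T  # (num_query_tokens, num_doc_tokens)
    # Each query token keeps its best-matching document token; sum the maxima.
    return similarities.max(dim=1).values.sum()

query = torch.randn(50, 128)
document = torch.randn(300, 128)
print(maxsim(query, document))
```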
- Higher Accuracy: by matching at a granular, token level, FastPlaid captures subtle relevance that single-vector models simply miss.
- PLAID: stands for Per-Token Late Interaction Dense Search.
- Blazing Performance: engineered in Rust and optimized for GPUs.
```bash
pip install fast-plaid
```
FastPlaid is built against `torch` version 2.8.0. To use FastPlaid with a lower version of `torch`, you can build `fast-plaid` from source:
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
pip install git+https://github.com/lightonai/fast-plaid.git
```
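If you are unsure which `torch` version your environment uses, and therefore whether you need the source build at all, a quick check:

```python
import torch

# Prebuilt wheels target torch 2.8.0; if this prints a lower version,
# build fast-plaid from source as shown above.
print(torch.__version__)
```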
Get started with creating an index and performing a search in just a few lines of Python.
```python
import torch

from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")

embedding_dim = 128

# Index 100 documents, each with 300 tokens; each token is a 128-dim vector.
fast_plaid.create(
    documents_embeddings=[torch.randn(300, embedding_dim) for _ in range(100)]
)

# Search with 2 queries, each with 50 tokens; each token is a 128-dim vector.
scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, embedding_dim),
    top_k=10,
)

print(scores)
```
The output is a list of lists, where each inner list contains `(document_index, similarity_score)` tuples for the top `top_k` results of the corresponding query:
```python
[
    [
        (20, 1334.55),
        (91, 1299.57),
        (59, 1285.78),
        (10, 1273.53),
        (62, 1267.96),
        (44, 1265.55),
        (15, 1264.42),
        (34, 1261.19),
        (19, 1261.05),
        (86, 1260.94),
    ],
    [
        (58, 1313.85),
        (75, 1313.82),
        (79, 1305.32),
        (61, 1304.45),
        (64, 1303.67),
        (68, 1302.98),
        (66, 1301.23),
        (65, 1299.78),
    ],
]
```
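The integer in each tuple is the position of the document in the list you passed to `.create()`, so you can map results back to your own records with ordinary indexing. Continuing the quick-start snippet above (`scores` as returned by `.search()`), and assuming a hypothetical `corpus` list kept aligned with the indexing order:

```python
# Hypothetical corpus, aligned with the order passed to .create().
corpus = [f"document {i}" for i in range(100)]

for query_id, results in enumerate(scores):
    print(f"query {query_id}:")
    for doc_index, score in results:
        print(f"  {score:.2f}  {corpus[doc_index]}")
```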
```python
import torch

from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")  # Load an existing index.

embedding_dim = 128

# Add 100 new documents to the existing index without rebuilding it.
fast_plaid.update(
    documents_embeddings=[torch.randn(300, embedding_dim) for _ in range(100)]
)

scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, embedding_dim),
    top_k=10,
)

print(scores)
```
It is highly recommended to create your initial index with a large and representative sample of your data. The `.create()` method establishes the fundamental structure of the index by computing centroids tailored to the distribution of this initial dataset.

The `.update()` method, designed for efficiency, does not re-compute these centroids; it places new documents into the existing structure. If you frequently update the index with large volumes of data whose distribution differs from the original set, you may experience "drift": the fixed centroids become less representative of the total collection, leading to sub-optimal data partitioning and a gradual decline in retrieval accuracy over time. Building a robust initial index is therefore key to its long-term health. If your data distribution changes significantly, consider periodically re-creating the index with a new, representative sample, or skip `.update()` entirely and rely on `.create()`, which deletes the existing index and rebuilds it from scratch.
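Here is a minimal sketch of such a maintenance policy; the threshold and bookkeeping are illustrative assumptions, not part of the FastPlaid API:

```python
import torch
from fast_plaid import search

embedding_dim = 128

# Illustrative policy, not part of the FastPlaid API: rebuild from scratch
# whenever a new batch is large relative to the already-indexed set.
def add_documents(fast_plaid, all_embeddings, new_embeddings, rebuild_ratio=0.5):
    if len(new_embeddings) > rebuild_ratio * len(all_embeddings):
        # Re-create so the centroids reflect the combined distribution.
        fast_plaid.create(documents_embeddings=all_embeddings + new_embeddings)
    else:
        # Small batch: slot the documents into the existing centroid structure.
        fast_plaid.update(documents_embeddings=new_embeddings)
    return all_embeddings + new_embeddings

fast_plaid = search.FastPlaid(index="index")
all_embeddings = [torch.randn(300, embedding_dim) for _ in range(1_000)]
fast_plaid.create(documents_embeddings=all_embeddings)

all_embeddings = add_documents(
    fast_plaid, all_embeddings, [torch.randn(300, embedding_dim) for _ in range(10)]
)
```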
FastPlaid significantly outperforms the original PLAID engine across various datasets, delivering comparable accuracy with faster indexing and query speeds.
| Dataset | Size | Library | NDCG@10 | Indexing time (s) | Queries per second (QPS) |
|---|---|---|---|---|---|
| arguana | 8674 | PLAID | 0.46 | 4.30 | 56.73 |
| arguana | 8674 | FastPlaid | 0.46 | 4.72 | 155.25 (+174%) |
| fiqa | 57638 | PLAID | 0.41 | 17.65 | 48.13 |
| fiqa | 57638 | FastPlaid | 0.41 | 12.62 | 146.62 (+205%) |
| nfcorpus | 3633 | PLAID | 0.37 | 2.30 | 78.31 |
| nfcorpus | 3633 | FastPlaid | 0.37 | 2.10 | 243.42 (+211%) |
| quora | 522931 | PLAID | 0.88 | 40.01 | 43.06 |
| quora | 522931 | FastPlaid | 0.87 | 11.23 | 281.51 (+554%) |
| scidocs | 25657 | PLAID | 0.19 | 13.32 | 57.17 |
| scidocs | 25657 | FastPlaid | 0.18 | 10.86 | 157.47 (+175%) |
| scifact | 5183 | PLAID | 0.74 | 3.43 | 67.66 |
| scifact | 5183 | FastPlaid | 0.75 | 3.16 | 190.08 (+181%) |
| trec-covid | 171332 | PLAID | 0.84 | 69.46 | 32.09 |
| trec-covid | 171332 | FastPlaid | 0.83 | 45.19 | 54.11 (+69%) |
| webis-touche2020 | 382545 | PLAID | 0.25 | 128.11 | 31.94 |
| webis-touche2020 | 382545 | FastPlaid | 0.24 | 74.50 | 70.15 (+120%) |
All benchmarks were performed on an H100 GPU. It's important to note that PLAID relies on Just-In-Time (JIT) compilation. This means the very first execution can exhibit longer runtimes. To ensure our performance analysis is representative, we've excluded these initial JIT-affected runs from the reported results. In contrast, FastPlaid does not employ JIT compilation, so its performance on the first run is directly indicative of its typical execution speed.
FastPlaid builds upon the groundbreaking work of the original PLAID engine (Santhanam et al., 2022).
You can cite FastPlaid in your work as follows:
```bibtex
@misc{fastplaid2025,
  author = {Sourty, Raphaël},
  title = {FastPlaid: A High-Performance Engine for Multi-Vector Search},
  year = {2025},
  url = {https://github.com/lightonai/fast-plaid}
}
```
And for the original PLAID research:
```bibtex
@inproceedings{santhanam2022plaid,
  title = {{PLAID}: an efficient engine for late interaction retrieval},
  author = {Santhanam, Keshav and Khattab, Omar and Potts, Christopher and Zaharia, Matei},
  booktitle = {Proceedings of the 31st ACM International Conference on Information \& Knowledge Management},
  pages = {1747--1756},
  year = {2022}
}
```
The `FastPlaid` class is the core component for building and querying multi-vector search indexes. It's designed for high performance, especially when leveraging GPUs.

To create an instance of `FastPlaid`, provide the directory where your index will be stored and specify the device(s) for computation.
```python
class FastPlaid:
    def __init__(
        self,
        index: str,
        device: str | list[str] | None = None,
    ) -> None:
```
`index: str`
The file path to the directory where your index will be saved or loaded from.

`device: str | list[str] | None = None`
Specifies the device(s) to use for computation.
- If `None` (default) and CUDA is available, it defaults to `"cuda"`.
- If CUDA is not available, it defaults to `"cpu"`.
- Can be a single device string (e.g., `"cuda:0"` or `"cpu"`).
- Can be a list of device strings (e.g., `["cuda:0", "cuda:1"]`).
- If multiple GPUs are specified and available, multiprocessing is automatically set up for parallel execution.
Remember to include your code within an `if __name__ == "__main__":` block for proper multiprocessing behavior.
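For example, a multi-GPU setup might look like the sketch below (the device names are assumptions about your hardware); note the `__main__` guard required for multiprocessing:

```python
import torch
from fast_plaid import search

if __name__ == "__main__":
    # Two GPUs (hypothetical device names), searched in parallel via multiprocessing.
    fast_plaid = search.FastPlaid(index="index", device=["cuda:0", "cuda:1"])
    scores = fast_plaid.search(
        queries_embeddings=torch.randn(2, 50, 128),
        top_k=10,
    )
    print(scores)
```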
The `create` method builds the multi-vector index from your document embeddings. It uses K-means clustering to organize your data for efficient retrieval.
```python
def create(
    self,
    documents_embeddings: list[torch.Tensor],
    kmeans_niters: int = 4,
    max_points_per_centroid: int = 256,
    nbits: int = 4,
    n_samples_kmeans: int | None = None,
) -> "FastPlaid":
```
`documents_embeddings: list[torch.Tensor]`
A list where each element is a PyTorch tensor representing the multi-vector embedding for a single document. Each document's embedding should have a shape of `(num_tokens, embedding_dimension)`.

`kmeans_niters: int = 4` (optional)
The number of iterations for the K-means algorithm used during index creation. This influences the quality of the initial centroid assignments.

`max_points_per_centroid: int = 256` (optional)
The maximum number of points (token embeddings) that can be assigned to a single centroid during K-means. This helps in balancing the clusters.

`nbits: int = 4` (optional)
The number of bits to use for product quantization. This parameter controls the compression of your embeddings, impacting both index size and search speed. Lower values mean more compression and potentially faster searches, but can reduce accuracy.

`n_samples_kmeans: int | None = None` (optional)
The number of samples to use for K-means clustering. If `None`, it defaults to a value based on the number of documents. This parameter can be adjusted to balance speed, memory usage, and clustering quality. For a large dataset, you might set this to a smaller value to speed up indexing and save memory, as in the sketch below.
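Putting these parameters together, here is a sketch of indexing a larger collection with heavier compression and a capped K-means sample; all values are illustrative, not tuned recommendations:

```python
import torch
from fast_plaid import search

embedding_dim = 128
fast_plaid = search.FastPlaid(index="index")

fast_plaid.create(
    documents_embeddings=[torch.randn(300, embedding_dim) for _ in range(10_000)],
    kmeans_niters=4,            # default; more iterations refine the centroids
    max_points_per_centroid=256,
    nbits=2,                    # heavier compression: smaller index, possibly lower accuracy
    n_samples_kmeans=50_000,    # cap the K-means sample to bound memory and indexing time
)
```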
The `update` method provides an efficient way to add new documents to an existing index without rebuilding it from scratch. This is significantly faster than calling `.create()` again, as it reuses the existing quantization configuration and only processes the new documents. Because the centroids and quantization parameters remain unchanged, this might lead to a slight decrease in accuracy compared to a full re-indexing.
```python
def update(
    self,
    documents_embeddings: list[torch.Tensor],
) -> "FastPlaid":
```
`documents_embeddings: list[torch.Tensor]`
A list where each element is a PyTorch tensor representing the multi-vector embedding for a single document. Each document's embedding should have a shape of `(num_tokens, embedding_dimension)`. This method adds these new embeddings to the existing index.
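Because `.update()` appends to an existing index, a common pattern is to keep an external mapping from index positions to your own document IDs. The sketch below assumes (verify for your FastPlaid version) that new documents receive indices after the existing ones; the `doc_ids` bookkeeping is hypothetical, not part of the API:

```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")  # existing index with 100 documents

# Hypothetical external IDs for the documents already in the index.
doc_ids = [f"doc-{i}" for i in range(100)]

# Assumption (verify for your version): new documents are appended, so their
# indices continue after the existing ones.
new_batch = [torch.randn(300, 128) for _ in range(10)]
fast_plaid.update(documents_embeddings=new_batch)
doc_ids.extend(f"doc-{i}" for i in range(100, 110))

# A doc_index returned by .search() can then be resolved via doc_ids[doc_index].
```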
The `search` method lets you query the created index with your query embeddings and retrieve the most relevant documents.
```python
def search(
    self,
    queries_embeddings: torch.Tensor,
    top_k: int = 10,
    batch_size: int = 1 << 18,
    n_full_scores: int = 8192,
    n_ivf_probe: int = 8,
    show_progress: bool = True,
) -> list[list[tuple[int, float]]]:
```
`queries_embeddings: torch.Tensor`
A PyTorch tensor representing the multi-vector embeddings of your queries. Its shape should be `(num_queries, num_tokens_per_query, embedding_dimension)`.

`top_k: int = 10` (optional)
The number of top-scoring documents to retrieve for each query.

`batch_size: int = 1 << 18` (optional)
The internal batch size used for processing queries. A larger batch size might improve throughput on powerful GPUs but consumes more memory.

`n_full_scores: int = 8192` (optional)
The number of candidate documents for which full (re-ranked) scores are computed. This is a crucial parameter for accuracy; higher values yield more accurate results but increase computation.

`n_ivf_probe: int = 8` (optional)
The number of inverted file list "probes" to perform during the search. This controls how many clusters are searched within the index for each query. Higher values improve recall but increase search time.

`show_progress: bool = True` (optional)
If `True`, a progress bar is displayed during the search operation.
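For example, trading some latency for recall by probing more clusters and re-ranking more candidates (the values are illustrative):

```python
import torch
from fast_plaid import search

fast_plaid = search.FastPlaid(index="index")

scores = fast_plaid.search(
    queries_embeddings=torch.randn(2, 50, 128),
    top_k=10,
    n_ivf_probe=16,        # probe more clusters: higher recall, slower search
    n_full_scores=16_384,  # re-rank more candidates: higher accuracy, more compute
    show_progress=False,
)
```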
Any contributions to FastPlaid are welcome! If you have ideas for improvements, bug fixes, or new features, please open an issue or submit a pull request. We are particularly interested in:
- Re-computing centroids when using the `.update()` method to maintain optimal performance.
- Additional algorithms for multi-vector search.
- New search output formats for better integration with existing systems.