
ANN Search Benchmark

codelibs/search-ann-benchmark evaluates the performance of various Approximate Nearest Neighbor (ANN) systems, comparing both response time and result accuracy. The goal is a like-for-like comparison of ANN-enabled search systems on high-dimensional vector data.

Overview

The tests focus on two main metrics:

  • QTime (msec): The average time to answer a single search query, computed over 10,000 queries.
  • Precision@K: The fraction of the exact top-K neighbors present in the returned results, measured for K=10 and K=100.

Each table reports results for two retrieval sizes, labeled "Top 10" and "Top 100":

  • Top 10: the top 10 results are retrieved per query.
  • Top 100: the top 100 results are retrieved per query.
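Precision@K, as defined above, can be computed as the overlap between the ANN engine's top-K result IDs and the exact (brute-force) top-K neighbors. The following is a minimal sketch of that calculation; the function name and toy IDs are illustrative and not taken from the benchmark's own code:

```python
def precision_at_k(ann_ids, exact_ids, k):
    """Fraction of the exact top-k neighbors recovered by the ANN result."""
    return len(set(ann_ids[:k]) & set(exact_ids[:k])) / k

# Toy example: the ANN engine recovers 9 of the 10 true neighbors.
exact = list(range(10))                        # ground-truth top-10 ids
approx = [0, 1, 2, 3, 4, 5, 6, 7, 8, 99]      # one neighbor missed
print(precision_at_k(approx, exact, 10))      # 0.9
```

In the tables below, a value such as 0.99596 means that, on average, 99.6% of the exact nearest neighbors were found.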

Results

The tests involve searching through a dataset of 100,000 vectors with 768 dimensions, yielding the following results:
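The measurement setup (average query time over many queries, with exact search as ground truth) can be sketched with a scaled-down brute-force search in NumPy. The corpus and query counts below are small stand-ins for speed, not the benchmark's actual 100,000 × 768 dataset:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
dim, n_vectors, n_queries, k = 768, 1000, 100, 10   # scaled-down stand-ins
corpus = rng.standard_normal((n_vectors, dim)).astype(np.float32)
queries = rng.standard_normal((n_queries, dim)).astype(np.float32)

def exact_top_k(q, corpus, k):
    # Exhaustive (exact) nearest-neighbor search by Euclidean distance;
    # this is what ANN results are scored against.
    d = np.linalg.norm(corpus - q, axis=1)
    return np.argsort(d)[:k]

start = time.perf_counter()
results = [exact_top_k(q, corpus, k) for q in queries]
avg_ms = (time.perf_counter() - start) / n_queries * 1000
print(f"average QTime: {avg_ms:.4f} msec over {n_queries} queries")
```

An ANN engine replaces `exact_top_k` with an index lookup (HNSW, IVF, etc.), trading a small loss in Precision@K for much lower QTime.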

ANN Search (Vector Only)

| Product | Top 10 QTime (msec) | Precision@10 | Top 100 QTime (msec) | Precision@100 | Test Date |
|---|---|---|---|---|---|
| chroma 0.5.7 | 5.2482 | 0.99225 | 8.0238 | 0.95742 | 2024-09-26 |
| elasticsearch 8.17.4 | 2.2232 | 0.99596 | 9.2025 | 0.98519 | 2025-03-30 |
| elasticsearch 8.17.4 (int8) | 1.4073 | 0.95756 | 8.5261 | 0.96112 | 2025-03-30 |
| elasticsearch 8.17.4 (int4) | 4.6519 | 0.81174 | 11.6206 | 0.83758 | 2025-03-30 |
| elasticsearch 8.17.4 (bbq) | 1.6419 | 0.66765 | 8.8195 | 0.71704 | 2025-03-30 |
| milvus 2.5.4 | 3.6221 | 0.93626 | 4.3812 | 0.96365 | 2025-02-21 |
| opensearch 2.19.1 | 1.7832 | 0.87648 | 9.9277 | 0.98408 | 2025-02-28 |
| opensearch 2.19.1 (faiss) | 6.4687 | 0.99962 | 12.1940 | 0.99695 | 2025-02-28 |
| pgvector 0.8.0-pg17 | 17.4892 | 0.99619 | 18.3212 | 0.97699 | 2024-11-29 |
| qdrant 1.13.6 | 1.6421 | 0.99937 | 1.7289 | 0.99390 | 2025-04-05 |
| qdrant 1.13.6 (int8) | 0.8514 | 0.92674 | 1.0389 | 0.94392 | 2025-04-05 |
| vespa 8.499.20 | 1.8790 | 0.99095 | 2.3363 | 0.95134 | 2025-03-19 |
| weaviate 1.28.2 | 5.5044 | 0.99290 | 6.4320 | 0.95707 | 2025-01-07 |

ANN Search with Keyword Filtering

| Product | Top 10 QTime (msec) | Precision@10 | Top 100 QTime (msec) | Precision@100 | Test Date |
|---|---|---|---|---|---|
| chroma 0.5.7 | - | - | - | - | 2024-09-26 |
| elasticsearch 8.17.4 | 2.6348 | 0.99899 | 9.7819 | 0.99922 | 2025-03-30 |
| elasticsearch 8.17.4 (int8) | 1.7223 | 0.95569 | 8.8646 | 0.96977 | 2025-03-30 |
| elasticsearch 8.17.4 (int4) | 5.0060 | 0.80622 | 12.3432 | 0.85305 | 2025-03-30 |
| elasticsearch 8.17.4 (bbq) | 1.5942 | 0.66578 | 8.9165 | 0.74942 | 2025-03-30 |
| milvus 2.5.4 | 4.0947 | 0.92782 | 4.8623 | 0.92482 | 2025-02-21 |
| opensearch 2.19.1 | 3.5636 | 0.99412 | 9.8582 | 0.99986 | 2025-02-28 |
| opensearch 2.19.1 (faiss) | 2.0508 | 1.00000 | 7.6206 | 0.99999 | 2025-02-28 |
| pgvector 0.8.0-pg17 | 17.7521 | 0.34090 | 17.9696 | 0.05644 | 2024-11-29 |
| qdrant 1.13.6 | 0.8476 | 0.99978 | 0.9226 | 0.99847 | 2025-04-05 |
| qdrant 1.13.6 (int8) | 0.6429 | 0.92953 | 0.6335 | 0.95271 | 2025-04-05 |
| vespa 8.499.20 | 4.1376 | 0.99979 | 4.4572 | 0.99382 | 2025-03-19 |
| weaviate 1.28.2 | 6.4203 | 0.99990 | 7.4898 | 0.99988 | 2025-01-07 |
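Precision under filtering depends heavily on whether an engine applies the keyword filter before or after the vector search; one common cause of sharp drops like the pgvector row above is post-filtering, where matching documents are discarded from an already-truncated result list. The toy NumPy sketch below (synthetic data and labels, not tied to any engine in the table) contrasts the two strategies:

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n, k = 32, 5000, 10
corpus = rng.standard_normal((n, dim)).astype(np.float32)
labels = rng.integers(0, 10, size=n)    # keyword label per document
q = rng.standard_normal(dim).astype(np.float32)
target = 3                              # keyword the query filters on

dists = np.linalg.norm(corpus - q, axis=1)
order = np.argsort(dists)               # all docs, nearest first

# Pre-filtering: restrict candidates to matching docs, then take top-k.
pre = [i for i in order if labels[i] == target][:k]

# Post-filtering: take the global top-k first, then drop non-matching docs.
post = [i for i in order[:k] if labels[i] == target]

print(len(pre), len(post))  # post-filtering often returns fewer than k hits
```

With 10 labels, only about one of the global top-10 matches the filter, so post-filtering returns far fewer (and less complete) results than pre-filtering, which always yields k matching hits.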

The tests are run on GitHub Actions, and the results are collected and summarized in tables. These benchmarks provide basic reference values, allowing users to evaluate and select an appropriate system based on their specific requirements. Be sure to test and verify the chosen solution's performance in your particular context before deployment.