Benchmark

ANN Search Benchmark

codelibs/search-ann-benchmark evaluates the performance of various Approximate Nearest Neighbor (ANN) implementations, comparing both response time and accuracy. This provides a comprehensive comparison of ANN-enabled systems for handling high-dimensional vector data.

Overview

The tests focus on two main metrics:

  • QTime (msec): The average time taken to respond to a search query, measured over 10,000 queries.
  • Precision@K: The accuracy of the search results, reported for K=10 and K=100 (see the sketch below).

The tables also include column groups labeled "Top 10" and "Top 100," which indicate how many results are retrieved per query:

  • Top 10: Retrieves the top 10 results.
  • Top 100: Retrieves the top 100 results.
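
For illustration only, the following sketch shows one way these metrics could be computed for a single system: query latency is averaged over all queries, and Precision@K is taken as the overlap between the IDs returned by the ANN search and the exact ground-truth nearest neighbors. The search_fn, queries, and ground_truth inputs are hypothetical placeholders, not part of the benchmark's actual code.

```python
import time
from typing import Callable, Dict, List, Sequence


def evaluate_ann(
    search_fn: Callable[[Sequence[float], int], List[int]],  # hypothetical ANN query function
    queries: Sequence[Sequence[float]],                       # e.g. 10,000 query vectors
    ground_truth: Sequence[Sequence[int]],                    # exact nearest-neighbor IDs per query
    k: int,                                                   # 10 or 100
) -> Dict[str, float]:
    """Return the average query time (msec) and Precision@k for one system."""
    total_msec = 0.0
    total_precision = 0.0
    for query, exact_ids in zip(queries, ground_truth):
        start = time.perf_counter()
        returned_ids = search_fn(query, k)  # ANN search returning the top-k document IDs
        total_msec += (time.perf_counter() - start) * 1000.0
        # Precision@k: fraction of the returned IDs that are true nearest neighbors.
        total_precision += len(set(returned_ids) & set(exact_ids[:k])) / k
    n = len(queries)
    return {"QTime(msec)": total_msec / n, f"Precision@{k}": total_precision / n}
```

Running this once with k=10 and once with k=100 would produce the four metric columns shown in the tables below.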

Results

The tests involve searching through a dataset of 100,000 vectors with 768 dimensions, yielding the following results:

ANN Search (Vector Only)

| Product | Top 10 QTime (msec) | Precision@10 | Top 100 QTime (msec) | Precision@100 | Test Date |
| --- | --- | --- | --- | --- | --- |
| chroma 1.4.1 | 2.8419 | 0.99219 | 4.4962 | 0.95785 | 2026-01-25 |
| elasticsearch 9.2.4 | 1.2120 | 0.99691 | 1.4305 | 0.99042 | 2026-01-25 |
| elasticsearch 9.2.4 (int8) | 0.8136 | 0.95870 | 1.0300 | 0.96423 | 2026-01-25 |
| elasticsearch 9.2.4 (int4) | 3.5758 | 0.81545 | 3.9017 | 0.84015 | 2026-01-25 |
| elasticsearch 9.2.4 (bbq) | 0.3434 | 0.93012 | 1.2152 | 0.96828 | 2026-01-25 |
| elasticsearch 9.2.4 (bbq_disk) | 0.5251 | 0.89706 | 3.0141 | 0.98265 | 2026-01-25 |
| milvus 2.6.9 | 3.4035 | 0.93506 | 3.7207 | 0.96760 | 2026-01-25 |
| opensearch 3.4.0 | 1.5412 | 0.96757 | 11.0309 | 0.99045 | 2026-01-08 |
| opensearch 3.4.0 (faiss) | 3.7035 | 0.99858 | 12.3818 | 0.99619 | 2026-01-08 |
| pgvector 0.8.1-pg17 | 4.4388 | 0.99613 | 5.8089 | 0.97694 | 2026-01-08 |
| qdrant 1.16.3 | 1.4187 | 0.99562 | 1.7837 | 0.97262 | 2026-01-25 |
| qdrant 1.16.3 (int8) | 0.9209 | 0.93186 | 1.2295 | 0.93651 | 2026-01-25 |
| vespa 8.631.39 | 1.7894 | 0.99010 | 2.0275 | 0.95153 | 2026-01-25 |
| weaviate 1.35.2 | 5.2268 | 0.99185 | 6.4101 | 0.95630 | 2026-01-08 |
| redisstack 7.4.0-v8 | 0.8867 | 0.99186 | 2.3006 | 0.95648 | 2026-01-08 |
| clickhouse 25.8 | 5.8906 | 0.94669 | 6.9339 | 0.91587 | 2026-01-08 |
| lancedb 0.26.1 | 55.6033 | 0.99907 | 88.4217 | 0.99918 | 2026-01-25 |
| vald v1.7.17 | 0.8758 | 0.84806 | 3.8194 | 0.94194 | 2026-01-08 |

ANN Search with Keyword Filtering

| Product | Top 10 QTime (msec) | Precision@10 | Top 100 QTime (msec) | Precision@100 | Test Date |
| --- | --- | --- | --- | --- | --- |
| chroma 1.4.1 | 47.8857 | 0.99869 | 50.0312 | 0.99138 | 2026-01-25 |
| elasticsearch 9.2.4 | 0.7963 | 0.99336 | 1.5184 | 0.99037 | 2026-01-25 |
| elasticsearch 9.2.4 (int8) | 0.5358 | 0.95207 | 1.1458 | 0.96488 | 2026-01-25 |
| elasticsearch 9.2.4 (int4) | 1.7360 | 0.80076 | 2.2040 | 0.84856 | 2026-01-25 |
| elasticsearch 9.2.4 (bbq) | 0.4945 | 0.93338 | 1.3776 | 0.98396 | 2026-01-25 |
| elasticsearch 9.2.4 (bbq_disk) | 0.5954 | 0.92563 | 2.6262 | 0.99055 | 2026-01-25 |
| milvus 2.6.9 | 3.6408 | 0.92584 | 4.2753 | 0.93752 | 2026-01-25 |
| opensearch 3.4.0 | 2.3322 | 0.99824 | 11.6128 | 0.99925 | 2026-01-08 |
| opensearch 3.4.0 (faiss) | 1.7004 | 0.99687 | 10.5990 | 0.99408 | 2026-01-08 |
| pgvector 0.8.1-pg17 | 4.4941 | 0.34094 | 4.5000 | 0.05647 | 2026-01-08 |
| qdrant 1.16.3 | 0.7566 | 0.98926 | 1.0921 | 0.97593 | 2026-01-25 |
| qdrant 1.16.3 (int8) | 0.7735 | 0.93351 | 1.0228 | 0.95300 | 2026-01-25 |
| vespa 8.631.39 | 5.1314 | 0.99241 | 5.6131 | 0.99173 | 2026-01-25 |
| weaviate 1.35.2 | 6.1562 | 0.99891 | 7.4382 | 0.99918 | 2026-01-08 |
| redisstack 7.4.0-v8 | 2.0235 | 0.99502 | 4.0483 | 0.99929 | 2026-01-08 |
| clickhouse 25.8 | 8.3215 | 0.33396 | 12.1641 | 0.40238 | 2026-01-08 |
| lancedb 0.26.1 | 80.3284 | 0.99899 | 379.8262 | 0.99929 | 2026-01-25 |
| vald v1.7.17 | 3.8065 | 0.33386 | 25.8800 | 0.40058 | 2026-01-08 |
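
To make the keyword-filtering scenario concrete, the following sketch shows what a filtered kNN query against one of the benchmarked systems (Elasticsearch) might look like; the index name, field names, and filter value are illustrative assumptions and do not reflect the benchmark's actual configuration.

```python
import requests

# Hypothetical index, field, and filter values; adjust to your own mapping.
query_vector = [0.0] * 768  # placeholder for a real 768-dimensional query vector

body = {
    "knn": {
        "field": "vector",                         # dense_vector field holding the embeddings
        "query_vector": query_vector,
        "k": 10,                                   # Top 10 results
        "num_candidates": 100,                     # candidates examined before final ranking
        "filter": {"term": {"category": "news"}},  # keyword filter applied during the ANN search
    },
    "_source": False,
}

response = requests.post(
    "http://localhost:9200/ann-benchmark/_search",
    json=body,
    timeout=30,
)
print([hit["_id"] for hit in response.json()["hits"]["hits"]])
```

Each benchmarked system exposes its own syntax for combining a keyword filter with vector search, and how the filter is applied (during or after the ANN search) can strongly affect both latency and precision.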

The tests are run on GitHub Actions, and the results are collected and summarized in the tables above. These benchmarks provide basic reference values to help users evaluate and select an appropriate system for their specific requirements. Be sure to test and verify the chosen solution's performance in your own environment before deployment.