ANN Search Benchmark

codelibs/search-ann-benchmark evaluates the performance of various Approximate Nearest Neighbor (ANN) search systems, comparing both response time and accuracy. It provides a side-by-side comparison of ANN-enabled systems on high-dimensional vector data.

Overview

The tests focus on two main metrics:

  • QTime (msec): The time taken to respond to a search query, averaged over 10,000 queries.
  • Precision@K: The fraction of the top K results that match the exact nearest neighbors, measured for K=10 and K=100.
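The Precision@K metric above can be sketched as follows. This is a simplified illustration, not the project's actual scoring code; the function name and the example ID lists are assumptions.

```python
def precision_at_k(exact_ids, approx_ids, k):
    """Fraction of the approximate top-k results that also appear in
    the exact (brute-force) top-k for the same query."""
    return len(set(exact_ids[:k]) & set(approx_ids[:k])) / k

# Hypothetical document IDs for one query:
exact = [7, 2, 9, 4, 1]   # ground truth from exhaustive search
approx = [7, 9, 2, 8, 1]  # what the ANN index returned
print(precision_at_k(exact, approx, 5))  # 4 of 5 IDs overlap -> 0.8
```

Averaging this value over all queries gives the Precision@10 and Precision@100 columns in the tables below.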

The "Top 10" and "Top 100" columns in the tables correspond to retrieving the 10 and 100 nearest neighbors per query, respectively, with QTime and Precision@K reported for each setting.

Results

The tests involve searching through a dataset of 100,000 vectors with 768 dimensions, yielding the following results:
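A measurement loop of this kind can be sketched as below. This is a scaled-down stand-in, not the harness used by codelibs/search-ann-benchmark: the corpus and query counts are reduced so the sketch runs quickly, and an exact cosine search takes the place of a call to the ANN system under test.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
# Stand-in corpus (the real benchmark uses 100,000 vectors x 768 dims).
docs = rng.standard_normal((10_000, 768)).astype(np.float32)
docs /= np.linalg.norm(docs, axis=1, keepdims=True)  # unit-normalize once
queries = rng.standard_normal((100, 768)).astype(np.float32)

def search(query, k):
    # Exact cosine search via dot products on normalized vectors;
    # a real run would query the ANN system being benchmarked instead.
    scores = docs @ (query / np.linalg.norm(query))
    return np.argsort(-scores)[:k]

elapsed_ms = []
for q in queries:
    t0 = time.perf_counter()
    top10 = search(q, 10)
    elapsed_ms.append((time.perf_counter() - t0) * 1000.0)

avg_qtime = sum(elapsed_ms) / len(elapsed_ms)
print(f"avg QTime over {len(queries)} queries: {avg_qtime:.4f} msec")
```

The reported QTime is this per-query average; the exact top-k lists from a brute-force pass like `search` above also serve as the ground truth for Precision@K.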

ANN Search (Vector Only)

| Product | QTime (Top 10) | Precision@10 | QTime (Top 100) | Precision@100 | Test Date |
|---|---|---|---|---|---|
| chroma 0.5.7 | 5.2482 | 0.99225 | 8.0238 | 0.95742 | 2024-09-26 |
| elasticsearch 8.17.0 | 2.2724 | 0.99609 | 9.3091 | 0.98537 | 2024-12-15 |
| elasticsearch 8.17.0 (int8) | 1.6932 | 0.95667 | 8.8113 | 0.95956 | 2024-12-15 |
| elasticsearch 8.17.0 (int4) | 4.4064 | 0.81461 | 11.2097 | 0.84030 | 2024-12-15 |
| elasticsearch 8.17.0 (bbq) | 1.5803 | 0.66819 | 8.8007 | 0.71677 | 2024-12-15 |
| milvus 2.4.12 | 3.9257 | 0.92656 | 4.4710 | 0.96553 | 2024-10-12 |
| opensearch 2.18.0 | 1.7899 | 0.87579 | 10.0962 | 0.98386 | 2024-11-08 |
| opensearch 2.18.0 (faiss) | 15.2966 | 0.99953 | 21.1456 | 0.99724 | 2024-11-08 |
| pgvector 0.8.0-pg17 | 17.4892 | 0.99619 | 18.3212 | 0.97699 | 2024-11-29 |
| qdrant 1.12.5 | 2.0661 | 0.99971 | 2.1771 | 0.99660 | 2024-12-14 |
| qdrant 1.12.5 (int8) | 0.9803 | 0.92322 | 1.0785 | 0.94098 | 2024-12-14 |
| vespa 8.448.13 | 1.7728 | 0.99119 | 2.0382 | 0.95152 | 2024-11-29 |
| weaviate 1.27.7 | 5.2703 | 0.99278 | 6.3095 | 0.95684 | 2024-12-06 |

ANN Search with Keyword Filtering

| Product | QTime (Top 10) | Precision@10 | QTime (Top 100) | Precision@100 | Test Date |
|---|---|---|---|---|---|
| chroma 0.5.7 | - | - | - | - | 2024-09-26 |
| elasticsearch 8.17.0 | 2.6714 | 0.99899 | 9.8360 | 0.99922 | 2024-12-15 |
| elasticsearch 8.17.0 (int8) | 1.6476 | 0.95560 | 8.9155 | 0.96989 | 2024-12-15 |
| elasticsearch 8.17.0 (int4) | 4.9727 | 0.80813 | 12.2072 | 0.85514 | 2024-12-15 |
| elasticsearch 8.17.0 (bbq) | 1.5651 | 0.66601 | 8.8278 | 0.74942 | 2024-12-15 |
| milvus 2.4.12 | 4.4185 | 0.92319 | 5.0487 | 0.92495 | 2024-10-12 |
| opensearch 2.18.0 | 3.4966 | 0.99344 | 9.8424 | 0.99991 | 2024-11-08 |
| opensearch 2.18.0 (faiss) | 2.0230 | 0.99775 | 7.8931 | 0.99504 | 2024-11-08 |
| pgvector 0.8.0-pg17 | 17.7521 | 0.34090 | 17.9696 | 0.05644 | 2024-11-29 |
| qdrant 1.12.5 | 0.9176 | 0.99994 | 1.0172 | 0.99961 | 2024-12-14 |
| qdrant 1.12.5 (int8) | 0.5759 | 0.92517 | 0.6517 | 0.94958 | 2024-12-14 |
| vespa 8.448.13 | 3.7106 | 0.99980 | 4.0971 | 0.99375 | 2024-11-29 |
| weaviate 1.27.7 | 6.1376 | 0.99991 | 7.2258 | 0.99988 | 2024-12-06 |

The tests are run on GitHub Actions, and the results are collected and summarized in the tables above. These benchmarks provide baseline reference values to help users evaluate and select a system for their specific requirements. Be sure to test and verify the chosen solution's performance in your own environment before deployment.