BonsaiDb Commerce Benchmark

This benchmark suite is designed to simulate the types of loads that an ecommerce application might see under different levels of concurrency and traffic patterns. As with all benchmark suites, the results should not be taken as proof of how any database will or will not perform for any particular application. Each application's needs differ greatly, and this benchmark is primarily designed to help BonsaiDb's developers notice areas for improvement.

Comparison of all backends across all suites
| Dataset Size | Traffic Pattern | Concurrency | bonsaidb-local | bonsaidb-local+lz4 | bonsaidb-quic | bonsaidb-ws | mongodb | postgresql | Report |
|---|---|---|---|---|---|---|---|---|---|
| small | balanced | 1 | 1.568s | 1.770s | 6.523s | 5.091s | 14.82s | 10.31s | Full Report |
| small | balanced | 4 | 2.126s | 2.566s | 8.452s | 5.907s | 18.36s | 14.79s | Full Report |
| small | balanced | 8 | 4.187s | 4.523s | 14.64s | 11.78s | 21.87s | 18.14s | Full Report |
| small | readheavy | 1 | 821.3ms | 961.4ms | 4.317s | 4.005s | 8.901s | 6.080s | Full Report |
| small | readheavy | 4 | 1.294s | 1.247s | 7.120s | 5.548s | 10.85s | 7.921s | Full Report |
| small | readheavy | 8 | 2.423s | 2.381s | 11.78s | 9.238s | 11.96s | 9.250s | Full Report |
| small | writeheavy | 1 | 5.848s | 5.623s | 17.12s | 13.27s | 42.55s | 59.59s | Full Report |
| small | writeheavy | 4 | 15.48s | 13.99s | 28.83s | 24.21s | 69.22s | 131.6s | Full Report |
| small | writeheavy | 8 | 25.65s | 26.85s | 52.37s | 42.60s | 89.78s | 245.1s | Full Report |
| medium | balanced | 1 | 1.505s | 1.534s | 6.087s | 4.395s | 14.86s | 13.14s | Full Report |
| medium | balanced | 4 | 2.780s | 3.237s | 8.497s | 7.011s | 18.75s | 18.80s | Full Report |
| medium | balanced | 8 | 5.305s | 6.334s | 15.24s | 12.57s | 22.92s | 22.62s | Full Report |
| medium | readheavy | 1 | 1.206s | 1.321s | 5.155s | 2.764s | 8.755s | 8.624s | Full Report |
| medium | readheavy | 4 | 1.602s | 1.964s | 7.524s | 6.263s | 9.948s | 10.47s | Full Report |
| medium | readheavy | 8 | 2.245s | 2.824s | 11.81s | 8.664s | 11.92s | 12.66s | Full Report |
| medium | writeheavy | 1 | 6.196s | 7.677s | 16.25s | 11.38s | 42.02s | 69.59s | Full Report |
| medium | writeheavy | 4 | 13.62s | 15.82s | 29.03s | 23.89s | 68.05s | 163.1s | Full Report |
| medium | writeheavy | 8 | 23.87s | 29.41s | 47.18s | 42.29s | 88.08s | 307.8s | Full Report |
| large | balanced | 1 | 3.144s | 3.278s | 7.905s | 6.529s | 14.96s | 29.89s | Full Report |
| large | balanced | 4 | 4.288s | 4.589s | 10.77s | 7.719s | 18.43s | 34.48s | Full Report |
| large | balanced | 8 | 6.111s | 7.155s | 17.09s | 14.17s | 22.37s | 40.70s | Full Report |
| large | readheavy | 1 | 2.355s | 2.799s | 6.801s | 5.130s | 8.875s | 25.03s | Full Report |
| large | readheavy | 4 | 2.871s | 3.249s | 8.344s | 6.846s | 10.43s | 27.37s | Full Report |
| large | readheavy | 8 | 4.721s | 4.420s | 12.91s | 9.454s | 11.32s | 26.92s | Full Report |
| large | writeheavy | 1 | 7.187s | 7.834s | 17.47s | 13.49s | 42.62s | 98.36s | Full Report |
| large | writeheavy | 4 | 14.83s | 17.08s | 30.20s | 25.43s | 66.76s | 230.2s | Full Report |
| large | writeheavy | 8 | 26.98s | 27.85s | 50.68s | 42.28s | 85.36s | 417.5s | Full Report |

Dataset Sizes

The three dataset sizes are named "small", "medium", and "large". All databases being benchmarked can handle much larger dataset sizes than "large", but it is impractical at this time to run larger benchmarks on a regular basis. Each run's individual page will show the initial data set breakdown by type.

Traffic Patterns

This suite uses a probability-based system to generate plans for agents to process concurrently. These plans follow a "funnel" pattern of searching, adding to the cart, checking out, and reviewing the purchased items. Each stage in the funnel is assigned a probability, and these probabilities are tweaked to simulate a read-heavy traffic pattern that performs more searches than purchases, a write-heavy traffic pattern where most plans result in purchasing and reviewing the products, and a balanced traffic pattern that is meant to simulate a moderate amount of write traffic.
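
To make the funnel idea concrete, here is a minimal sketch of a probability-driven plan generator in Rust. It assumes the `rand` crate, and the stage names, struct, and probability values are illustrative assumptions only; they are not taken from the suite's actual configuration or code.

```rust
// Hypothetical sketch of a "funnel" plan generator; not the suite's real API.
use rand::Rng;

#[derive(Debug)]
enum Step {
    Search,
    AddToCart,
    Checkout,
    Review,
}

// Each funnel stage gets a probability; traffic patterns differ only in
// how these values are tuned (illustrative fields, not real config keys).
struct FunnelProbabilities {
    add_to_cart: f64, // chance a search leads to adding an item
    checkout: f64,    // chance a cart proceeds to checkout
    review: f64,      // chance a purchase is followed by a review
}

fn generate_plan(rng: &mut impl Rng, p: &FunnelProbabilities) -> Vec<Step> {
    let mut plan = vec![Step::Search];
    if rng.gen_bool(p.add_to_cart) {
        plan.push(Step::AddToCart);
        if rng.gen_bool(p.checkout) {
            plan.push(Step::Checkout);
            if rng.gen_bool(p.review) {
                plan.push(Step::Review);
            }
        }
    }
    plan
}

fn main() {
    let mut rng = rand::thread_rng();
    // Example read-heavy tuning: most plans stop at searching.
    let read_heavy = FunnelProbabilities {
        add_to_cart: 0.2,
        checkout: 0.3,
        review: 0.1,
    };
    println!("{:?}", generate_plan(&mut rng, &read_heavy));
}
```

Raising `add_to_cart`, `checkout`, and `review` shifts the generated plans toward the write-heavy pattern, while lowering them yields the read-heavy one.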

Concurrency

The suite is configured to run the plans at up to three concurrency levels, depending on the number of CPU cores present: 1 agent, 1 agent per core, and 2 agents per core.
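
As a rough illustration, the sketch below derives those agent counts from the machine's core count. The function name and the use of `std::thread::available_parallelism` are assumptions for illustration, not the suite's actual implementation.

```rust
// Hypothetical helper that computes the agent counts used for each run.
fn concurrency_levels() -> Vec<usize> {
    let cores = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    let mut levels = vec![1, cores, cores * 2];
    // On a single-core machine, "1 agent" and "1 agent per core" collapse
    // into one level, which is why the suite runs *up to* three times.
    levels.dedup();
    levels
}

fn main() {
    println!("agent counts per run: {:?}", concurrency_levels());
}
```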