feat: Add memory performance tests #439

Merged · 17 commits · May 7, 2024
154 changes: 147 additions & 7 deletions Cargo.lock

Some generated files are not rendered by default.

4 changes: 4 additions & 0 deletions Cargo.toml
@@ -26,6 +26,7 @@ keywords = ["starknet", "cairo", "testnet", "local", "server"]


[workspace.dependencies]

# axum
axum = { version = "0.5" }
hyper = "0.14.12"
@@ -107,3 +108,6 @@ lazy_static = { version = "1.4.0" }
ethers = { version = "2.0.11" }

openssl = { version = "0.10", features = ["vendored"] }



16 changes: 16 additions & 0 deletions README.md
@@ -573,6 +573,22 @@ $ cargo test --jobs <N>

To test whether your contribution improves execution time, check out the script at `scripts/benchmark/command_stat_test.py`.


##### Cargo Bench execution
To run the Criterion benchmarks and generate a performance report:

```
$ cargo bench
```

This command compiles the benchmarks and runs them using all available CPU cores. Criterion performs multiple iterations of each benchmark to collect performance data and generate a statistical analysis.

Check the report created at `target/criterion/report/index.html`.

Criterion is highly configurable and offers various options to customise the benchmarking process. You can find more information about Criterion and its features in the [Criterion documentation](https://bheisler.github.io/criterion.rs/book/index.html).
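
For example, sampling parameters can be adjusted per benchmark group in the benchmark source itself. The following is a minimal sketch using the Criterion 0.3 API; the group and function names are illustrative, not part of this PR:

```
use std::time::Duration;

use criterion::{criterion_group, criterion_main, Criterion};

fn configured_bench(c: &mut Criterion) {
    let mut group = c.benchmark_group("example");
    // Collect fewer samples, but give each one a longer measurement window.
    group.sample_size(20).measurement_time(Duration::from_secs(30));
    group.bench_function("add", |b| b.iter(|| 1 + 1));
    group.finish();
}

criterion_group!(benches, configured_bench);
criterion_main!(benches);
```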

To measure and benchmark memory usage, it is best to use external tools such as Valgrind or Leaks.
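
A minimal sketch with Valgrind's massif heap profiler, assuming a Linux machine with Valgrind installed; the benchmark binary name includes a build hash that varies per machine, so `<hash>` and `<pid>` below are placeholders:

```
$ cargo bench --no-run
$ valgrind --tool=massif target/release/deps/mint_bench-<hash>
$ ms_print massif.out.<pid>
```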

### Development - Docker

Due to internal needs, images with an arch suffix are built and pushed to Docker Hub, but this is not mentioned in the user docs, as users should NOT need them.
7 changes: 7 additions & 0 deletions crates/starknet-devnet/Cargo.toml
@@ -48,3 +48,10 @@ starknet-rs-core = { workspace = true }
starknet-rs-accounts = { workspace = true }
hyper = { workspace = true }
usc = { workspace = true }
criterion = { version = "0.3.4", features = ["async_tokio"] }


[[bench]]
name = "mint_bench"
harness = false
path = "benches/mint_bench.rs"
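
Since Criterion supplies its own `main`, the default libtest harness is disabled with `harness = false`. Once registered, the target can also be run on its own with the standard Cargo flag:

```
$ cargo bench --bench mint_bench
```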
35 changes: 35 additions & 0 deletions crates/starknet-devnet/benches/mint_bench.rs
@@ -0,0 +1,35 @@
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use tokio::runtime::Runtime;

use crate::common::background_devnet::BackgroundDevnet;

#[path = "../tests/common/mod.rs"]
pub mod common;

// Arbitrary recipient address and amount for the mint requests.
static DUMMY_ADDRESS: u128 = 1;
static DUMMY_AMOUNT: u128 = 1;

// Spawn a Devnet with the given state archive capacity and send 5_000 mint
// requests to it.
async fn mint_iter(capacity: &str) {
    let devnet =
        BackgroundDevnet::spawn_with_additional_args(&["--state-archive-capacity", capacity])
            .await
            .expect("Could not start Devnet");

    for _n in 1..=5_000 {
        devnet.mint(DUMMY_ADDRESS, DUMMY_AMOUNT).await;
    }
}

// Benchmark minting with both state archive capacity modes ("full" and "none").
fn bench_devnet(c: &mut Criterion) {
    let rt = Runtime::new().unwrap();
    let mut group = c.benchmark_group("Mint");
    // Keep the sample count low: every iteration spawns a Devnet and performs
    // 5_000 mints, so each sample is expensive.
    group.significance_level(0.1).sample_size(10);
    for i in ["full", "none"].iter() {
        group.bench_function(*i, |b| b.to_async(&rt).iter(|| black_box(mint_iter(*i))));
    }

    group.finish();
}

criterion_group!(benches, bench_devnet);
criterion_main!(benches);