Python pyperf module
The Python pyperf module is a toolkit to write, run and analyze benchmarks.
Features of the pyperf module:
- Simple API to run reliable benchmarks: see examples (a minimal sketch follows this list).
- Automatically calibrate a benchmark for a time budget.
- Spawn multiple worker processes.
- Compute the mean and standard deviation.
- Detect if a benchmark result seems unstable: see the pyperf check command.
- pyperf stats command to analyze the distribution of benchmark results (min/max, mean, median, percentiles, etc.).
- pyperf compare_to command tests if a difference is significant. It supports comparison between multiple benchmark suites (each made of multiple benchmarks).
- pyperf timeit command line tool for quick but reliable Python microbenchmarks.
- pyperf system tune command to tune your system to run stable benchmarks.
- Automatically collect metadata on the computer and the benchmark: use the pyperf metadata command to display them, or the pyperf collect_metadata command to manually collect them.
- --track-memory and --tracemalloc options to track the memory usage of a benchmark.
- JSON format to store benchmark results (see the loading sketch below).
- Support for multiple units: seconds, bytes and integers.
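
For the "Simple API" item above, here is a minimal sketch of a benchmark script built on the pyperf.Runner class; the benchmark name, statement and setup string are arbitrary examples, not part of the library:

    #!/usr/bin/env python3
    import pyperf

    runner = pyperf.Runner()

    # Time a short statement: pyperf calibrates the number of loops for
    # the time budget automatically and spawns worker processes to
    # collect values.
    runner.timeit(name="sort a small list",   # example name (an assumption)
                  stmt="sorted(s)",
                  setup="s = list(range(1000))")

Running such a script with the -o option, for example "python3 bench_sort.py -o bench.json" (the script and file names are placeholders), writes the results to a JSON file.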
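Results stored as JSON can also be inspected from Python. A small sketch, assuming a bench.json file containing a single benchmark was produced as above (the filename is a placeholder):

    import pyperf

    # Load one benchmark from a JSON file written by a pyperf script.
    bench = pyperf.Benchmark.load("bench.json")

    print("name:", bench.get_name())
    print("mean:", bench.mean())       # statistics also shown by pyperf stats
    print("stdev:", bench.stdev())
    print("median:", bench.median())

The same file can be analyzed on the command line with "python3 -m pyperf stats bench.json", or compared against another run with "python3 -m pyperf compare_to old.json new.json" (file names are placeholders).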
Quick Links:
- pyperf documentation (this documentation)
- pyperf project homepage at GitHub (code, bugs)
- Download the latest pyperf release at the Python Cheeseshop (PyPI)
Other Python benchmark projects:
- pyperformance: the Python benchmark suite which uses pyperf
- Python speed mailing list
- Airspeed Velocity: A simple Python benchmarking tool with web-based reporting
- pytest-benchmark
- boltons.statsutils
The pyperf project is covered by the PSF Code of Conduct.