++++++++++++++++++++
Python pyperf module
++++++++++++++++++++

The Python ``pyperf`` module is a toolkit to write, run and analyze
benchmarks.

Documentation:

.. toctree::
   :maxdepth: 2

   user_guide
   developer_guide

Features of the ``pyperf`` module:

* :ref:`Simple API <api>` to run reliable benchmarks: see the
  :ref:`examples <examples>` (and the short sketch at the end of this page).
* Automatically calibrate a benchmark for a time budget.
* Spawn multiple worker processes.
* Compute the mean and standard deviation.
* Detect if a benchmark result seems unstable: see the
  :ref:`pyperf check command <check_cmd>`.
* :ref:`pyperf stats command <stats_cmd>` to analyze the distribution of
  benchmark results (min/max, mean, median, percentiles, etc.).
* :ref:`pyperf compare_to command <compare_to_cmd>` tests if a difference
  is significant. It supports comparison between multiple benchmark suites
  (each made of multiple benchmarks).
* :ref:`pyperf timeit command line tool <timeit_cmd>` for quick but reliable
  Python microbenchmarks.
* :ref:`pyperf system tune command <system_cmd>` to tune your system to run
  stable benchmarks.
* Automatically collect metadata on the computer and the benchmark: use the
  :ref:`pyperf metadata command <metadata_cmd>` to display them, or the
  :ref:`pyperf collect_metadata command <collect_metadata_cmd>` to collect
  them manually.
* ``--track-memory`` and ``--tracemalloc`` :ref:`options <runner_cli>` to
  track the memory usage of a benchmark.
* :ref:`JSON format <json>` to store benchmark results.
* Support for multiple units: seconds, bytes and integers.

Quick Links:

* `pyperf documentation <https://pyperf.readthedocs.io/>`_ (this
  documentation)
* `pyperf project homepage at GitHub <https://github.com/psf/pyperf>`_
  (code, bugs)
* `Download the latest pyperf release at the Python Cheeseshop (PyPI)
  <https://pypi.org/project/pyperf/>`_

Other Python benchmark projects:

* `pyperformance <https://github.com/python/pyperformance>`_: the Python
  benchmark suite, which uses ``pyperf``
* `Python speed mailing list
  <https://mail.python.org/mailman/listinfo/speed>`_
* `Airspeed Velocity <https://github.com/airspeed-velocity/asv>`_: a simple
  Python benchmarking tool with web-based reporting
* `pytest-benchmark <https://pypi.org/project/pytest-benchmark/>`_
* `boltons.statsutils
  <https://boltons.readthedocs.io/en/latest/statsutils.html>`_

The pyperf project is covered by the `PSF Code of Conduct
<https://www.python.org/psf/codeofconduct/>`_.
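Example:

The :ref:`API <api>` listed above boils down to creating a ``pyperf.Runner``
and registering benchmarks on it. A minimal sketch (the benchmark name,
statement and setup code are illustrative)::

    #!/usr/bin/env python3
    # Runner() parses command line options, calibrates the benchmark,
    # spawns worker processes and computes the mean and standard deviation.
    import pyperf

    runner = pyperf.Runner()
    # timeit() benchmarks a statement; the setup code runs in each
    # worker process before the statement is timed.
    runner.timeit(name="sorted(list)",
                  stmt="sorted(s, key=f)",
                  setup="f = lambda x: x; s = list(range(1000))")

Run the script directly to print the mean and standard deviation, or pass
``-o bench.json`` to store the result in the JSON format mentioned above.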