Perf User Guide

Table of Contents:

  • Run a benchmark
    • Install pyperf
    • Run a benchmark
    • pyperf architecture
    • Runs, values, warmups, outer and inner loops
    • How to get reproducible benchmark results
    • JIT compilers
    • Specializer statistics (pystats)
  • Analyze benchmark results
    • pyperf commands
    • Statistics
    • Why is pyperf so slow?
    • Compare benchmark results
  • pyperf commands
    • pyperf show
    • pyperf compare_to
    • pyperf stats
    • pyperf check
    • pyperf dump
    • pyperf hist
    • pyperf metadata
    • pyperf timeit
    • pyperf command
    • pyperf system
    • pyperf collect_metadata
    • pyperf slowest
    • pyperf convert
  • Runner CLI
    • Loop iterations
    • Output options
    • JSON output
    • Misc
    • Internal usage only
  • Tune the system for benchmarks
    • CPU pinning and CPU isolation
    • Process priority
    • Isolate CPUs on Linux
    • NUMA
    • Features of Intel CPUs
    • Operations and checks of the pyperf system command
    • Linux documentation
    • macOS
    • Articles
    • More options
    • Notes


©2016, Victor Stinner.