
README.rst

Overview


A pytest fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.

See calibration and FAQ.

  • Free software: BSD 2-Clause License

Installation

pip install pytest-benchmark

Documentation

For the latest release: pytest-benchmark.readthedocs.org/en/stable.

For master branch (may include documentation fixes): pytest-benchmark.readthedocs.io/en/latest.

Examples

But first, a prologue:

This plugin tightly integrates into pytest. To use this effectively you should know a thing or two about pytest first. Take a look at the introductory material or watch talks.

A few notes:

  • This plugin benchmarks functions, and only functions. If you want to measure a block of code or a whole program you will need to write a wrapper function.
  • Within a single test you can only benchmark one function. If you want to benchmark many functions, write more tests or use parametrization.
  • To run the benchmarks, simply use pytest to run your "tests". The plugin will automatically do the benchmarking and generate a result table. Run pytest --help for more details.

This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.

Example:

import time

def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123

def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result, fast functions
    # are no good if they return incorrect results :-)
    assert result == 123

You can also pass extra arguments:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.02)

Or even keyword arguments (note that time.sleep itself does not accept keyword arguments, so this uses the something function defined above, which does):

def test_my_stuff(benchmark):
    benchmark(something, duration=0.02)

Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:

def test_my_stuff(benchmark):
    @benchmark
    def something():  # unnecessary function call
        time.sleep(0.000001)

A better way is to just benchmark the final function:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine-grained control over how the benchmark is run (such as a setup function, or exact control of iterations and rounds), there's a special mode: pedantic.

def my_special_setup():
    ...

def test_with_setup(benchmark):
    benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)

Screenshots

Normal run:

Screenshot of pytest summary

Compare mode (--benchmark-compare):

Screenshot of pytest summary in compare mode

Histogram (--benchmark-histogram):

Histogram sample

Development

To run all the tests, run:

tox

Credits