Overview
A pytest fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.
See calibration and FAQ.
- Free software: BSD 2-Clause License
Installation
pip install pytest-benchmark
Documentation
For latest release: pytest-benchmark.readthedocs.org/en/stable.
For master branch (may include documentation fixes): pytest-benchmark.readthedocs.io/en/latest.
Examples
But first, a prologue:
This plugin tightly integrates into pytest. To use this effectively you should know a thing or two about pytest first. Take a look at the introductory material or watch talks.
A few notes:
- This plugin benchmarks functions and only that. If you want to measure a block of code or a whole program you will need to write a wrapper function (see the sketch after this list).
- In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or use parametrization (also sketched below).
- To run the benchmarks you simply use pytest to run your "tests". The plugin will automatically do the benchmarking and generate a result table. Run pytest --help for more details.
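For the first note, a minimal sketch of such a wrapper function (run_pipeline and its body are made-up placeholders for whatever block of code you actually want to time):

def run_pipeline():
    # Placeholder for a larger block of code or a whole program.
    data = [x ** 2 for x in range(1000)]
    return sum(data)

def test_pipeline(benchmark):
    result = benchmark(run_pipeline)
    assert result == sum(x ** 2 for x in range(1000))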
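For the second note, a sketch of benchmarking the same callable over several inputs via parametrization (the sizes and the choice of sorted() are arbitrary, purely for illustration):

import pytest

@pytest.mark.parametrize("size", [10, 100, 1000])
def test_sort(benchmark, size):
    data = list(range(size, 0, -1))
    benchmark(sorted, data)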
This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.
Example:
import time


def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123


def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result, fast functions
    # are no good if they return incorrect results :-)
    assert result == 123
You can also pass extra arguments:
def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.02)
Or even keyword arguments:
def test_my_stuff(benchmark):
    # time.sleep() rejects keyword arguments, so benchmark a function that accepts them
    benchmark(something, duration=0.02)
Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:
def test_my_stuff(benchmark):
    @benchmark
    def something():  # unnecessary function call
        time.sleep(0.000001)
A better way is to just benchmark the final function:
def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.000001)  # way more accurate results!
If you need fine control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there's a special mode: pedantic.
def my_special_setup():
...
def test_with_setup(benchmark):
    benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)
Screenshots
Normal run:
Compare mode (--benchmark-compare):
Histogram (--benchmark-histogram):
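As a rough sketch of how these modes are typically invoked (the --benchmark-autosave flag here is an assumption based on the plugin's documented options; run pytest --help to see exactly what your version supports):

pytest --benchmark-autosave     # run the benchmarks and save the results
pytest --benchmark-compare      # run again and compare against the saved run
pytest --benchmark-histogram    # run and render an SVG histogram of the results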
Development
To run all the tests run:
tox
Credits
- Timing code and ideas taken from: https://github.com/vstinner/misc/blob/34d3128468e450dad15b6581af96a790f8bd58ce/python/benchmark.py