Pytest
from Wikipedia

Pytest
Original author: Krekel et al.
Stable release: 9.0.1[1] / 12 November 2025
Written in: Python
Platform: macOS, Windows, POSIX
Type: Framework for software testing
License: MIT License
Website: pytest.org

Pytest is a Python testing framework that originated from the PyPy project. It can be used to write various types of software tests, including unit tests, integration tests, end-to-end tests, and functional tests. Its features include parametrized testing, fixtures, and assert rewriting.

Pytest fixtures provide the contexts for tests by passing in parameter names in test cases; its parametrization eliminates duplicate code for testing multiple sets of input and output; and its rewritten assert statements provide detailed output for causes of failures.

History


Pytest was developed as part of an effort by third-party packages to address Python's built-in module unittest's shortcomings. It originated as part of PyPy, an alternative implementation of Python to the standard CPython. Since its creation in early 2003, PyPy has had a heavy emphasis on testing. PyPy had unit tests for newly written code, regression tests for bugs, and integration tests using CPython's test suite.[2]

In mid 2004, a testing framework called utest emerged and contributors to PyPy began converting existing test cases to utest. Meanwhile, at EuroPython 2004 a complementary standard library for testing, named std, was invented. This package laid out the principles, such as assert rewriting, of what would later become pytest. In late 2004, the std project was renamed to py, std.utest became py.test, and the py library was separated from PyPy. In November 2010, pytest 2.0.0 was released as a package separate from py. It was still called py.test until August 2016, but following the release of pytest 3.0.0 the recommended command line entry point became pytest.[3]

Pytest has been classified by developer security platform Snyk as one of the key ecosystem projects in Python due to its popularity. Well-known projects that switched to pytest from unittest and nose (another testing package) include those of Mozilla and Dropbox.[4][5][6][7]

Features


Parameterized testing


It is a common pattern in software testing to send values through test functions and check for correct output. Thoroughly testing a piece of functionality often requires multiple sets of inputs and expected outputs; writing each of these cases as a separate test duplicates code, since most of the steps stay the same and only the values differ. Pytest's parametrized testing feature eliminates such duplication by combining the different iterations into one test case, then running each iteration and displaying its result separately.[8]

Parameterized tests in pytest are marked by the @pytest.mark.parametrize(argnames, argvalues) decorator, where the first parameter, argnames, is a string of comma-separated names, and argvalues is a list of values to pass into argnames. When there are multiple names in argnames, argvalues is a list of tuples whose values correspond to the names in argnames by index. The names in argnames are then passed into the test function marked by the decorator as parameters. When pytest runs such decorated tests, each pairing of argnames and argvalues constitutes a separate run with its own test output and unique identifier. The identifier can then be used to run individual data pairs.[8]: 52–58 [9]
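For illustration, a minimal parametrized test of a hypothetical multiply function might look like the following sketch:

import pytest

def multiply(a, b):
    return a * b

@pytest.mark.parametrize("a,b,expected", [
    (2, 3, 6),
    (0, 5, 0),
    (-1, 4, -4),
])
def test_multiply(a, b, expected):
    # Each (a, b, expected) tuple becomes a separate test run with its
    # own identifier, for example test_multiply[2-3-6].
    assert multiply(a, b) == expected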

Assert rewriting


When writing software tests, the assert statement is a primary means for communicating test failure, where expected values are compared to actual values.[8]: 32–34  While Python's built-in assert keyword would only raise AssertionError with no details in cases of failure, pytest rewrites Python's assert keyword and provides detailed output for the causes of failures, such as what expressions in the assert statement evaluate to. A comparison can be made with unittest (Python's built-in module for testing)'s assert statements:[8]: 32 

pytest             unittest
assert x           assertTrue(x)
assert x == y      assertEqual(x, y)
assert x <= y      assertLessEqual(x, y)

unittest adheres to a more verbose syntax because it is inspired by the Java programming language's JUnit, as are most unit testing libraries; pytest achieves the same checks by intercepting Python's built-in assert calls, making the approach more concise.[8]: 32 [6]
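As an illustration, a failing comparison might be reported roughly as follows (the output in the comments is representative rather than exact):

def test_totals():
    expected = [1, 2, 3]
    actual = [1, 2, 4]
    assert actual == expected
    # Representative pytest failure output:
    # E       assert [1, 2, 4] == [1, 2, 3]
    # E         At index 2 diff: 4 != 3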

Pytest fixtures


Pytest tests verify that computer code performs as expected[10] and are typically structured in an arrange, act, assert (AAA) sequence.[11] Fixtures provide the context for tests: they can put a system into a known state and pass data into test functions, effectively constituting the arrange phase of a test.[11][10] Pytest fixtures can run before test cases as setup or after them for cleanup, but they differ from the setups and teardowns of unittest and nose (another third-party Python testing framework). Functions declared as pytest fixtures are marked with the @pytest.fixture decorator; their names can then be passed into test functions as parameters.[12] When pytest finds a fixture name among a test function's parameters, it first searches the same module for that fixture and, if not found, searches the conftest.py file.[8]: 61

For example:

import pytest

@pytest.fixture
def dataset():
    """Return some data to test functions"""
    return {'data1': 1, 'data2': 2}

def test_dataset(dataset):
    """test and confirm fixture value"""
    assert dataset == {'data1': 1, 'data2': 2}

In the above example, pytest fixture dataset returns a dictionary, which is then passed into test function test_dataset for assertion. In addition to fixture detection within the same file as test cases, pytest fixtures can also be placed in the conftest.py file in the tests directory. There can be multiple conftest.py files, each placed within a tests directory for fixtures to be detected for each subset of tests.[8]: 63 

Fixture scopes


In pytest, fixture scopes let the user define how often a fixture is invoked. There are four fixture scopes: function scope, class scope, module scope, and session scope. Function scope is the default for all pytest fixtures: the fixture is called every time a function that has it as a parameter runs. The goal of specifying a broader fixture scope is to eliminate repeated fixture calls, which can slow down test execution. Class-scoped fixtures are called once per test class, regardless of how many tests in the class request them, and the same logic applies to the other scopes. To change a fixture's scope, one need only add the scope parameter to the fixture decorator, for example, @pytest.fixture(scope="class").[8]: 72 [13]
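As a brief sketch, a class-scoped fixture might be shared across the methods of a test class as follows (the names are illustrative):

import pytest

@pytest.fixture(scope="class")
def config():
    # With scope="class", this runs once for TestConfig rather than once per method.
    return {"retries": 3, "timeout": 30}

class TestConfig:
    def test_retries(self, config):
        assert config["retries"] == 3

    def test_timeout(self, config):
        assert config["timeout"] == 30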

Test filtering


Another feature of pytest is its ability to filter tests, so that only the desired tests are selected to run or behave in a particular way. With the -k option (e.g. pytest -k some_name), pytest only runs tests whose names include some_name; conversely, running pytest -k "not some_name" runs all tests whose names do not include some_name.[14]

Pytest's markers can, in addition to altering test behaviour, also filter tests. Pytest's markers are Python decorators with the @pytest.mark.<markername> syntax placed on top of test functions. With different arbitrarily named markers, running pytest -m <markername> on the command line will only run those tests decorated with such markers.[8]: 13  All available markers, along with their descriptions, can be listed by running pytest --markers; custom markers can also be defined by users and registered in pytest.ini, in which case pytest --markers will list those custom markers alongside the built-in ones.[8]: 147
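For illustration, a custom marker might be applied as in the following sketch, assuming a marker named slow has been registered in pytest.ini:

import pytest

@pytest.mark.slow
def test_large_sum():
    # Selected with `pytest -m slow`, excluded with `pytest -m "not slow"`.
    assert sum(range(1_000_000)) == 499999500000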

from Grokipedia
Pytest is an open-source testing framework for the Python programming language that enables developers to write simple, readable tests while scaling to handle complex functional testing for applications and libraries. Developed by Holger Krekel starting in 2003, pytest originated as a tool to support testing in Python projects and has since evolved into a mature, community-driven project under the MIT license. It is maintained by a team including Krekel and supported through platforms like Open Collective and Tidelift for enterprise use. Unlike Python's built-in unittest module, which requires subclassing and verbose assertions, pytest emphasizes simplicity with plain assert statements and automatic test discovery. Key features of pytest include detailed assertion introspection for clear failure messages, modular fixtures for managing test resources, and a rich plugin ecosystem with over 1,300 external plugins available for extending functionality, such as parametrization, mocking, and parallel execution. It supports running unittest suites alongside native pytest tests and is compatible with Python 3.10+ and PyPy3, making it versatile for modern development workflows. As of 2025, pytest is widely regarded as the most popular Python testing framework, used by major companies including Amazon, due to its flexibility, extensive ecosystem, and ease of integration with other development tools.

Overview

Purpose and Scope

pytest is an open-source testing framework for Python designed to simplify the process of writing, running, and managing tests by providing a concise syntax and powerful features that reduce boilerplate. It enables developers to create small, readable tests that can scale to handle complex functional testing for applications and libraries, supporting a wide range of testing needs including unit, integration, functional, and end-to-end tests. Compatible with Python 3.10 and later versions, pytest maintains support for all actively maintained Python releases at the time of each version's issuance, ensuring broad applicability across modern Python environments. At its core, pytest embodies a philosophy of simplicity and extensibility, prioritizing plain assert statements over verbose assertion methods and allowing tests to be written as simple functions without requiring class inheritance or test method naming conventions. This approach minimizes setup overhead while offering extensibility through a robust plugin system, which enables customization and integration with other tools without altering core functionality. Fixtures serve as a key mechanism for managing setup and teardown in tests, providing reusable and scoped resources that enhance modularity. While pytest functions as an alternative to the built-in unittest framework, it is not intended as a direct replacement but rather as an enhancer that can run existing unittest-based test suites seamlessly, allowing gradual adoption of its features.

Advantages over Built-in Frameworks

Pytest offers significant advantages over Python's built-in testing frameworks, particularly unittest and doctest, by prioritizing simplicity, expressiveness, and extensibility, which lead to faster test development and reduced maintenance overhead. One key benefit is its simplicity in test structure: unlike unittest, which requires tests to inherit from unittest.TestCase and follow a class-based approach, pytest allows tests to be written as simple, standalone functions without boilerplate inheritance or explicit test classes. This functional style makes tests more concise and easier to read, enabling developers to focus on logic rather than framework conventions. In contrast, doctest primarily extracts and runs tests from docstrings, limiting its scope to informal examples rather than structured unit testing.

Pytest's assertion system provides human-readable failure messages through assert rewriting, where standard Python assert statements are enhanced to display detailed introspection of expressions, subexpressions, and values without needing custom assertion methods like unittest's assertEqual or assertTrue. This feature improves debugging by showing exactly where and why an assertion fails, such as differing list elements or string mismatches, reducing the time spent interpreting generic error outputs. Doctest, while useful for documentation validation, offers minimal assertion diagnostics compared to pytest's introspective capabilities.

For setup and teardown, pytest's fixture system surpasses unittest's setUp and tearDown methods by supporting scoped, reusable, and dependency-injected resources that can be applied at function, class, module, or session levels, with automatic cleanup and parameterization for varied test contexts. This allows fixtures to be shared across tests without repetitive code, unlike unittest's more rigid, per-class setup that lacks fine-grained scoping or injection. Doctest has no built-in setup mechanism, often requiring manual context management in docstrings.

Pytest's plugin ecosystem further enhances productivity, with over 1,600 third-party plugins available for integrations like coverage measurement (e.g., pytest-cov), mocking (e.g., pytest-mock), parallel execution (e.g., pytest-xdist), and behavior-driven development (e.g., pytest-bdd for TDD/BDD styles). This extensibility allows customization without modifying core code, a capability absent in unittest's limited built-in extensions and entirely lacking in doctest. Overall, these features reduce overhead by automating discovery, parameterization, and reporting, making pytest ideal for large-scale projects. The comparison is summarized in the table below.
| Feature | pytest | unittest |
| --- | --- | --- |
| Test structure | Simple functions, no inheritance required | Class-based, inherits from TestCase |
| Assertions | Plain assert with introspective rewriting for detailed failures | Specific methods (e.g., assertEqual), basic output |
| Setup/teardown | Flexible fixtures with scoping and injection | setUp/tearDown methods, class-scoped only |
| Test discovery | Automatic, based on naming conventions (e.g., test_*.py) | Automatic via TestLoader.discover() or python -m unittest discover (since Python 3.2), based on patterns like test*.py |
| Extensibility | >1,600 plugins for coverage, mocking, parallelism, BDD | Limited built-in options, few third-party extensions |
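To illustrate the difference in test structure from the first row, a minimal sketch of the same check written in both styles:

```python
import unittest

# unittest style: class-based, with framework-specific assertion methods
class TestAddition(unittest.TestCase):
    def test_add(self):
        self.assertEqual(2 + 3, 5)

# pytest style: a plain function with a plain assert statement
def test_add():
    assert 2 + 3 == 5
```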

History

Origins and Early Development

pytest originated from efforts within the PyPy project, where Holger Krekel began developing a testing framework in 2004 to address the challenges of managing a large number of tests in a dynamic Python environment. Initially named py.test, it served as a successor to earlier test support scripts built on top of Python's built-in unittest module, with roots tracing back to refactors in June 2003. Krekel, motivated by the verbosity and rigidity of unittest—particularly its class-based structure and limited flexibility for rapid iteration in projects like PyPy—aimed to create a tool that emphasized plain assertions, minimal boilerplate, and straightforward invocation. This approach allowed for more lightweight test code without the overhead of subclassing or XML-style output mandates common in other frameworks.

The framework's early development occurred primarily within the PyPy community's version control systems, starting in SVN repositories before migrating to other version control hosting beginning in 2007. It was initially hosted on Bitbucket for collaborative development, reflecting the distributed version control preferences of the time, and later moved to GitHub to align with broader open-source trends. The first public release came as part of the broader "py" library in 2006 (version 0.8.0-alpha2), with py.test as a core component focused on command-line driven test discovery and execution. Key initial features included automatic collection of tests from Python files without requiring inheritance from base classes, support for options like -x for early stopping and --pdb for dropping into the debugger on failure, and a collection tree for organizing tests hierarchically. These elements established py.test as a lightweight alternative, prioritizing ease of use over comprehensive reporting, and laid the groundwork for its evolution into a plugin-extensible system.

Major Releases and Evolution

Pytest's evolution has been marked by regular major releases that introduce new features, enhance compatibility, and remove legacy support to align with modern Python practices. A release in 2012 represented a significant milestone, introducing the funcarg mechanism—which laid the foundation for the current fixture system—along with improved assertion reporting for sequences and strings, and support for ini-file configuration in files like setup.cfg. This release also separated pytest from the broader py library, establishing it as a standalone package while maintaining compatibility for existing tests.

Subsequent releases built on this foundation, with version 3.0 in July 2016 shifting the primary command-line entry point from py.test to pytest for simplicity and to reduce confusion with other tools. Key additions included the approx function for robust floating-point comparisons, support for yield statements in fixtures to handle teardown more elegantly (deprecating the older @yield_fixture decorator), and enhanced exception handling in pytest.raises with regex matching. Version 5.0 in October 2019 marked the end of Python 2 support, requiring Python 3.5 or later and focusing on improvements like better JUnit XML reporting and refined plugin architecture. By version 7.0 in February 2022, pytest dropped support for Python 3.6, introduced the --import-mode=importlib option for more reliable module imports, and added the pytest.Stash mechanism for plugins to store data across the test session.

The project's maintenance transitioned fully to PyPI under the pytest-dev GitHub organization, formed to foster collaborative development among core contributors. This structure has facilitated widespread adoption, with major scientific computing projects relying on pytest for their comprehensive test suites, leveraging its flexible discovery and reporting capabilities. Community growth accelerated, evidenced by over 10 million monthly downloads on PyPI by 2023, reflecting its status as a standard tool for Python testing.

As of 2025, recent developments emphasize modernization and performance. The 8.0 release in January 2024 introduced support for exception groups and improved diff output for better failure analysis, while deprecating marks on fixtures to streamline configuration. Pytest 9.0.0, released on November 5, 2025, dropped Python 3.9 support, added native TOML configuration files, integrated subtests for more granular assertions, and enabled strict mode to enforce best practices. Ongoing deprecations target legacy features like old hooks and path handling, with enhancements to Windows compatibility through refined temporary directory management and path normalization. Integration with Python 3.12 and later has improved via enhanced type support in fixtures and parametrized tests, aiding static type checkers like mypy in large-scale projects.

Installation and Setup

Installing pytest

Pytest is primarily installed using the Python package manager pip, which is the recommended method for ensuring compatibility and access to the latest version. The command pip install -U pytest installs or upgrades pytest to the most recent release, currently version 9.0.1 as of November 2025. Pytest requires Python 3.10 or later (or PyPy3), along with the minimal dependency pluggy version 1.0 or higher for plugin management. It is advisable to install pytest within a virtual environment to isolate it from the system Python installation and avoid conflicts with other projects; this can be set up using the built-in venv module with commands like python -m venv myenv followed by activation and then pip install -U pytest. To verify the installation, execute pytest --version in a terminal, which outputs the installed version, Python interpreter details, and platform information, such as pytest 9.0.1 on Python 3.12. For optional features, such as generating HTML reports, install plugins separately via pip, for example pip install pytest-html. On Windows, installation via pip typically succeeds, but if the pytest command is not recognized, ensure the Python Scripts directory is added to the system PATH or run tests using python -m pytest. On macOS, while pip is preferred, pytest can alternatively be installed using Homebrew with brew install pytest for system-wide access. On Linux distributions such as Debian and Ubuntu, pip remains the primary method, though pytest is also available through system package managers, for example sudo apt install python3-pytest for a system installation. To upgrade pytest, run pip install --upgrade pytest, and review the changelog for changes in the new version.

Configuration Options

pytest supports customization through configuration files placed in the root directory of a project, allowing users to define default behaviors without repeating options on every command-line invocation. The supported formats include pytest.toml, .pytest.toml, pytest.ini, .pytest.ini, pyproject.toml, tox.ini, and setup.cfg, with pytest.toml introduced in version 9.0 and pyproject.toml support added in version 6.0. These files follow a specific order of precedence: pytest loads the first matching file it finds, checking pytest.toml (highest), .pytest.toml, pytest.ini, .pytest.ini, pyproject.toml, tox.ini, and setup.cfg (lowest). The root directory (rootdir) for configuration is determined by the common ancestor path of specified test files or directories and the presence of these files, ensuring configurations apply to the intended project scope.

Key options in these files control aspects like test discovery and execution defaults. For instance, the testpaths option specifies directories to search for tests, such as ["tests", "integration"], which narrows discovery to those paths rather than the current directory. The python_files option defines patterns for identifying test files, defaulting to test_*.py and *_test.py, but can be customized (e.g., ["check_*.py", "test_*.py"]) to match project-specific naming conventions. Another essential option is addopts, which appends static command-line arguments like ["-ra", "-q"] for always-on reporting and quiet output, streamlining repeated invocations.

```ini
# Example pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
addopts = -ra -q
```

In pyproject.toml, these are structured under the [tool.pytest] section:

```toml
[tool.pytest]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = ["-ra", "-q"]
```

Command-line flags provide overrides for file-based settings, enabling per-run adjustments. The --verbose (or -v) flag increases output verbosity, showing more details about execution, while --junitxml=path generates JUnit-compatible XML reports for integration with CI tools. These flags take precedence over file-based options, allowing flexible customization without editing files. Environment variables further enhance configurability, particularly in automated environments. The PYTEST_ADDOPTS variable prepends additional options to the command line, such as PYTEST_ADDOPTS="-v --junitxml=report.xml", which is useful for CI pipelines to enforce consistent testing behaviors across builds. Best practices for configuration emphasize selecting appropriate file formats to avoid parsing issues; for example, prefer pytest.ini, tox.ini, or pyproject.toml over setup.cfg due to differences in INI parsing. Use addopts judiciously to set project defaults without introducing global conflicts, and employ the --rootdir=path flag to explicitly set the configuration root if the automatic detection yields unexpected results. These approaches ensure configurations remain maintainable and project-specific, influencing test discovery patterns like directory scanning without altering the core collection process.

Core Testing Workflow

Test Discovery and Collection

Pytest employs an automated test discovery mechanism that simplifies the process of identifying and gathering tests without requiring explicit file listings or manual invocation. By default, when invoked without arguments, pytest begins collection from the current directory or the configured testpaths and recursively traverses subdirectories, excluding those matching the norecursedirs pattern (such as build directories). It targets Python files matching the patterns test_*.py or *_test.py, importing them using a test-specific package name to avoid conflicts with application code. Within these files, pytest collects test items based on standard naming conventions: standalone functions or methods prefixed with test_ are recognized as tests, provided they are defined at the module level or within classes prefixed with Test that lack an __init__ method. Classes themselves are not executed as tests but serve to organize methods, and decorators like @staticmethod or @classmethod on methods are supported to enable collection.

This process constructs a hierarchical tree of collection nodes, where each node represents an item such as a module, class, or function, rooted at the project's rootdir (determined by the nearest pytest.ini, pyproject.toml, or similar configuration file). Each node receives a unique nodeid combining its path and name, facilitating precise identification and reporting during execution. Nested structures, such as parameterized tests or fixtures, are incorporated into this tree, ensuring comprehensive session building.

Customization of discovery rules is achieved through configuration files like pytest.ini or pyproject.toml, where options such as python_files, python_classes, and python_functions allow overriding defaults—for instance, altering file patterns to include check_*.py or function prefixes to check_*. Markers, applied via the @pytest.mark decorator (e.g., @pytest.mark.slow), enable grouping and selective collection without altering core discovery logic, supporting features like test categorization for focused runs. These configurations ensure flexibility while maintaining pytest's convention-over-configuration philosophy.

To inspect the collection outcome without execution, the --collect-only option displays a verbose tree of all discovered nodes, optionally combined with -v for increased detail or -q for concise output, which can be redirected to a file for further analysis. During collection, pytest handles special outcomes such as skipped tests (via @pytest.mark.skip) or expected failures (via @pytest.mark.xfail), integrating them into the node tree and flagging them appropriately in reports to indicate non-execution without halting the process. This visibility aids in diagnosing discovery issues and verifying test organization prior to running the suite.
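A brief sketch of these naming conventions in practice (file name and contents are illustrative):

```python
# test_inventory.py -- matched by the default test_*.py file pattern

def test_standalone():
    # Collected: module-level function with the test_ prefix.
    assert 1 + 1 == 2

class TestInventory:
    # Collected as a container: Test prefix and no __init__ method.
    def test_add_item(self):
        items = []
        items.append("apple")
        assert len(items) == 1

def helper_build_items():
    # Not collected: lacks the test_ prefix.
    return ["apple", "banana"]
```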

Writing Basic Tests

Pytest encourages writing tests as simple Python functions within modules, where assertions are used to verify expected outcomes without requiring inheritance from a base class, unlike some other frameworks. These test functions typically begin with the prefix test_ to facilitate automatic discovery, and they rely on the built-in assert statement for checks, allowing for expressive and readable validations. A basic test might verify the behavior of a function, such as one that adds two numbers. For instance, consider the following code in a file named test_sample.py:

```python
# content of test_sample.py
def add(a, b):
    return a + b


def test_add():
    assert add(2, 3) == 5
```

This test function calls `add(2, 3)` and asserts that the result equals 5, failing if the condition is not met. To execute tests, invoke the `pytest` command from the terminal in the directory containing the test files; it will automatically discover and run all functions matching the `test_` convention. For more detailed output, use the `-v` (verbose) option, which displays test names and results as they run, such as `pytest -v test_sample.py` producing lines like `test_sample.py::test_add PASSED`. The `-q` option can be used for quieter, summary-only reporting.

When a test fails, pytest provides comprehensive error reporting, including full tracebacks that pinpoint the exact line of failure and, for assertion mismatches, a detailed diff showing intermediate values and expected versus actual results. For example, if the assertion `assert 4 == 5` fails, the output might include `E assert 4 == 5` followed by `E -4` and `E +5` to highlight the discrepancy, aiding quick debugging without manual print statements.

Adhering to pytest conventions enhances test maintainability: name test files starting with `test_` or ending with `_test.py`, ensure test functions or classes (if used) begin with `test_` or `Test`, and design tests to be independent by avoiding shared mutable state or side effects, as each test runs in isolation to prevent interference. Simple setup, if needed, can involve basic fixtures, covered below, but standalone tests suffice for most basic cases.

Running Tests Programmatically

Pytest can be invoked programmatically from Python code using the `pytest.main()` function. This allows execution of tests, including targeting a specific test function, from within a script or another program. To run a specific test function, import pytest and call `pytest.main()` with a list containing the node ID of the test in the format `'path/to/test_file.py::test_function_name'`. For example:

```python
import pytest

# Run a specific test function
pytest.main(['tests/test_example.py::test_specific_function'])
```

This executes only the targeted test function and returns an exit code (0 for success, non-zero otherwise). Additional command-line flags can be added to the list for more options, such as '-v' for verbose output:

```python
pytest.main(['-v', 'tests/test_example.py::test_specific_function'])
```

Note that invoking pytest.main() imports the tests and any dependent modules, which may lead to side effects due to Python's import caching mechanism.

Fixtures

Defining and Using Fixtures

Fixtures in pytest are functions decorated with the `@pytest.fixture` decorator that encapsulate reusable setup and teardown code, providing a consistent context for tests such as initialized objects or environments. This design allows developers to define the "arrange" phase of testing—preparing data or resources—without repeating boilerplate in each test function. By marking a function with `@pytest.fixture`, pytest treats it as a provider of test dependencies rather than a standalone test, enabling modular and maintainable test suites.

To use a fixture, a test function declares it as an argument with the same name as the fixture, triggering pytest to execute the fixture code and inject its return value automatically before running the test. For instance, consider a simple fixture that returns a list of fruits:

```python
import pytest

@pytest.fixture
def fruit_bowl():
    return ["apple", "banana", "cherry"]
```

A test can then consume this fixture:

```python
def test_fruit_count(fruit_bowl):
    assert len(fruit_bowl) == 3
```

This injection mechanism ensures fixtures are only computed when requested, promoting efficiency. Fixtures can return a value directly for simple setup or use yield to support teardown logic, where code after the yield statement executes after the test completes. Synchronous fixtures typically return the setup result, while yield enables cleanup, such as closing resources; for asynchronous fixtures (via extensions like pytest-asyncio), yield similarly handles post-test finalization to ensure proper resource management. An example of a fixture with teardown for a database connection might look like:

```python
@pytest.fixture
def db_connection():
    conn = create_db_connection()  # Setup
    yield conn                     # Provide to test
    conn.close()                   # Teardown
```

A test using this would receive the connection object:

```python
def test_query(db_connection):
    result = db_connection.execute("SELECT * FROM users")
    assert len(result) > 0
```

This pattern prevents resource leaks by guaranteeing cleanup regardless of test outcome. Pytest supports dependency injection among fixtures, allowing one fixture to request another as an argument, fostering composable setups. For example, a temporary path fixture might depend on a base temporary directory:

```python
import os

import pytest

@pytest.fixture
def tmpdir():
    os.makedirs("/tmp/test_dir", exist_ok=True)  # ensure the base directory exists
    return "/tmp/test_dir"

@pytest.fixture
def tmp_path(tmpdir):
    path = tmpdir + "/myfile.txt"
    with open(path, "w") as f:
        f.write("test content")
    return path
```

Here, tmp_path builds upon tmpdir, and tests requesting tmp_path implicitly get both. A test for file operations could then use:

```python
def test_file_read(tmp_path):
    with open(tmp_path, "r") as f:
        content = f.read()
    assert content == "test content"
```

This hierarchical injection simplifies complex environment creation, such as stacking database sessions on top of connections.

Fixture Scopes and Lifetime

Pytest fixtures support multiple scope levels to control their execution frequency and lifetime, allowing developers to optimize test setup and teardown for efficiency. The available scopes are function (the default, executed once per test function), class (once per test class), module (once per test module), package (once per package, including sub-packages), and session (once per entire test session). These scopes determine when a fixture is created and destroyed, enabling reuse of resources across multiple tests without redundant initialization. To specify a fixture's scope, the scope parameter is passed to the @pytest.fixture decorator, such as @pytest.fixture(scope="module") for module-level execution. This configuration influences caching behavior, where pytest maintains a single instance of the fixture within its defined scope and reuses it for subsequent tests, reducing overhead. Additionally, it affects teardown order: fixtures with broader scopes (e.g., session) are torn down after narrower ones (e.g., function), ensuring dependent resources are cleaned up appropriately. Fixture lifetime is managed through setup and teardown mechanisms, including the use of finalizers added via the request.addfinalizer method within the fixture function. For example, a fixture can register a cleanup function like request.addfinalizer(lambda: delete_user(user_id)) to handle deallocation. Finalizers execute in reverse order of registration (last-in, first-out), promoting safe cleanup of nested dependencies. Yield-based fixtures further support this by running teardown code after the yield statement, with execution occurring from right to left across fixtures in a test. Broader scopes enhance performance by minimizing repeated setup costs, particularly for expensive operations like database connections. A session-scoped database fixture, for instance, initializes the connection once and shares it across all tests, avoiding the latency of per-test recreations. Practical examples illustrate scope selection: a module-scoped fixture for loading configuration files executes once per module, caching the data for all tests within it, as shown below.

python

import pytest @pytest.fixture(scope="module") def config(): return load_config("settings.json")

import pytest @pytest.fixture(scope="module") def config(): return load_config("settings.json")

In contrast, function-scoped mocks ensure isolation per test, resetting state each time to prevent interference.

```python
from unittest.mock import patch

import pytest

@pytest.fixture
def mock_client():
    with patch("module.Client") as mock:
        yield mock
```

This approach balances efficiency with test independence.
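The finalizer mechanism described above can be sketched as follows, assuming hypothetical create_user and delete_user helpers:

```python
import pytest

@pytest.fixture
def user(request):
    user_id = create_user("alice")                      # hypothetical setup helper
    request.addfinalizer(lambda: delete_user(user_id))  # hypothetical cleanup, runs after the test
    return user_id
```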

Key Features

Parameterized Testing

Parameterized testing in pytest allows a single test function to be executed multiple times with different sets of input parameters, promoting efficient coverage of various scenarios without duplicating code. This feature is particularly useful for data-driven testing, where tests are parameterized based on predefined inputs and expected outputs, such as edge cases or diverse datasets. By leveraging the @pytest.mark.parametrize decorator, developers can specify argument names and corresponding values, enabling pytest to generate and run distinct test instances for each parameter combination. The @pytest.mark.parametrize decorator is applied directly to a test function, taking two main arguments: a string of comma-separated argument names (e.g., "input,expected") and an iterable of parameter sets, typically a list of tuples or lists. For instance, the following example tests an evaluation function with two input-output pairs:

```python
import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```

This results in two separate test executions: one for ("3+5", 8) and another for ("2+4", 6), each treated as an independent test case by pytest's runner. Parameter sets can include any hashable values, and multiple decorators can be stacked to parameterize different argument groups, with the total number of test runs being the product of the sets' sizes. During execution, each set invokes the test function once, passing the values as arguments in the order specified. Pytest automatically generates descriptive test IDs based on the parameter values, such as test_eval[3+5-8], which appear in reports and tracebacks for clarity. Developers can customize these IDs using the optional ids argument, providing a list of strings like ids=["case1", "case2"] to override defaults and improve readability in output. If a parameter set is empty, the entire test is skipped, configurable via the empty_parameter_set_mark option in pytest configuration.

For more advanced scenarios, indirect parameterization allows parameters to be sourced dynamically from fixtures using the pytest_generate_tests hook, where metafunc.parametrize injects values into fixture-backed parameters; this approach integrates seamlessly with fixture definitions for complex setup. Parameterized tests can also be combined with other marks, such as @pytest.mark.xfail applied to specific parameter sets via the marks argument in parametrize, enabling conditional expectations like expected failures for certain inputs. Use cases include systematically testing edge cases, such as boundary values in algorithms, and data-driven validation where parameter sets are loaded from external sources such as CSV files through custom loader functions or plugins. In reporting, each parameterized invocation is evaluated independently, with pass/fail status, errors, and tracebacks isolated to the specific set, facilitating granular debugging. Specific parameters can be skipped using conditional marks like @pytest.mark.skipif, applied selectively to subsets via the parametrize marks argument, ensuring only relevant cases are executed while maintaining comprehensive coverage.
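A short sketch of stacked decorators and custom ids (the values are illustrative):

```python
import pytest

# Two stacked decorators produce 2 x 3 = 6 runs (the Cartesian product).
@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3, 4])
def test_combinations(x, y):
    assert x + y >= 2

# Custom ids replace the auto-generated identifiers in reports.
@pytest.mark.parametrize(
    "word,expected",
    [("abc", 3), ("", 0)],
    ids=["short-word", "empty-string"],
)
def test_length(word, expected):
    assert len(word) == expected
```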

Assert Rewriting and Introspection

Pytest enhances the standard Python assert statement through a process known as assertion rewriting, which occurs at import time for test modules discovered during test collection. This mechanism intercepts assert statements by modifying the abstract syntax tree (AST) of the module before it is compiled into bytecode, using a PEP 302 import hook. The rewritten assertions evaluate subexpressions—such as function calls, attribute accesses, comparisons, and operators—and include their values in the failure message if the assertion fails, providing detailed introspection without requiring developers to write additional code.

In failure reports, pytest displays the differing elements for common data structures; for instance, when comparing lists, it highlights the first index where they diverge, and for dictionaries, it shows the specific keys with mismatched values. For assertions involving exceptions, the framework captures the exception type, value, and a full traceback via the ExceptionInfo object, integrating this into the error output for comprehensive diagnostics. These features extend to complex types by leveraging their __repr__ method for meaningful string representations, though for highly customized comparisons, developers can implement the pytest_assertrepr_compare hook to define tailored difference explanations.

Customization options allow fine-grained control over assertion rewriting. Globally, it can be disabled using the --assert=plain command-line option, which reverts to standard Python assertion behavior without introspection. Per-module disabling is possible by adding a docstring containing PYTEST_DONT_REWRITE to the module. Additionally, for supporting modules not automatically rewritten (e.g., imported helpers), the pytest.register_assert_rewrite function can be called before import to enable it explicitly. To support custom classes effectively, ensuring a descriptive __repr__ is recommended, as the rewriting relies on it for output clarity.

The primary benefits of assert rewriting include significantly reduced debugging time compared to raw assert statements, as it eliminates the need for manual print statements or complex conditional expressions to isolate failure points. It handles intricate data structures, such as nested objects or collections, by breaking down the assertion into traceable parts, making it particularly valuable for testing functions with multiple interdependent computations. This introspection fosters more maintainable test code and quicker issue resolution in large test suites.

Despite these advantages, assert rewriting introduces some overhead, particularly in large codebases where many modules require AST parsing and bytecode recompilation during test collection. This process caches rewritten modules as .pyc files in __pycache__ for reuse, but caching is skipped if the filesystem is read-only or if custom import machinery interferes, potentially increasing startup time on subsequent runs. For performance-critical environments, opting out via the available disabling mechanisms is advised to avoid this computational cost.
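A minimal sketch of the pytest_assertrepr_compare hook in a conftest.py, using an illustrative Point class:

```python
# conftest.py
class Point:
    """Toy value object used to illustrate custom comparison output."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)


def pytest_assertrepr_compare(config, op, left, right):
    # Return the lines pytest should show instead of its default explanation.
    if isinstance(left, Point) and isinstance(right, Point) and op == "==":
        return [
            "Comparing Point instances:",
            f"   left:  ({left.x}, {left.y})",
            f"   right: ({right.x}, {right.y})",
        ]
```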

Advanced Usage

Test Filtering and Skipping

Pytest provides mechanisms for filtering and skipping tests to enable selective execution, which is particularly valuable during debugging sessions or in continuous integration (CI) environments where running the full test suite may be inefficient. These features allow developers to label tests with markers, conditionally skip them based on runtime conditions, or mark expected failures without halting the entire suite. By integrating these tools, pytest supports focused test runs that target specific subsets, reducing execution time and aiding in maintenance of large test bases.

Markers in pytest are attributes applied to test functions, classes, or modules using the @pytest.mark decorator, enabling categorization and selective running. Custom markers, such as @pytest.mark.slow, can be defined to label resource-intensive tests, and they must be registered in a configuration file like pytest.ini to avoid warnings: [pytest] markers = slow: marks tests as slow (deselect with '-m "not slow"'). To list all available markers, including built-in and custom ones, the command pytest --markers displays their descriptions and usage. Markers facilitate flexible test organization without altering test logic.

Skipping tests can be achieved unconditionally or conditionally to handle cases where tests are inapplicable under certain conditions. The @pytest.mark.skip(reason="explanation") decorator skips a test entirely before execution, while pytest.skip("reason") allows imperative skipping inside a test function, useful when conditions are evaluated at runtime. For conditional skips, @pytest.mark.skipif(condition, reason="explanation") evaluates a boolean condition; for instance, @pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows") excludes tests on Windows platforms. These approaches ensure tests are only run in supported environments, with skipped tests reported in the test summary unless suppressed.

The xfail mechanism marks tests expected to fail, such as those covering unimplemented features or known bugs, using @pytest.mark.xfail. Parameters include reason for documentation, raises=ExceptionType to specify expected exceptions, strict=True to treat unexpected passes (XPASS) as failures, and run=False to skip execution altogether. In non-strict mode, an xfail test that fails is reported as XFAIL (a pass for the suite), while an unexpected pass is XPASS (still a pass overall); strict mode enforces failure on XPASS via configuration like xfail_strict = true in pytest.ini. This feature prevents transient bugs from masking real issues in test outcomes.

Command-line options provide runtime control over test selection. The -k "expression" flag filters tests by matching substrings in test names using Python-like expressions, such as pytest -k "http and not slow" to run only HTTP-related tests excluding slow ones, enabling precise targeting for debugging. Similarly, -m "marker_expression" selects based on markers, like pytest -m "not slow" to exclude slow tests or pytest -m "slow(phase=1)" for parameterized markers, which is ideal for CI pipelines to run subsets efficiently. Markers can be configured globally in pytest.ini for consistent filtering across invocations.

These features are commonly used for platform-specific testing, where skips ensure compatibility across operating systems, or in CI to deselect flaky or long-running tests marked as slow, thereby optimizing build times without compromising coverage.
For example, conditional skips based on environment variables allow tests to run only in dedicated CI agents, while xfail markers track progress on unresolved issues until fixes are implemented.
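A brief sketch of the skip and xfail markers described above (the reasons are illustrative):

```python
import sys

import pytest

@pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows")
def test_unix_paths():
    assert "/" in "/usr/local/bin"

@pytest.mark.xfail(reason="floating-point representation makes this fail", strict=False)
def test_rounding():
    # Reported as XFAIL when it fails, XPASS if it unexpectedly passes.
    assert round(2.675, 2) == 2.68
```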

Plugins and Extensibility

Pytest employs a robust plugin architecture that allows for seamless extension of its core functionality through hook functions and modular components. Plugins are discovered and loaded in a specific order: pytest first scans the command line for the -p no:name option to disable plugins, then loads built-in plugins from the _pytest directory, followed by those specified via command-line options like -p name, entry point plugins registered under the pytest11 group in pyproject.toml or setup.py (unless autoloading is disabled by the PYTEST_DISABLE_PLUGIN_AUTOLOAD environment variable), plugins listed in the PYTEST_PLUGINS environment variable, and finally local plugins defined in conftest.py files or via the pytest_plugins variable. This discovery mechanism ensures predictable behavior, with options to disable plugins using -p no:name or the PYTEST_DISABLE_PLUGIN_AUTOLOAD environment variable.

Central to this architecture are hook functions, which plugins implement to integrate with pytest's test execution lifecycle. These functions follow a pytest_ prefix convention, such as pytest_addoption for registering custom command-line options or pytest_configure for setup tasks. For instance, a plugin can implement pytest_addoption and call parser.addoption("--my-option", action="store", default="value") to add a configurable flag, which is then accessible via config.getoption("--my-option") in other hooks. Local plugins in conftest.py files apply to specific directories, enabling directory-scoped customizations without global impact; a sketch of such a local plugin appears below.

While pytest includes built-in plugins for core features like assertion rewriting and terminal reporting, much of its power derives from third-party extensions installed via pip. Notable examples include pytest-xdist, which enables parallel test execution across multiple processes to accelerate runs (e.g., pytest -n auto distributes tests over available CPUs), and pytest-rerunfailures, which automatically retries flaky tests up to a specified number of times (e.g., --reruns 5) to mitigate intermittent failures. Other widely adopted plugins are pytest-cov for integrating coverage measurement with reporting options like --cov-report=html, pytest-mock providing a mocker fixture for simplified patching and spying on objects, and pytest-asyncio for supporting asynchronous test functions marked with @pytest.mark.asyncio to handle coroutine-based code. These plugins are distributed on PyPI and automatically loaded upon installation, enhancing pytest's versatility without altering core behavior.

Developers can write custom plugins by defining hook functions in a Python module and exposing it via the pytest11 entry point in their package's metadata, allowing distribution on PyPI as installable packages. For example, a simple plugin might implement pytest_collection_modifyitems to filter tests dynamically, and once packaged with setuptools, it becomes available for pip installation like any other plugin. This approach fosters a thriving ecosystem, with over 1,300 external plugins listed on PyPI as of recent compilations.

The extensibility of pytest supports modular testing tailored to diverse domains, such as web applications via pytest-django, which provides Django-specific fixtures for database management and request simulation without requiring unittest inheritance, or notebook-based workflows with pytest-notebook, enabling regression testing of Jupyter notebooks by executing cells and diffing outputs against stored results. This plugin-based modularity promotes reusable components, reduces boilerplate, and integrates seamlessly with CI pipelines, making pytest adaptable for large-scale projects.
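A minimal sketch of a local conftest.py plugin combining the hooks mentioned above (the --runslow option and slow marker are illustrative):

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    # Register a custom command-line flag.
    parser.addoption("--runslow", action="store_true", default=False,
                     help="run tests marked as slow")

def pytest_configure(config):
    # Register the marker so `pytest --markers` lists it without warnings.
    config.addinivalue_line("markers", "slow: marks tests as slow")

def pytest_collection_modifyitems(config, items):
    # Skip slow tests unless --runslow was given on the command line.
    if config.getoption("--runslow"):
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```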
