Testing Guide¶
This guide outlines the testing strategy for the AMMM library, the types of tests employed, and instructions on how to run them. Thorough testing is crucial for ensuring the reliability and correctness of the library.
Testing Strategy¶
AMMM aims for a balanced testing approach, incorporating:
Unit Tests: To verify the functionality of individual components (classes, functions, methods) in isolation.
Integration Tests: To ensure that different parts of the library work together correctly. The demo/runme.py script serves as a key integration test for the end-to-end modeling pipeline.
Regression Tests: Implicitly covered; as the test suite grows, it helps prevent regressions when new features are added or existing code is refactored.
Test Framework¶
Pytest: The primary framework used for writing and running tests. Pytest offers a flexible and powerful way to define tests, manage fixtures, and generate reports.
Website: https://docs.pytest.org/
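For orientation, a pytest test is just a plain function whose name starts with `test_` and whose body makes assertions. The function under test below is purely illustrative and not part of the AMMM API:

```python
# Minimal pytest example. Save as e.g. tests/test_example.py and run `pytest`.
# `scale_to_unit` is an illustrative stand-in, not an AMMM function.

def scale_to_unit(values):
    """Scale a list of numbers by its maximum absolute value."""
    peak = max(abs(v) for v in values)
    if peak == 0:
        return list(values)
    return [v / peak for v in values]

def test_scale_to_unit():
    # Largest magnitude becomes +/-1, other entries scale proportionally.
    assert scale_to_unit([2.0, -4.0]) == [0.5, -1.0]

def test_scale_to_unit_all_zero():
    # Edge case: an all-zero input is returned unchanged.
    assert scale_to_unit([0.0, 0.0]) == [0.0, 0.0]
```

Pytest discovers such files automatically (by default, files matching `test_*.py`) and reports each function as a separate pass or failure.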
Types of Tests¶
Unit Tests¶
Location: Primarily within the tests/ directory, organized into subdirectories mirroring the ammm/ package structure (e.g., tests/core/, tests/prepro/).
Focus: Testing individual functions and class methods for expected outputs, behaviour with valid/invalid inputs, and edge cases.
Examples:
tests/prepro/test_prepro.py: Contains unit tests for custom scalers (MaxAbsScaler, StandardScaler), Pipeline, and StandardizeControls.
tests/core/test_base.py: Tests functionalities within ammm.core.base.
Many other test files exist covering different modules.
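A common pattern in scaler tests is a round-trip check: transform, inverse-transform, and assert the original data is recovered. The sketch below uses a toy stand-in class to show the pattern; AMMM's actual StandardScaler signatures may differ:

```python
import math

# Illustrative only: a tiny stand-in scaler used to demonstrate the
# round-trip test pattern. Not the AMMM StandardScaler implementation.
class ToyStandardScaler:
    def fit(self, values):
        self.mean = sum(values) / len(values)
        var = sum((v - self.mean) ** 2 for v in values) / len(values)
        # Guard against zero variance so transform never divides by zero.
        self.std = math.sqrt(var) or 1.0
        return self

    def transform(self, values):
        return [(v - self.mean) / self.std for v in values]

    def inverse_transform(self, values):
        return [v * self.std + self.mean for v in values]

def test_standard_scaler_round_trip():
    data = [1.0, 2.0, 3.0, 4.0]
    scaler = ToyStandardScaler().fit(data)
    restored = scaler.inverse_transform(scaler.transform(data))
    assert all(math.isclose(a, b) for a, b in zip(restored, data))
```

Round-trip tests are valuable because they exercise fit, transform, and inverse_transform together without hard-coding expected intermediate values.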
Integration Tests¶
demo/runme.py: This script serves as a crucial end-to-end integration test. It loads a configuration, preprocesses data, builds and fits an MMM, and generates outputs. Successful execution of demo/runme.py without errors indicates that the main modeling pipeline is functioning correctly.
Other Potential Integration Points: As the library evolves, more focused integration tests might be added to test interactions between specific sub-packages or complex features.
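One low-effort way to fold such a script into the pytest suite is to run it in a subprocess and assert a clean exit. This is a sketch under the assumption that the script runs headlessly; the path in the commented usage is illustrative:

```python
import subprocess
import sys

def run_pipeline_script(path):
    """Run a Python script in a subprocess and return its exit code."""
    result = subprocess.run(
        [sys.executable, str(path)],
        capture_output=True,
        text=True,
    )
    return result.returncode

# In a test module this could be used roughly as follows (adjust the
# path to your repository layout):
#
#   def test_demo_pipeline_runs():
#       assert run_pipeline_script("demo/runme.py") == 0
```

Capturing output keeps pytest logs clean, and the returned exit code gives the test a single, unambiguous pass/fail signal.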
Running Tests¶
Prerequisites¶
Ensure you have a Python environment set up with AMMM and its development dependencies installed. This typically involves:
pip install -e .[dev]
# or, if you have separate requirements files:
# pip install -r requirements.txt
# pip install -r requirements-dev.txt
(The .[dev] extra is defined in pyproject.toml.)
Running All Tests¶
Navigate to the root directory of the AMMM project.
Execute Pytest:
pytest
Or, for more verbose output:
pytest -v
Running Specific Tests¶
Run tests in a specific file:
pytest tests/core/test_base.py
Run a specific test class:
pytest tests/prepro/test_prepro.py::TestStandardScaler
Run a specific test function:
pytest tests/prepro/test_prepro.py::TestStandardScaler::test_fit_transform
Run tests matching a keyword expression (-k):
pytest -k "StandardScaler and not inverse"
Test Coverage (Future Consideration)¶
While not explicitly set up with a coverage tool in the initial phases, generating test coverage reports is a good practice for identifying untested parts of the codebase.
Tools like pytest-cov can be integrated:
pip install pytest-cov
pytest --cov=ammm --cov-report=html
This would generate an HTML report in htmlcov/ showing code coverage.
Current Test Status¶
As of recent development (refer to memory-bank/progress.md), the test suite was passing with a specific number of tests (e.g., “268 passed, 4 skipped”). This number will change as tests are added or modified.
The tests/prepro/test_prepro.py file, for instance, contains numerous tests for custom scalers and preprocessing components.
Maintaining a robust test suite is essential for the long-term health and stability of the AMMM library. Developers are encouraged to write tests for new features and bug fixes.