Testing and Tooling in Python¶
Version: 0.2 Year: 2026
Copyright Notice¶
Copyright (c) 2025-2026 Ryan Thomas Robson / Robworks Software LLC. Licensed under CC BY-NC-ND 4.0. You may share this material for non-commercial purposes with attribution, but you may not distribute modified versions.
As your automation scripts grow from quick hacks to production tools, you need confidence that they work correctly and stay maintainable. Python has a mature ecosystem for testing, code quality, and dependency management that turns scripts into reliable software.
Unit Testing with pytest¶
While Python includes the built-in unittest module, pytest is the industry standard. Its plain assert statements, automatic test discovery, and powerful fixture system make tests easier to write and read.
Writing Your First Test¶
pytest discovers files matching test_*.py or *_test.py and runs functions that start with test_.
# server_utils.py
def format_server_name(name):
    """Normalize a server name to lowercase kebab-case."""
    return name.strip().lower().replace(" ", "-")

def parse_host_port(address):
    """Split 'host:port' into (host, port) tuple."""
    host, port_str = address.rsplit(":", 1)
    return host, int(port_str)
# test_server_utils.py
from server_utils import format_server_name, parse_host_port
import pytest

def test_format_strips_whitespace():
    assert format_server_name(" WEB01 ") == "web01"

def test_format_replaces_spaces():
    assert format_server_name("DB Server 01") == "db-server-01"

def test_format_lowercases():
    assert format_server_name("CacheNode") == "cachenode"

def test_parse_host_port():
    assert parse_host_port("db01:5432") == ("db01", 5432)

def test_parse_host_port_invalid():
    with pytest.raises(ValueError):
        parse_host_port("no-port-here")
$ pytest -v
test_server_utils.py::test_format_strips_whitespace PASSED
test_server_utils.py::test_format_replaces_spaces PASSED
test_server_utils.py::test_format_lowercases PASSED
test_server_utils.py::test_parse_host_port PASSED
test_server_utils.py::test_parse_host_port_invalid PASSED
pytest discovery conventions
Keep test files next to the code they test, or in a tests/ directory. Name test files test_<module>.py and test functions test_<behavior>. pytest finds them automatically - no registration or test suites needed.
Fixtures¶
Fixtures provide test dependencies (data, connections, temporary files) that are set up before each test and cleaned up after. They replace the setup/teardown pattern from unittest.
import pytest
import json
from pathlib import Path

@pytest.fixture
def sample_config(tmp_path):
    """Create a temporary config file for testing."""
    config = {"hostname": "test-server", "port": 8080, "debug": True}
    config_file = tmp_path / "config.json"
    config_file.write_text(json.dumps(config))
    return config_file

def test_load_config(sample_config):
    """Test that config loading works with a real file."""
    with open(sample_config) as f:
        data = json.load(f)
    assert data["hostname"] == "test-server"
    assert data["port"] == 8080

def test_config_file_exists(sample_config):
    """Test that the fixture creates a valid file."""
    assert sample_config.exists()
    assert sample_config.suffix == ".json"
tmp_path is a built-in pytest fixture that provides a temporary directory unique to each test. pytest cleans these directories up automatically, keeping only the few most recent runs around for debugging.
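The sample_config fixture above simply returns its value. When a fixture also needs teardown, yield the value instead and put the cleanup after the yield; pytest runs that code once the test finishes, even if it failed. A minimal sketch (the app.log name is made up for illustration):

```python
import pytest

@pytest.fixture
def log_file(tmp_path):
    # Setup: runs before the test
    path = tmp_path / "app.log"
    path.write_text("starting\n")
    yield path
    # Teardown: runs after the test, even if it failed
    path.unlink(missing_ok=True)

def test_log_has_content(log_file):
    assert "starting" in log_file.read_text()
```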
Parametrize¶
Run the same test with multiple inputs:
@pytest.mark.parametrize("input_name,expected", [
    (" WEB01 ", "web01"),
    ("DB Server 01", "db-server-01"),
    ("CacheNode", "cachenode"),
    ("already-formatted", "already-formatted"),
    ("UPPER CASE NAME", "upper-case-name"),
])
def test_format_server_name(input_name, expected):
    assert format_server_name(input_name) == expected
This generates 5 separate test cases from a single test function, each with a clear pass/fail status.
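If a parametrized case fails, pytest identifies it by its argument values. For more readable output you can name each case with pytest.param and an id. A small sketch that repeats the helper so it stands alone:

```python
import pytest

def format_server_name(name):
    return name.strip().lower().replace(" ", "-")

@pytest.mark.parametrize(
    "input_name,expected",
    [
        pytest.param(" WEB01 ", "web01", id="strips-whitespace"),
        pytest.param("DB Server 01", "db-server-01", id="kebab-case"),
        pytest.param("CacheNode", "cachenode", id="lowercases"),
    ],
)
def test_format_server_name(input_name, expected):
    assert format_server_name(input_name) == expected
```

pytest reports these as test_format_server_name[strips-whitespace] and so on, which makes failures self-describing in CI logs.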
Mocking¶
When your code calls external services, databases, or system commands, you don't want tests to depend on those systems being available. unittest.mock replaces real dependencies with controlled substitutes.
from unittest.mock import patch, MagicMock
import subprocess

def is_service_running(name):
    """Check if a systemd service is active."""
    result = subprocess.run(
        ["systemctl", "is-active", name],
        capture_output=True, text=True
    )
    return result.stdout.strip() == "active"

@patch("subprocess.run")
def test_service_running(mock_run):
    mock_run.return_value = MagicMock(stdout="active\n", returncode=0)
    assert is_service_running("nginx") is True

@patch("subprocess.run")
def test_service_not_running(mock_run):
    mock_run.return_value = MagicMock(stdout="inactive\n", returncode=3)
    assert is_service_running("nginx") is False
Don't over-mock
Mocking is a tool, not a goal. If you mock every dependency, your tests verify your mocks, not your code. Mock at boundaries (external APIs, system commands, databases) but let internal logic run for real. A test that mocks everything and passes doesn't prove anything works.
When to Mock vs When to Use Real Objects¶
| Situation | Approach |
|---|---|
| External API calls | Mock - don't hit real APIs in tests |
| System commands (systemctl, docker) | Mock - tests shouldn't require services running |
| File operations | Use tmp_path fixture with real files |
| Pure functions (string formatting, math) | No mocking needed - test directly |
| Database queries | Use a test database or mock the connection |
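For the first row, here is a hedged sketch of mocking an HTTP call using only the standard library. fetch_health and the URL are hypothetical names; the test patches urllib.request.urlopen so no network traffic happens:

```python
from unittest.mock import patch, MagicMock
import json
import urllib.request

def fetch_health(url):
    """Return the parsed JSON health payload from a monitoring endpoint."""
    with urllib.request.urlopen(url) as resp:  # real network call in production
        return json.loads(resp.read())

@patch("urllib.request.urlopen")
def test_fetch_health_ok(mock_urlopen):
    # Build a fake response object that behaves like the real one
    fake = MagicMock()
    fake.read.return_value = b'{"status": "ok"}'
    fake.__enter__.return_value = fake  # support the `with` statement
    mock_urlopen.return_value = fake
    assert fetch_health("https://example.invalid/health") == {"status": "ok"}
    mock_urlopen.assert_called_once()
```

The same pattern works for requests or any other HTTP client: patch the function where your code looks it up, and hand back a stand-in with just the attributes your code touches.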
Test Coverage¶
Coverage measures which lines of code your tests execute. It doesn't guarantee correctness, but uncovered code is definitely untested.
pip install pytest-cov
# Run tests with coverage report
pytest --cov=server_utils --cov-report=term-missing
# Output:
# Name               Stmts   Miss  Cover   Missing
# ------------------------------------------------
# server_utils.py       12      2    83%   15, 22
The Missing column tells you exactly which lines need tests. Aim for 80-90% coverage on critical code. 100% coverage is rarely worth the effort for utility scripts.
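A coverage floor can also live in pyproject.toml so CI fails when coverage drops below your target. A sketch using coverage.py's fail_under and show_missing options; tune the threshold to your project:

```toml
[tool.coverage.run]
source = ["server_utils"]

[tool.coverage.report]
fail_under = 80       # non-zero exit if total coverage is below 80%
show_missing = true   # always print the Missing column
```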
Code Quality Tools¶
Linting with ruff¶
ruff is the modern Python linter - it replaces flake8, isort, pycodestyle, and dozens of other tools in a single fast binary.
pip install ruff
# Check for issues
ruff check .
# Fix auto-fixable issues
ruff check --fix .
# Format code (replaces black)
ruff format .
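Which rules ruff enforces is configurable in pyproject.toml. A minimal sketch; the codes are ruff's rule families (E/W from pycodestyle, F from pyflakes, I for import sorting, UP from pyupgrade):

```toml
[tool.ruff.lint]
select = ["E", "F", "I", "UP"]  # style, logic errors, import order, modern syntax
ignore = ["E501"]               # example: let the formatter own line length
```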
Formatting with black¶
black is the "uncompromising" code formatter: it makes stylistic decisions for you, eliminating debates over formatting. ruff format (above) is designed as a drop-in replacement that produces nearly identical output, so pick one and stick with it; many existing projects still standardize on black.
pip install black
# Format a file
black my_script.py
# Check without modifying (useful in CI)
black --check my_script.py
Type Checking with mypy¶
mypy checks type annotations without running your code, catching bugs like passing a string where an integer is expected.
# server_utils.py (with type annotations)
def format_server_name(name: str) -> str:
    return name.strip().lower().replace(" ", "-")

def parse_host_port(address: str) -> tuple[str, int]:
    host, port_str = address.rsplit(":", 1)
    return host, int(port_str)
Type annotations are optional in Python, but they become valuable as your codebase grows. Start by annotating function signatures - you don't need to annotate every variable.
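To see what mypy buys you, consider a call that passes the wrong type. The error below is roughly what current mypy versions report; nothing has to execute for the bug to be caught:

```python
def parse_host_port(address: str) -> tuple[str, int]:
    host, port_str = address.rsplit(":", 1)
    return host, int(port_str)

host, port = parse_host_port("db01:5432")   # type-checks: str in, (str, int) out

# parse_host_port(5432)   # mypy flags this line before it ever runs, roughly:
# error: Argument 1 to "parse_host_port" has incompatible type "int"; expected "str"
```

At runtime the bad call would crash with an AttributeError deep inside rsplit; mypy points at the actual mistake, the call site, instead.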
Project Structure¶
Once a script grows beyond a single file, organizing it properly makes it testable and distributable.
Minimal Project Layout¶
my-tool/
├── my_tool/
│ ├── __init__.py # Makes this a Python package
│ ├── cli.py # Command-line interface (argparse)
│ ├── checks.py # Business logic (service checks, disk checks)
│ └── utils.py # Shared utilities
├── tests/
│ ├── test_checks.py
│ └── test_utils.py
├── pyproject.toml # Project metadata, dependencies, tool config
├── README.md
└── .gitignore
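To make the layout concrete, here is a sketch of what cli.py might contain. run_checks is a hypothetical stand-in for the real logic that would live in checks.py:

```python
import argparse

def run_checks(host: str) -> int:
    """Hypothetical stand-in for the logic that would live in checks.py."""
    print(f"checking {host}...")
    return 0  # exit code: 0 means healthy

def main(argv=None) -> int:
    parser = argparse.ArgumentParser(prog="my-tool", description="System health checker")
    parser.add_argument("host", help="server to check")
    parser.add_argument("--verbose", action="store_true", help="print extra detail")
    args = parser.parse_args(argv)
    return run_checks(args.host)

if __name__ == "__main__":
    raise SystemExit(main())
```

Keeping main() a plain function that accepts an argv list makes it testable without a subprocess, and the [project.scripts] entry in pyproject.toml can map the my-tool command directly to my_tool.cli:main.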
pyproject.toml¶
pyproject.toml is the modern standard for Python project configuration. It replaces setup.py, setup.cfg, and tool-specific config files.
[project]
name = "my-tool"
version = "0.1.0"
description = "System health checker"
requires-python = ">=3.10"
dependencies = [
    "requests>=2.28",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.0",
    "pytest-cov",
    "ruff",
    "mypy",
]
[project.scripts]
my-tool = "my_tool.cli:main"
[tool.pytest.ini_options]
testpaths = ["tests"]
[tool.ruff]
line-length = 100
[tool.mypy]
strict = true
Pre-commit Hooks¶
pre-commit runs checks automatically before each git commit, catching issues before they reach code review.
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.3.0
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
# Install the hooks into your git repo
pre-commit install
# Run against all files (useful for first-time setup)
pre-commit run --all-files
Further Reading¶
- pytest Documentation - fixtures, parametrize, plugins, and configuration
- Real Python: Effective Testing - practical testing strategies and patterns
- Ruff Documentation - fast linter and formatter replacing flake8, isort, and more
- Python Packaging Guide - the official guide to pyproject.toml
- pre-commit Documentation - automated code quality checks on every commit