Testing

Perl's Testing Culture

Version: 1.1 Year: 2026


Copyright (c) 2025-2026 Ryan Thomas Robson / Robworks Software LLC. Licensed under CC BY-NC-ND 4.0. You may share this material for non-commercial purposes with attribution, but you may not distribute modified versions.


Perl has one of the oldest and most deeply embedded testing cultures in programming. Every CPAN distribution ships with tests. The prove command and TAP protocol originated in Perl. When you install a module with cpanm, the test suite runs automatically before the module lands on your system.


Test::More Basics

Test::More is the standard testing module that ships with Perl. It provides functions that compare actual results against expected values and produce structured output.

Your First Test File

Test files live in a t/ directory and have the .t extension. They are plain Perl scripts:

# t/basic.t
use strict;
use warnings;
use Test::More tests => 4;

ok(1 + 1 == 2, 'addition works');
is(lc('HELLO'), 'hello', 'lc lowercases a string');
like('user@example.com', qr/@/, 'email contains @');
isnt('foo', 'bar', 'foo is not bar');

The tests => 4 declaration tells the harness how many tests to expect. If the script exits early or runs extra tests, the harness flags it as a failure.

Core Test Functions

Function Purpose
ok($test, $name) Passes if $test is true
is($got, $expected, $name) Passes if $got eq $expected
isnt($got, $unexpected, $name) Passes if $got ne $unexpected
like($got, qr/regex/, $name) Passes if $got matches the regex
unlike($got, qr/regex/, $name) Passes if $got does not match
is_deeply($got, $expected, $name) Deep comparison of structures
can_ok($module, @methods) Checks a module has the methods
isa_ok($obj, $class) Checks an object's class
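The structure-and-class helpers from the table can be exercised with nothing but core Perl; Scalar::Util ships with Perl, and My::Class below is a throwaway package name used purely for illustration:

```perl
use strict;
use warnings;
use Test::More tests => 3;
use Scalar::Util ();   # core module, used here as a can_ok target

# Deep comparison: nested hashes and arrays compared element by element
is_deeply(
    { name => 'Alice', roles => [ 'admin', 'user' ] },
    { name => 'Alice', roles => [ 'admin', 'user' ] },
    'nested structure matches',
);

can_ok('Scalar::Util', 'blessed');            # module provides the method
isa_ok(bless({}, 'My::Class'), 'My::Class');  # object is of the class
```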

ok vs. is vs. is_deeply

ok only reports pass/fail. is shows both the expected and received values on failure. is_deeply handles nested arrays, hashes, and mixed structures:

# ok - minimal failure output: just "not ok"
ok($result == 42, 'answer is 42');

# is - shows got vs. expected on failure
is($result, 42, 'answer is 42');
#   got: 41
#   expected: 42

# is_deeply - deep comparison for nested structures
is_deeply($config, { host => 'localhost', port => 8080 }, 'config defaults');

Always prefer is() over ok()

When comparing values, is() provides dramatically better failure messages than ok(). The line ok($x == 5) tells you only that the check failed. is($x, 5) tells you what $x actually was.

Plan Strategies

use Test::More tests => 10;       # exact count - strictest
use Test::More;                   # no plan - call done_testing() at end

# skip_all must go through plan() so the condition is checked at runtime:
use Test::More;
plan skip_all => 'No database available' unless $ENV{TEST_DB};

The done_testing() function tells the harness you finished successfully. Use it when the number of tests varies at runtime.
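For example, when test cases come from a data table whose size may change, done_testing() lets the plan follow the data. The add sub here is a stand-in for the code under test:

```perl
use strict;
use warnings;
use Test::More;

sub add { $_[0] + $_[1] }   # stand-in for the function being tested

my @cases = ( [ 1, 1, 2 ], [ 2, 3, 5 ], [ 10, -4, 6 ] );
for my $case (@cases) {
    my ($x, $y, $want) = @$case;
    is( add($x, $y), $want, "$x + $y == $want" );
}

done_testing();   # plan is however many tests actually ran
```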


The TAP Protocol

Test Anything Protocol (TAP) is the text-based format that Perl test scripts produce, now used across many languages.

TAP Format

1..4
ok 1 - addition works
not ok 2 - subtraction is broken
#          got: '7'
#     expected: '6'
ok 3 - multiplication
ok 4 # skip no database configured

Element Meaning
1..N Plan line - expect N tests
ok N / not ok N Pass / fail
# text Diagnostic comment
# skip / # TODO Skipped / expected failure

SKIP and TODO

SKIP marks tests that cannot run in the current environment. TODO marks tests that are expected to fail:

SKIP: {
    skip 'No network available', 2 unless $ENV{TEST_NETWORK};
    ok(ping('example.com'), 'can reach example.com');
    ok(fetch('https://example.com'), 'can fetch page');
}

TODO: {
    local $TODO = 'Unicode normalization not yet implemented';
    is(normalize("\x{e9}"), "e\x{301}", 'decomposes e-acute');
}

Both count as passed in the harness. A TODO test that unexpectedly passes is reported by the harness as having unexpectedly succeeded - a signal that the feature now works and the TODO marker can be removed.


prove: The Test Runner

prove is Perl's standard test harness. It finds test files, runs them, parses TAP output, and summarizes results.

Basic Usage

prove                     # run all tests in t/
prove t/specific.t        # run one test file
prove -v                  # verbose - show individual test results
prove -l                  # add lib/ to @INC
prove -r t/               # recurse into subdirectories
prove -j4                 # run 4 tests in parallel

Test Organization

myproject/
  lib/
    MyApp.pm
    MyApp/
      Config.pm
      Database.pm
  t/
    00-load.t           # verify modules compile
    01-config.t         # unit tests for Config
    02-database.t       # unit tests for Database
    03-integration.t    # integration tests
  cpanfile

Prefix test files with numbers for execution order. Use 00-load.t to verify modules compile. Group tests by module or feature, and separate unit tests from integration tests.
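A minimal 00-load.t looks like the sketch below; core modules stand in for MyApp::* here so the example runs anywhere - substitute the names from your own lib/ tree:

```perl
# t/00-load.t - verify every module at least compiles
use strict;
use warnings;
use Test::More;

# Replace with your own modules, e.g. MyApp, MyApp::Config, MyApp::Database
my @modules = qw(List::Util Scalar::Util File::Spec);
require_ok($_) for @modules;

done_testing();
```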

flowchart TD
    A["t/ directory"] --> B["00-load.t\nModule compilation checks"]
    A --> C["01-unit-*.t\nUnit tests per module"]
    A --> D["02-integration-*.t\nCross-module tests"]
    A --> E["03-acceptance-*.t\nEnd-to-end tests"]
    B --> F["prove parses TAP\nfrom each file"]
    C --> F
    D --> F
    E --> F
    F --> G["Summary:\nFiles, Tests, Pass/Fail"]

Common prove Flags

Flag Effect
-v Verbose output
-l Add lib/ to @INC
-r Recurse into subdirectories
-j N Run N test files in parallel
--shuffle Randomize test file order
--state=save Re-run failures first next time
--timer Show elapsed time per file

Subtests

Subtests group related tests under a single test point. A subtest counts as one test in the parent plan, regardless of how many assertions it contains:

use Test::More;

subtest 'addition' => sub {
    is(add(1, 2), 3, 'positive');
    is(add(-1, -2), -3, 'negative');
    is(add(0, 0), 0, 'zeros');
};

done_testing();

Subtests produce nested TAP and provide natural setup/teardown boundaries - variables declared inside do not leak out.
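A sketch of that scoping, with setup data confined to the subtest; the inline key=value parser is a hypothetical stand-in for real setup work:

```perl
use strict;
use warnings;
use Test::More;

subtest 'config parsing' => sub {
    # Setup lives only inside this subtest
    my $raw    = "host=localhost\nport=8080";
    my %config = map { split /=/, $_, 2 } split /\n/, $raw;

    is( $config{host}, 'localhost', 'host parsed' );
    is( $config{port}, 8080,        'port parsed' );
};

# $raw and %config are out of scope here - no teardown needed
done_testing();
```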


Test2::Suite - The Modern API

Test2::Suite is the next generation of Perl testing tools, providing a cleaner API and better diagnostics while remaining TAP-compatible.

Test2::V0

The Test2::V0 bundle imports the most common functions:

use Test2::V0;

is(add(2, 3), 5, 'addition');
like($email, qr/@/, 'has at-sign');

# Deep comparison with structured validators
is($config, hash {
    field host => 'localhost';
    field port => 8080;
    end();     # no extra keys allowed
}, 'config structure matches');

done_testing();

Advanced Comparisons

Test2::V0 provides structured validators for arrays, hashes, and objects:

use Test2::V0;

is(\@results, array {
    item 0 => 'first';
    item 1 => match qr/^\d+$/;
    end();
}, 'array matches pattern');

is($user, object {
    call name => 'Alice';
    call age  => 30;
}, 'user has expected attributes');

Test2::V0 vs. Test::More

For new projects, Test2::V0 is the recommended choice. It provides better error messages and structured validators. Test::More is still widely used and works fine - you do not need to rewrite existing test suites.


Exception Testing

Test::Exception provides functions for testing that code throws or does not throw:

use Test::Exception;

dies_ok  { divide(1, 0) }  'division by zero dies';
lives_ok { divide(10, 2) } 'valid division lives';
throws_ok { divide(1, 0) } qr/division by zero/i, 'correct error message';

Function Tests that...
dies_ok { } Code dies
lives_ok { } Code does not die
throws_ok { } qr// Code dies with matching message
throws_ok { } 'Class' Code dies with object of given class

With Test2::V0, use dies and lives instead:

use Test2::V0;
my $err = dies { divide(1, 0) };
like($err, qr/division by zero/i, 'correct error');
ok(lives { divide(10, 2) }, 'valid division succeeds');

Mocking

Test::MockModule

Test::MockModule replaces subroutines within a module for the duration of a test. Originals are restored when the mock object goes out of scope:

use Test::MockModule;
my $mock = Test::MockModule->new('WebClient');
$mock->mock('fetch', sub { { status => 200, body => '{"ok":true}' } });

my $result = WebClient->fetch('https://api.example.com/status');
is($result->{status}, 200, 'mocked status code');
# Original fetch restored when $mock goes out of scope

Test::MockObject

Test::MockObject creates fake objects from scratch:

my $fake_db = Test::MockObject->new();
$fake_db->mock('query', sub { [{ id => 1, name => 'Alice' }] });
my $users = UserService->new(db => $fake_db)->list_users();
is(scalar @$users, 1, 'got one user from mock');

Do not over-mock

Mock external boundaries (databases, network calls, filesystems) but test your own code with real objects. If you find yourself mocking internal methods, the code may need refactoring.


Test Fixtures

Test fixtures are known data sets that put tests in a predictable state. Store test data in t/fixtures/ and use helper functions for setup/teardown:

# File-based fixtures
my $input = read_text('t/fixtures/sample.csv');
is_deeply(MyParser->parse($input), decode_json(read_text('t/fixtures/expected.json')));

# Setup helper for database tests
sub fresh_database {
    my $db = TestDB->new(':memory:');
    $db->deploy_schema();
    $db->load_fixtures('t/fixtures/seed.sql');
    return $db;   # cleaned up when $db goes out of scope
}

Test Coverage with Devel::Cover

Devel::Cover instruments your code and reports which lines, branches, conditions, and subroutines your tests exercise.

cover -delete                                      # clean previous data
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -l t/   # run tests under coverage
cover -report html                                 # generate HTML report

Metric What it measures
Statement Was each line executed?
Branch Was each side of every if/unless/while taken?
Condition Was each sub-expression in &&/|| tested both ways?
Subroutine Was each function called at least once?
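To see why branch and condition coverage differ, consider this hypothetical discount sub: a single qualifying call satisfies statement coverage, but branch coverage requires the if to go both ways, and condition coverage requires $member and the total check to each be tested true and false:

```perl
use strict;
use warnings;

# Hypothetical sub used to illustrate the coverage metrics
sub discount {
    my ($total, $member) = @_;
    # Branch: taken and not taken; Condition: each side of && both ways
    return $total * 0.9 if $member && $total >= 100;
    return $total;
}

# Calls needed for full branch and condition coverage:
discount(200, 1);   # both conditions true  -> discounted
discount(200, 0);   # $member false         -> full price
discount(50,  1);   # total below threshold -> full price
```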

Target 80-90% statement coverage. Focus on public API methods, error handling paths, and boundary conditions.

Coverage is a floor, not a ceiling

High coverage does not mean your tests are good. Coverage tells you where tests are missing, not where they are sufficient.


Test-Driven Development

Test-driven development (TDD) writes a failing test before the code that makes it pass:

flowchart LR
    A["RED\nWrite a failing test"] --> B["GREEN\nWrite minimal code\nto pass the test"]
    B --> C["REFACTOR\nClean up code\nwhile tests stay green"]
    C --> A

TDD in Practice

RED - Write the test first:

use Test::More;
use lib './lib';
use StringUtils qw(capitalize_words);

is(capitalize_words('hello world'), 'Hello World', 'basic');
is(capitalize_words('ALREADY UP'), 'Already Up', 'handles uppercase');
is(capitalize_words(''), '', 'empty string');
done_testing();

Running prove -l t/string_utils.t fails because capitalize_words does not exist.

GREEN - Write the minimal implementation:

sub capitalize_words {
    my $str = shift // '';
    return join ' ', map { ucfirst(lc($_)) } split /\s+/, $str;
}

REFACTOR - The tests pass. Review the code, clean up, and move on to the next feature. Because each behavior begins as a failing test, the cycle ensures every behavior you implement is covered by a test.


CI/CD Integration

Automated testing in CI catches regressions before they reach production.

GitHub Actions Configuration

# .github/workflows/test.yml
name: Test Suite
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        perl-version: ['5.32', '5.36', '5.38']
    name: Perl ${{ matrix.perl-version }}
    steps:
      - uses: actions/checkout@v4
      - uses: shogo82148/actions-setup-perl@v1
        with:
          perl-version: ${{ matrix.perl-version }}
      - run: cpanm --installdeps --notest .
      - run: prove -l -j4 t/
      - name: Coverage report
        if: matrix.perl-version == '5.38'
        run: |
          cpanm --notest Devel::Cover
          cover -test
          cover -report html

The matrix strategy tests against multiple Perl versions. Coverage runs on one version to avoid redundant reports.

Never skip failing tests in CI

If tests fail in CI, fix them. Do not add || true to mask failures or mark tests as TODO to avoid investigation. Each ignored failure erodes trust in the test suite.


Quick Reference

Task Command / Code
Run all tests prove -l t/
Verbose single test prove -lv t/specific.t
Parallel tests prove -l -j4 t/
Compare values is($got, $expected, $name)
Compare structures is_deeply(\%got, \%expected, $name)
Regex match like($got, qr/pattern/, $name)
Test exception eval { code() }; like($@, qr/error/)
Skip / TODO skip 'reason', $count / local $TODO = 'reason'
Coverage cover -test && cover -report html
