Understanding Test Reports

This guide explains how to interpret test reports and coverage metrics.

The HTML report (reports/html/report.html) shows:

  • Summary: Pass/fail counts, duration
  • Environment: Python version, platform, plugins
  • Results Table: Each test with status and duration
  • Failure Details: Full tracebacks for failed tests
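A report at that path can be produced with the pytest-html and pytest-cov plugins; one way to wire it up is a pytest config fragment like the following (a sketch, assuming both plugins are installed and the source lives under src/):

```ini
; pytest.ini -- assumed project config
[pytest]
addopts =
    --html=reports/html/report.html
    --self-contained-html
    --cov=src
    --cov-report=html:reports/coverage
```

With this in place, a plain pytest run regenerates both reports described in this guide.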

The coverage report (reports/coverage/index.html) provides:

  • Overall Coverage: Percentage of lines covered
  • Module Breakdown: Coverage per file
  • Line-by-Line View: Green (covered), red (missed), yellow (partial)
  • Branch Coverage: Conditional statement coverage
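Branch coverage is off by default in coverage.py; a minimal .coveragerc sketch that enables it and writes the HTML report to the path above (directory name is this guide's layout, not a coverage.py default):

```ini
; .coveragerc -- assumed config for coverage.py
[run]
branch = True

[html]
directory = reports/coverage
```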

The coverage report tracks:

Metric              Description
Line Coverage       Percentage of source lines executed
Branch Coverage     Percentage of conditional branches taken
Function Coverage   Percentage of functions called
Missing Lines       Lines not covered by any test
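The percentage metrics above all reduce to the same arithmetic. A hypothetical helper (coverage_percent is not part of any library, just an illustration):

```python
def coverage_percent(covered: int, total: int) -> float:
    """Return covered/total as a percentage.

    An empty module (total == 0) is conventionally reported as fully
    covered, since there is nothing left to execute.
    """
    if total == 0:
        return 100.0
    return 100.0 * covered / total

# 178 of 200 source lines executed -> 89.0% line coverage
print(coverage_percent(178, 200))  # 89.0
```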

Different module types have different coverage targets:

Module Type   Target   Rationale
Routers       80%+     Critical user-facing paths
Models        90%+     Data integrity is essential
Auth          95%+     Security-critical code
Utilities     70%+     Helper functions

  • High-risk modules (auth, payments) need near-complete coverage
  • Business logic (routers) should have thorough path coverage
  • Data models are typically straightforward and easy to test
  • Utilities often contain rarely exercised edge cases, so a somewhat lower target is acceptable
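These per-type targets can be enforced mechanically. A sketch that reads a dict shaped like coverage.py's `coverage json` output; the path prefixes in TARGETS are assumptions about this project's layout, and failing_modules is a hypothetical helper:

```python
# Targets from the table above, keyed by assumed path prefix.
TARGETS = {
    "app/routers/": 80.0,
    "app/models/": 90.0,
    "app/auth/": 95.0,
    "app/utils/": 70.0,
}

def failing_modules(report: dict) -> list[str]:
    """List files whose coverage falls below their module-type target.

    `report` follows the shape of coverage.py's `coverage json` output:
    {"files": {path: {"summary": {"percent_covered": float}}}}.
    """
    failures = []
    for path, data in report["files"].items():
        pct = data["summary"]["percent_covered"]
        for prefix, target in TARGETS.items():
            if path.startswith(prefix) and pct < target:
                failures.append(f"{path}: {pct:.1f}% < {target:.0f}%")
    return failures

report = {"files": {
    "app/auth/tokens.py": {"summary": {"percent_covered": 91.0}},
    "app/utils/text.py": {"summary": {"percent_covered": 75.0}},
}}
# Only the auth module misses its target (91% < 95%); utils clears 70%.
print(failing_modules(report))
```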

Good test coverage is not just about hitting a percentage target:

  1. Critical paths covered - All happy paths and common error cases
  2. Edge cases tested - Boundary conditions, null values, empty collections
  3. Error handling verified - Exceptions are caught and handled correctly
  4. Integration points validated - Database, external APIs, authentication
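The first three points can be made concrete with a toy example. clamp below is a hypothetical helper invented purely to illustrate the checklist; each comment maps an assertion back to a point above:

```python
def clamp(value, lo, hi):
    """Constrain value to the inclusive range [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(value, hi))

# 1. Critical path: the common, in-range case
assert clamp(5, 0, 10) == 5

# 2. Edge cases: exact boundaries and out-of-range values
assert clamp(0, 0, 10) == 0 and clamp(10, 0, 10) == 10
assert clamp(-1, 0, 10) == 0 and clamp(11, 0, 10) == 10

# 3. Error handling: invalid input raises instead of returning garbage
try:
    clamp(1, 10, 0)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```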

Coverage metrics don’t measure:

  • Test quality - A test that just calls code without assertions doesn’t validate behavior
  • Assertion strength - assert response is weaker than assert response.status_code == 200
  • Real-world scenarios - Edge cases that haven’t been thought of yet
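The assertion-strength point is easy to demonstrate. FakeResponse is a stand-in for an HTTP response object, assumed for illustration only:

```python
class FakeResponse:
    """Minimal stand-in for an HTTP client response."""
    def __init__(self, status_code: int):
        self.status_code = status_code

resp = FakeResponse(500)

# Weak: any non-None object is truthy, so this passes even for a 500 error.
assert resp

# Strong: this assertion would fail here and surface the bug.
# assert resp.status_code == 200
```

Both versions execute the same lines, so coverage reports them identically; only the strong assertion actually validates behavior.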

A practical workflow for acting on these reports:

  1. Start with failures - Fix broken tests before adding new ones
  2. Check critical paths - Ensure auth, payment, data integrity are well-covered
  3. Look for patterns - Missing coverage in similar modules suggests systematic gaps
  4. Review new code - Ensure new features have corresponding tests
  5. Balance speed and thoroughness - Not every line needs a test, focus on risk

When reviewing pull requests:

  • Check coverage delta - New code should maintain or improve coverage
  • Review test quality - Look at the actual test assertions, not just counts
  • Verify edge cases - Ensure tests cover failure modes, not just happy paths
  • Consider maintainability - Tests should be readable and easy to update
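The coverage-delta check can be automated. A sketch that compares two reports shaped like coverage.py's `coverage json` output (the "totals"/"percent_covered" keys match that format; coverage_delta itself is a hypothetical helper):

```python
def coverage_delta(base: dict, head: dict) -> float:
    """Overall coverage change from the base branch to the PR head.

    Both arguments follow coverage.py's JSON report shape:
    {"totals": {"percent_covered": float}}.
    """
    return head["totals"]["percent_covered"] - base["totals"]["percent_covered"]

base = {"totals": {"percent_covered": 87.5}}
head = {"totals": {"percent_covered": 85.0}}

delta = coverage_delta(base, head)
print(f"{delta:+.1f}%")  # negative delta: coverage dropped, flag in review
```

A CI job could fail the build whenever the delta is negative, enforcing the "maintain or improve" rule automatically.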