# Understanding Test Reports

This guide explains how to interpret test reports and coverage metrics.
## pytest HTML Report

The HTML report (`reports/html/report.html`) shows:
- Summary: Pass/fail counts, duration
- Environment: Python version, platform, plugins
- Results Table: Each test with status and duration
- Failure Details: Full tracebacks for failed tests
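
To (re)generate this report, a typical invocation with the pytest-html plugin looks like the following (the `tests/` directory is an assumption about the project layout):

```sh
# Requires the pytest-html plugin (pip install pytest-html).
# --self-contained-html inlines CSS/JS so the single file can be shared as-is.
pytest tests/ --html=reports/html/report.html --self-contained-html
```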
## Coverage Report

The coverage report (`reports/coverage/index.html`) provides:
- Overall Coverage: Percentage of lines covered
- Module Breakdown: Coverage per file
- Line-by-Line View: Green (covered), red (missed), yellow (partial)
- Branch Coverage: Conditional statement coverage
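
A typical way to produce this report is via the pytest-cov plugin; a sketch, assuming the source package lives under `src/`:

```sh
# Requires pytest-cov (pip install pytest-cov).
# --cov-branch enables branch coverage; the HTML output lands in reports/coverage/.
pytest --cov=src --cov-branch --cov-report=html:reports/coverage
```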
## Coverage Metrics Explained

The coverage report tracks:
| Metric | Description |
|---|---|
| Line Coverage | Percentage of source lines executed |
| Branch Coverage | Percentage of conditional branches taken |
| Function Coverage | Percentage of functions called |
| Missing Lines | Lines not covered by any test |
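
To see how line and branch coverage differ, consider this minimal, hypothetical example: one test can execute every line of a function (100% line coverage) while still leaving a conditional branch untaken.

```python
def describe(n: int) -> str:
    label = "number"              # always executed
    if n < 0:                     # two branches: True and False
        label = "negative " + label
    return label


def test_describe_negative():
    # Every line above runs, so line coverage is 100%,
    # but only the True branch of the `if` is taken:
    # branch coverage flags the n >= 0 path as missed.
    assert describe(-1) == "negative number"
```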
## Coverage Goals

Different module types have different coverage targets:
| Module Type | Target | Rationale |
|---|---|---|
| Routers | 80%+ | Critical user-facing paths |
| Models | 90%+ | Data integrity is essential |
| Auth | 95%+ | Security-critical code |
| Utilities | 70%+ | Helper functions |
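
These targets can be enforced in CI so that a drop fails the build; pytest-cov's `--cov-fail-under` checks the overall total, while per-module thresholds like those above generally need separate runs or extra tooling. A minimal sketch:

```sh
# Fail the run if overall coverage drops below 80%.
pytest --cov=src --cov-fail-under=80
```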
### Why Different Targets?

- High-risk modules (auth, payments) need near-complete coverage
- Business logic (routers) should have thorough path coverage
- Data models are typically straightforward and easy to test
- Utilities may have edge cases that are rarely used
## What Good Coverage Looks Like

Good test coverage is not just about hitting a percentage target:
- Critical paths covered - All happy paths and common error cases
- Edge cases tested - Boundary conditions, null values, empty collections
- Error handling verified - Exceptions are caught and handled correctly
- Integration points validated - Database, external APIs, authentication
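
As a sketch, tests for a hypothetical `create_user` endpoint might cover those four categories like this (the endpoint, fixtures, and status codes are illustrative, not from this project):

```python
def test_create_user_happy_path(client):
    # Critical path: valid input succeeds
    response = client.post("/users", json={"email": "a@example.com"})
    assert response.status_code == 201


def test_create_user_empty_email(client):
    # Edge case: empty/boundary input is rejected
    response = client.post("/users", json={"email": ""})
    assert response.status_code == 422


def test_create_user_duplicate(client):
    # Error handling: duplicates return a clear error
    client.post("/users", json={"email": "a@example.com"})
    response = client.post("/users", json={"email": "a@example.com"})
    assert response.status_code == 409


def test_create_user_requires_auth(anon_client):
    # Integration point: authentication is enforced
    response = anon_client.post("/users", json={"email": "a@example.com"})
    assert response.status_code == 401
```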
## What Coverage Doesn’t Tell You

Coverage metrics don’t measure:

- Test quality - A test that just calls code without assertions doesn’t validate behavior
- Assertion strength - `assert response` is weaker than `assert response.status_code == 200`
- Real-world scenarios - Edge cases that haven’t been thought of yet
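
The assertion-strength point in concrete terms: both tests below execute the same lines and yield identical coverage, yet only the second actually validates behavior (the `client` fixture is hypothetical):

```python
def test_health_weak(client):
    response = client.get("/health")
    assert response  # passes for any truthy object, even an error response


def test_health_strong(client):
    response = client.get("/health")
    assert response.status_code == 200          # verifies success
    assert response.json() == {"status": "ok"}  # verifies the payload
```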
## Best Practices for Reviewing Reports

- Start with failures - Fix broken tests before adding new ones
- Check critical paths - Ensure auth, payment, data integrity are well-covered
- Look for patterns - Missing coverage in similar modules suggests systematic gaps
- Review new code - Ensure new features have corresponding tests
- Balance speed and thoroughness - Not every line needs a test, focus on risk
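
For spotting patterns, the terminal summary is often faster to scan than the HTML view:

```sh
# List uncovered line numbers per file; recurring gaps across
# similar modules (e.g. every router) suggest systematic omissions.
coverage report --show-missing
```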
## Using Reports for Code Review

When reviewing pull requests:
- Check coverage delta - New code should maintain or improve coverage
- Review test quality - Look at the actual test assertions, not just counts
- Verify edge cases - Ensure tests cover failure modes, not just happy paths
- Consider maintainability - Tests should be readable and easy to update
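
For the coverage-delta check, one option (an assumption, not part of this project's documented tooling) is diff-cover, which measures coverage of only the lines a pull request changes; it needs an XML report, e.g. from `--cov-report=xml`:

```sh
# Compare changed lines against the base branch; fail if
# fewer than 80% of them are covered by tests.
diff-cover coverage.xml --compare-branch=origin/main --fail-under=80
```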