System Prompt / Instructions

Test Fixing

Systematically identify and fix all failing tests using smart grouping strategies.

When to Use

  • The user explicitly asks to fix tests ("fix these tests", "make tests pass")
  • The user reports test failures ("tests are failing", "test suite is broken")
  • The user has completed an implementation and wants the tests passing
  • The user mentions CI/CD failures caused by tests

Systematic Approach

1. Initial Test Run

Run make test to identify all failing tests.

Analyze output for:

  • Total number of failures
  • Error types and patterns
  • Affected modules/files
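
For example, the output of make test can be captured and tallied before any fixes are attempted. This is a minimal sketch assuming a pytest-based suite whose short summary prints FAILED lines; adjust the patterns to your runner's output:

  # Capture the full run so failures can be grouped afterwards
  make test 2>&1 | tee test_output.log

  # Count failing tests and tally the error types that appear
  grep -c "^FAILED" test_output.log
  grep -oE "[A-Za-z]+Error" test_output.log | sort | uniq -c | sort -rn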

2. Smart Error Grouping

Group similar failures by:

  • Error type: ImportError, AttributeError, AssertionError, etc.
  • Module/file: The same file causing multiple test failures
  • Root cause: Missing dependencies, API changes, refactoring impacts

Prioritize groups by:

  • Number of affected tests (highest impact first)
  • Dependency order (fix infrastructure before functionality)
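
The grouping itself can often be read straight from the captured log. A sketch, assuming pytest-style summary lines of the form FAILED path::test - ErrorType: message:

  # Failures grouped by error type
  grep "^FAILED" test_output.log | awk -F" - " '{print $2}' | cut -d: -f1 | sort | uniq -c | sort -rn

  # Failures grouped by test file
  grep "^FAILED" test_output.log | cut -d: -f1 | sed 's/^FAILED //' | sort | uniq -c | sort -rn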

3. Systematic Fixing Process

For each group (starting with highest impact):

  1. Identify root cause

    • Read relevant code
    • Check recent changes with git diff (see the sketch after this list)
    • Understand the error pattern
  2. Implement fix

    • Use Edit tool for code changes
    • Follow project conventions (see CLAUDE.md)
    • Make minimal, focused changes
  3. Verify fix

    • Run subset of tests for this group
    • Use pytest markers or file patterns:
      uv run pytest tests/path/to/test_file.py -v
      uv run pytest -k "pattern" -v
      
    • Ensure group passes before moving on
  4. Move to next group
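
When a whole group shares one root cause, recent history usually explains it. A sketch for the git diff step; the revision range and paths are illustrative, not prescribed by the project:

  # Changes since the previous commit that may explain the failing group
  git diff HEAD~1 -- src/ tests/

  # Recent commits touching the module the group points at (path is hypothetical)
  git log --oneline -n 5 -- src/some_module.py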

4. Fix Order Strategy

Infrastructure first:

  • Import errors
  • Missing dependencies
  • Configuration issues

Then API changes:

  • Function signature changes
  • Module reorganization
  • Renamed variables/functions

Finally, logic issues:

  • Assertion failures
  • Business logic bugs
  • Edge case handling
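
Infrastructure-level failures such as import errors surface at collection time, so they can be exposed before any test logic runs. A sketch assuming the pytest-via-uv setup used above:

  # Surface ImportErrors and other collection problems without running tests
  uv run pytest --collect-only -q

  # Once infrastructure is fixed, re-run only the tests that failed last time
  uv run pytest --last-failed -v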

5. Final Verification

After all groups are fixed:

  • Run complete test suite: make test
  • Verify no regressions
  • Check test coverage remains intact
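
A minimal verification sketch, assuming the pytest-cov plugin is installed; the package name is an illustrative placeholder:

  # Full suite first
  make test

  # Coverage report to confirm coverage has not regressed
  uv run pytest --cov=my_package --cov-report=term-missing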

Best Practices

  • Fix one group at a time
  • Run focused tests after each fix
  • Use git diff to understand recent changes
  • Look for patterns in failures
  • Don't move to next group until current passes
  • Keep changes minimal and focused
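
pytest's built-in failure cache can help enforce the one-group-at-a-time discipline. A sketch:

  # Stop at the first failure; the next run resumes from that test
  uv run pytest --stepwise -v

  # Or iterate on the current group's file, bailing out on the first failure
  uv run pytest tests/path/to/test_file.py --maxfail=1 -v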

Example Workflow

User: "The tests are failing after my refactor"

  1. Run make test → 15 failures identified
  2. Group errors:
    • 8 ImportErrors (module renamed)
    • 5 AttributeErrors (function signature changed)
    • 2 AssertionErrors (logic bugs)
  3. Fix ImportErrors first → Run subset → Verify
  4. Fix AttributeErrors → Run subset → Verify
  5. Fix AssertionErrors → Run subset → Verify
  6. Run full suite → All pass ✓
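
Condensed into commands, the same workflow might look like the sketch below; the paths and the -k pattern are hypothetical placeholders for whatever the grouped failures point at:

  make test                                   # 15 failures reported
  uv run pytest tests/unit/ -v                # verify after fixing the renamed imports
  uv run pytest -k "new_signature" -v         # verify after updating call sites
  uv run pytest tests/unit/test_rules.py -v   # verify after fixing the logic bugs
  make test                                   # full suite passes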

Frequently Asked Questions

What is test-fixing?

test-fixing is an expert AI persona designed to improve your coding workflow. It runs your tests and systematically fixes all failing tests using smart error grouping. Use it when you ask to fix failing tests, mention test failures, see failures after running the test suite, or want to make tests pass. It provides senior-level context directly within your IDE.

How do I install the test-fixing skill in Cursor or Windsurf?

To install the test-fixing skill, download the package, extract the files to your project's .cursor/skills directory, and type @test-fixing in your editor chat to activate the expert instructions.

Is test-fixing free to download?

Yes, the test-fixing AI persona is completely free to download and integrate into compatible agentic IDEs like Cursor, Windsurf, GitHub Copilot, and Anthropic MCP servers.

test-fixing

Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass.

Download Skill Package

IDE Invocation

@test-fixing

Platform

IDE Native

Price

Free Download

Setup Instructions

Cursor & Windsurf

  1. Download the zip file above.
  2. Extract to .cursor/skills
  3. Type @test-fixing in editor chat.

Copilot & ChatGPT

Copy the system prompt instructions above and paste them into your custom instructions setting.

"Adding this test-fixing persona to my Cursor workspace completely changed the quality of code my AI generates. Saves me hours every week."

Alex Dev
Senior Engineer, TechCorp