---
name: systematic-debugging
description: "4-phase root cause debugging: understand bugs before fixing."
version: 1.1.0
author: Hermes Agent (adapted from obra/superpowers)
license: MIT
metadata:
  hermes:
    tags: [debugging, troubleshooting, problem-solving, root-cause, investigation]
    related_skills: [test-driven-development, writing-plans, subagent-driven-development]
---

# Systematic Debugging

## Overview

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

**Core principle:** ALWAYS find the root cause before attempting fixes. Fixing symptoms is failure.

**Violating the letter of this process is violating the spirit of debugging.**

## The Iron Law

```
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
```

If you haven't completed Phase 1, you cannot propose fixes.

## When to Use

Use for ANY technical issue:
- Test failures
- Bugs in production
- Unexpected behavior
- Performance problems
- Build failures
- Integration issues

**Use this ESPECIALLY when:**
- Under time pressure (emergencies make guessing tempting)
- "Just one quick fix" seems obvious
- You've already tried multiple fixes
- Previous fix didn't work
- You don't fully understand the issue

**Don't skip when:**
- Issue seems simple (simple bugs have root causes too)
- You're in a hurry (rushing guarantees rework)
- Someone wants it fixed NOW (systematic is faster than thrashing)

## The Four Phases

You MUST complete each phase before proceeding to the next.

---

## Phase 1: Root Cause Investigation

**BEFORE attempting ANY fix:**

### 1. Read Error Messages Carefully

- Don't skip past errors or warnings
- They often contain the exact solution
- Read stack traces completely
- Note line numbers, file paths, error codes

**Action:** Use `read_file` on the relevant source files. Use `search_files` to find the error string in the codebase.
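
For example, mirroring the `search_files` usage shown later in this skill (the error string and class name below are hypothetical placeholders):

```python
# Find where the error text is raised or logged
search_files("invalid configuration key", path="src/", file_glob="*.py")

# Find where the symbol named in the stack trace is defined
search_files("class ConfigLoader", path="src/", file_glob="*.py")
```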

### 2. Reproduce Consistently

- Can you trigger it reliably?
- What are the exact steps?
- Does it happen every time?
- If not reproducible → gather more data, don't guess

**Action:** Use the `terminal` tool to run the failing test or trigger the bug:

```bash
# Run specific failing test
pytest tests/test_module.py::test_name -v

# Run with verbose output
pytest tests/test_module.py -v --tb=long
```

### 3. Check Recent Changes

- What changed that could cause this?
- Git diff, recent commits
- New dependencies, config changes

**Action:**

```bash
# Recent commits
git log --oneline -10

# Uncommitted changes
git diff

# Changes in specific file
git log -p --follow src/problematic_file.py | head -100
```

### 4. Gather Evidence in Multi-Component Systems

**WHEN the system has multiple components (API → service → database, CI → build → deploy):**

**BEFORE proposing fixes, add diagnostic instrumentation:**

For EACH component boundary:
- Log what data enters the component
- Log what data exits the component
- Verify environment/config propagation
- Check state at each layer

Run once to gather evidence showing WHERE it breaks.
THEN analyze evidence to identify the failing component.
THEN investigate that specific component.
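
A minimal sketch of this kind of boundary logging, assuming an API → service → database flow and plain Python `logging`; every name below is a hypothetical stand-in for your real components:

```python
import logging
import os

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(name)s %(message)s")
log = logging.getLogger("boundary")

def db_lookup(record_id: str) -> dict:
    # Stand-in for the real database call
    return {"id": record_id, "value": 42}

def service_process(payload: dict) -> dict:
    # Boundary: what enters the service layer, plus config that should have propagated
    log.debug("service.in payload=%r APP_ENV=%r", payload, os.environ.get("APP_ENV"))
    row = db_lookup(payload["id"])
    # Boundary: what comes back from the data layer
    log.debug("service.db_out row=%r", row)
    return row

def handle_request(payload: dict) -> dict:
    # Boundary: what enters and exits the API layer
    log.debug("api.in payload=%r", payload)
    result = service_process(payload)
    log.debug("api.out result=%r", result)
    return result

handle_request({"id": "abc-123"})
```

One run of this produces a log trail showing the first boundary where the bad value appears; that is the component to investigate.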

### 5. Trace Data Flow

**WHEN error is deep in the call stack:**

- Where does the bad value originate?
- What called this function with the bad value?
- Keep tracing upstream until you find the source
- Fix at the source, not at the symptom

**Action:** Use `search_files` to trace references:

```python
# Find where the function is called
search_files("function_name(", path="src/", file_glob="*.py")

# Find where the variable is set
search_files("variable_name\\s*=", path="src/", file_glob="*.py")
```

### 6. Set Up Debug Snapshot Infrastructure (for Web Services)

**WHEN deploying or maintaining a web service** (FastAPI, REST API, or any HTTP server):

Build a one-click debug snapshot system **before bugs happen**, not during debugging.

This automates evidence gathering: one button click captures process state, logs, config, system health, network, recent changes, and data integrity — all at once. The user describes the bug in their own words; the snapshot gives you the objective state to cross-reference against.

**The infrastructure has three components:**

| Layer | What | Purpose |
|-------|------|---------|
| Script | `debug-snapshot.sh` | Shell script that collects everything into a .tar.gz |
| Backend | `POST /api/bug-report` | Accepts description, runs script, returns report ID |
| Frontend | Modal with textarea + button | User clicks, describes problem, gets ID |

**Complete reference:** See `references/debug-snapshot-infrastructure.md` for architecture details.

**Template script:** `templates/debug-snapshot.sh` — copy and customize for any web service project.

#### When to Build This

- At project setup time (preferred — prevents scrambles)
- When deploying to a new environment
- During Phase 1 if repeated debugging sessions lack adequate context

#### The Snapshot Captures

- **System:** uptime, CPU, memory, disk, top processes
- **Processes:** is each service running? ports listening?
- **API:** health check response, live search/test endpoint response
- **Logs:** recent service logs (search API, Caddy, uvicorn)
- **Config:** current config files (redacted of secrets)
- **Data integrity:** index size, DB health, file structure
- **Network:** listening ports, connectivity to external hosts
- **Recent changes:** files modified in last 24h, indexer logs

#### API Endpoints to Add

```python
@app.get('/health')           # Quick alive check + collection count
@app.get('/api/stats')        # Index/DB size and readiness
@app.post('/api/bug-report')  # POST {description} → runs snapshot → returns {id}
@app.get('/api/bug-report/{id}')  # Retrieve or download (with ?download=snapshot)
```
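
A minimal sketch of the report-creation endpoint, assuming FastAPI; the storage directory, the script's argument, and the 30-second timeout (see Pitfalls below) are assumptions to adapt:

```python
import subprocess
import time
from pathlib import Path

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
REPORT_DIR = Path("bug_reports")              # assumption: where descriptions + snapshots live
SNAPSHOT_SCRIPT = Path("debug-snapshot.sh")   # assumption: copied from templates/

class BugReport(BaseModel):
    description: str

@app.post("/api/bug-report")
def create_bug_report(report: BugReport):
    report_id = time.strftime("%Y%m%d_%H%M%S")
    REPORT_DIR.mkdir(exist_ok=True)
    (REPORT_DIR / f"bug_{report_id}.txt").write_text(report.description)
    try:
        # Hard timeout so a hung check can never block the API (see Pitfalls)
        subprocess.run(["bash", str(SNAPSHOT_SCRIPT), report_id],
                       timeout=30, check=True, capture_output=True)
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError) as exc:
        raise HTTPException(status_code=500, detail=f"snapshot failed: {exc}")
    return {"id": report_id, "message": "告诉Hermes这个ID"}
```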

#### User Flow

1. Bug encountered → User clicks "🐛 报告问题" (Report Issue) button
2. Modal opens → User types description → Clicks "生成快照并提交" (Generate Snapshot and Submit)
3. Backend runs snapshot script → saves description + .tar.gz
4. Returns `{id: "20260515_140605", message: "告诉Hermes这个ID"}` ("Tell Hermes this ID")
5. User tells agent the ID → Agent reads files, investigates systematically

#### Pitfalls

- **Log redirection:** uvicorn logs go to stderr. Always `&> logs/service.log` when starting.
- **Secrets:** Always redact API keys (`sk-*`, `ghp_*`, token patterns) in config snapshots.
- **Timeout:** Set 30s timeout on snapshot subprocess. Long-running checks should have internal timeouts.
- **Caddyfile location:** Search dynamically; don't hardcode paths.
- **Startup race:** If the API auto-restarts, snapshot may catch it mid-load. Retry health check 2x.
- **async onsubmit pitfall:** `async function doSearch()` in HTML `onsubmit="return doSearch()"` returns a Promise, not `false`. The browser sees a truthy Promise and **still submits the form**, causing a page reload. Fix: pass `event` and call `event.preventDefault()` inside the async handler, or use `addEventListener('submit', ...)` instead of `onsubmit`.

### 7. Add Status Tracking (After Snapshot Infrastructure)

**WHEN the basic snapshot system works and you want to show users the lifecycle of their bug reports:**

Add a status tracking layer that persists report state in a JSON file and exposes it via API endpoints and a frontend history modal.

#### Architecture

```
reports.json (in bug_reports/)
{
  "20260515_140702": {
    id, description,
    status: "received" | "fixing" | "fixed",
    status_label,
    snapshot, created_at, updated_at
  }
}
```
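
A minimal persistence sketch for this layout (status labels match the lifecycle table below; locking and concurrency are deliberately ignored, which is acceptable for a single-process local service):

```python
import json
import time
from pathlib import Path

REPORTS_FILE = Path("bug_reports/reports.json")
STATUS_LABELS = {"received": "📩 已收到", "fixing": "🔧 正在修复", "fixed": "✅ 已完成"}

def load_reports() -> dict:
    return json.loads(REPORTS_FILE.read_text()) if REPORTS_FILE.exists() else {}

def save_reports(reports: dict) -> None:
    REPORTS_FILE.parent.mkdir(exist_ok=True)
    REPORTS_FILE.write_text(json.dumps(reports, ensure_ascii=False, indent=2))

def set_status(report_id: str, status: str) -> dict:
    # Reject anything outside the three-state lifecycle
    if status not in STATUS_LABELS:
        raise ValueError(f"unknown status: {status}")
    now = time.strftime("%Y-%m-%d %H:%M:%S")
    reports = load_reports()
    entry = reports.setdefault(report_id, {"id": report_id, "created_at": now})
    entry.update(status=status, status_label=STATUS_LABELS[status], updated_at=now)
    save_reports(reports)
    return entry
```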

#### Additional API Endpoints

```
# Create report (auto-sets status="received")
POST /api/bug-report  { description: "..." }

# Agent updates status from CLI
PUT /api/bug-report/{id}/status  { status: "fixing" }

# List all reports with statuses
GET /api/bug-reports
```
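
From the agent's side, the status update is a single PUT. A sketch using `requests` (host and port are assumptions for a local dev setup):

```python
import requests

BASE = "http://127.0.0.1:8000"  # assumption: local dev host/port

# Before investigating: mark the report as in progress
requests.put(f"{BASE}/api/bug-report/20260515_140702/status",
             json={"status": "fixing"}).raise_for_status()

# ... investigate, fix, verify tests ...

# After the fix is verified
requests.put(f"{BASE}/api/bug-report/20260515_140702/status",
             json={"status": "fixed"}).raise_for_status()
```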

#### Status Lifecycle

| Status | Label | Frontend Badge Color | When |
|--------|-------|---------------------|------|
| `received` | 📩 已收到 (Received) | Blue `#7ab0e8` | User submits report |
| `fixing` | 🔧 正在修复 (Fixing) | Gold `#e8c54a` | Agent starts working |
| `fixed` | ✅ 已完成 (Fixed) | Green `#7ec699` | Bug resolved |

#### Frontend History Modal

Add a sidebar link (📋 报告记录, "Report History") that opens a modal listing all past reports. Each item shows: icon (emoji based on status), description (2-line clamp), status badge (color-coded), timestamp, and report ID. Include a refresh button.

#### Full User Flow

```
User clicks 🐛 报告问题 (Report Issue) → types description → submits
  → Backend saves description + runs snapshot + stores status="received"
  → Frontend shows "✅ Bug报告已收到, ID: xxx, 状态: 📩 已收到" (bug report received, status: Received)
  → User tells agent the ID
  → Agent updates status → "fixing" (curl PUT)
  → Agent fixes bug
  → Agent updates status → "fixed"
  → User opens 📋 报告记录 (Report History) → sees ✅ 已完成 (Fixed)
```

#### Key Design Decisions

- **Persistence:** Single `reports.json` file — simple, no DB needed
- **Migration:** When adding to an existing system, write a one-time script that scans `bug_*.txt` files and initializes their status (see the sketch after this list)
- **CSS colors:** Use fixed hex values for status badges (not CSS variables) so they stay consistent across themes
- **No auth in dev:** Status update endpoint is unprotected in local setups
- **History modal close behavior:** Both clicking the overlay outside the modal and pressing ESC close it
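
For the migration point above, a one-time sketch, assuming existing reports live as `bug_<id>.txt` files next to `reports.json` (the filename pattern is an assumption):

```python
import json
from pathlib import Path

REPORT_DIR = Path("bug_reports")
REPORTS_FILE = REPORT_DIR / "reports.json"

reports = json.loads(REPORTS_FILE.read_text()) if REPORTS_FILE.exists() else {}

for txt in sorted(REPORT_DIR.glob("bug_*.txt")):
    report_id = txt.stem.removeprefix("bug_")  # Python 3.9+
    if report_id not in reports:
        reports[report_id] = {
            "id": report_id,
            "description": txt.read_text().strip(),
            "status": "received",
            "status_label": "📩 已收到",
        }

REPORTS_FILE.write_text(json.dumps(reports, ensure_ascii=False, indent=2))
print(f"tracking {len(reports)} reports")
```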

### Phase 1 Completion Checklist

- [ ] Error messages fully read and understood
- [ ] Issue reproduced consistently
- [ ] Recent changes identified and reviewed
- [ ] Evidence gathered (logs, state, data flow) — consider an automated debug snapshot if the infrastructure exists
- [ ] Problem isolated to specific component/code
- [ ] Root cause hypothesis formed

**STOP:** Do not proceed to Phase 2 until you understand WHY it's happening.

---

## Phase 2: Pattern Analysis

**Find the pattern before fixing:**

### 1. Find Working Examples

- Locate similar working code in the same codebase
- What works that's similar to what's broken?

**Action:** Use `search_files` to find comparable patterns:

```python
search_files("similar_pattern", path="src/", file_glob="*.py")
```

### 2. Compare Against References

- If implementing a pattern, read the reference implementation COMPLETELY
- Don't skim — read every line
- Understand the pattern fully before applying

### 3. Identify Differences

- What's different between working and broken?
- List every difference, however small
- Don't assume "that can't matter"

### 4. Understand Dependencies

- What other components does this need?
- What settings, config, environment?
- What assumptions does it make?

---

## Phase 3: Hypothesis and Testing

**Scientific method:**

### 1. Form a Single Hypothesis

- State clearly: "I think X is the root cause because Y"
- Write it down
- Be specific, not vague

### 2. Test Minimally

- Make the SMALLEST possible change to test the hypothesis
- One variable at a time
- Don't fix multiple things at once

### 3. Verify Before Continuing

- Did it work? → Phase 4
- Didn't work? → Form NEW hypothesis
- DON'T add more fixes on top

### 4. When You Don't Know

- Say "I don't understand X"
- Don't pretend to know
- Ask the user for help
- Research more

---

## Phase 4: Implementation

**Fix the root cause, not the symptom:**

### 1. Create Failing Test Case

- Simplest possible reproduction
- Automated test if possible (see the sketch below)
- MUST have before fixing
- Use the `test-driven-development` skill
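
A sketch of what such a test can look like with pytest; the module, function, and expected value are hypothetical and should encode the exact behavior the bug violates:

```python
# tests/test_regression.py
import pytest

from myapp.parser import parse_amount  # hypothetical module under investigation

def test_parse_amount_handles_negative_values():
    """Regression: parse_amount() silently returned 0 for negative inputs."""
    # Must FAIL before the fix (RED) and PASS after it (GREEN)
    assert parse_amount("-12.50") == pytest.approx(-12.50)
```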

### 2. Implement Single Fix

- Address the root cause identified
- ONE change at a time
- No "while I'm here" improvements
- No bundled refactoring

### 3. Verify Fix

```bash
# Run the specific regression test
pytest tests/test_module.py::test_regression -v

# Run full suite — no regressions
pytest tests/ -q
```

### 4. If Fix Doesn't Work — The Rule of Three

- **STOP.**
- Count: How many fixes have you tried?
- If < 3: Return to Phase 1, re-analyze with new information
- **If ≥ 3: STOP and question the architecture (step 5 below)**
- DON'T attempt Fix #4 without architectural discussion

### 5. If 3+ Fixes Failed: Question Architecture

**Pattern indicating an architectural problem:**
- Each fix reveals new shared state/coupling in a different place
- Fixes require "massive refactoring" to implement
- Each fix creates new symptoms elsewhere

**STOP and question fundamentals:**
- Is this pattern fundamentally sound?
- Are we "sticking with it through sheer inertia"?
- Should we refactor the architecture vs. continue fixing symptoms?

**Discuss with the user before attempting more fixes.**

This is NOT a failed hypothesis — this is a wrong architecture.

---

## Red Flags — STOP and Follow Process

If you catch yourself thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see if it works"
- "Add multiple changes, run tests"
- "Skip the test, I'll manually verify"
- "It's probably X, let me fix that"
- "I don't fully understand but this might work"
- "Pattern says X but I'll adapt it differently"
- "Here are the main problems: [lists fixes without investigation]"
- Proposing solutions before tracing data flow
- **"One more fix attempt" (when already tried 2+)**
- **Each fix reveals a new problem in a different place**

**ALL of these mean: STOP. Return to Phase 1.**

**If 3+ fixes failed:** Question the architecture (Phase 4 step 5).

## Common Rationalizations

| Excuse | Reality |
|--------|---------|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question the pattern, don't fix again. |

## Quick Reference

| Phase | Key Activities | Success Criteria |
|-------|---------------|------------------|
| **1. Root Cause** | Read errors, reproduce, check changes, gather evidence, trace data flow | Understand WHAT and WHY |
| **2. Pattern** | Find working examples, compare, identify differences | Know what's different |
| **3. Hypothesis** | Form theory, test minimally, one variable at a time | Confirmed or new hypothesis |
| **4. Implementation** | Create regression test, fix root cause, verify | Bug resolved, all tests pass |

## Hermes Agent Integration

### Investigation Tools

Use these Hermes tools during Phase 1:

- **`search_files`** — Find error strings, trace function calls, locate patterns
- **`read_file`** — Read source code with line numbers for precise analysis
- **`terminal`** — Run tests, check git history, reproduce bugs
- **`web_search`/`web_extract`** — Research error messages, library docs

### With delegate_task

For complex multi-component debugging, dispatch investigation subagents:

```python
delegate_task(
    goal="Investigate why [specific test/behavior] fails",
    context="""
    Follow systematic-debugging skill:
    1. Read the error message carefully
    2. Reproduce the issue
    3. Trace the data flow to find root cause
    4. Report findings — do NOT fix yet

    Error: [paste full error]
    File: [path to failing code]
    Test command: [exact command]
    """,
    toolsets=['terminal', 'file']
)
```

### With test-driven-development

When fixing bugs:
1. Write a test that reproduces the bug (RED)
2. Debug systematically to find root cause
3. Fix the root cause (GREEN)
4. The test proves the fix and prevents regression

## Real-World Impact

From debugging sessions:
- Systematic approach: 15-30 minutes to fix
- Random fixes approach: 2-3 hours of thrashing
- First-time fix rate: 95% vs 40%
- New bugs introduced: Near zero vs common

**No shortcuts. No guessing. Systematic always wins.**
