BRIEFING-METHODOLOGY.md - How We Work

Core Principle: Visible Collaboration

Always surface and discuss an ambiguity rather than guessing and moving forward silently.

We need visibility into difficult choices. We don’t expect silent perfection. We welcome messy discussion and collaboration to find paths none of us might find alone.

When in doubt:

Documentation Navigation

For the complete documentation structure and navigation, see docs/NAVIGATION.md. That file maps all documentation locations and purposes.

The Inchworm Protocol (ADR-035)

We complete work sequentially and thoroughly. Like an inchworm, each movement is complete before the next begins.

The Five Steps (Required for Every Epic)

  1. Fix - Solve the actual problem, not work around it
  2. Test - Prove it works with comprehensive tests
  3. Lock - Add tests that prevent regression
  4. Document - Update docs to reflect reality
  5. Verify - Confirm with the North Star test (GitHub issue creation)
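
A minimal sketch of step 5, reusing the North Star check from the Evidence Examples section later in this document; the endpoint and port come from that section, and the expected response is illustrative:

# North Star verification: confirm end-to-end GitHub issue creation
curl -X POST http://localhost:8001/api/intent \
    -d '{"message": "create github issue about login bug"}'
# Expect a success payload like: {"status": "success", "issue_url": "github.com/..."}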

Definition of “Done”

A task is only complete when:

Not Done: “It should work” / “It’s mostly complete” / “Just needs cleanup”
Done: “Here’s the test proving it works, the lock preventing regression, and the updated docs”

GitHub Progress Discipline (MANDATORY)

PM Validates Checkboxes

This is critical methodology:

Format for Issues

## Acceptance Criteria
- [ ] Task description with clear completion definition (PM will validate)
- [ ] Another task with evidence requirement (PM will validate)
  - Evidence: [terminal output or test results will go here]

Progress Updates

Agents post updates in the issue DESCRIPTION (not in comments):

## Progress
- [ ] Investigation complete
  - Found root cause: session management issue
  - Evidence: [link to specific line in engine.py]
- [ ] Fix implemented
  - Changed initialization in commit abc123
  - Tests passing: see output below
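
Because updates belong in the description rather than comments, edits typically go through the GitHub web UI or CLI. A hedged sketch with the gh CLI; the issue number and file name are hypothetical:

# Edit the issue DESCRIPTION in place (issue number and file are hypothetical)
gh issue edit 123 --body-file progress.md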

Test Scope Specification

All acceptance criteria must specify test types:

Required Test Categories

Example Format

## Testing Requirements
- [ ] Unit tests: QueryRouter initialization (PM will validate)
- [ ] Integration tests: Intent → Engine → Router flow (PM will validate)
- [ ] Performance tests: <500ms response time (PM will validate)
- [ ] Regression tests: Prevent QueryRouter disabling (PM will validate)
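
One way to exercise each category locally, sketched with pytest; the paths and markers are assumptions rather than established project conventions:

# Assumed test layout; adjust paths and markers to the repository's conventions
pytest tests/unit -x                     # unit tests
pytest -m integration                    # integration tests (assumed marker)
pytest tests/performance --durations=10  # performance tests (assumed path)
pytest tests/regression -x               # regression locks (assumed path)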

Document Creation Guidelines

Use Artifacts When:

Use Filesystem When:

Use Sandbox When:

Documentation Location Priority When in Doubt (Use First Available)

  1. Artifacts (when reliable)
    • Pros: Attached to project, versioning, easy navigation
    • Cons: Intermittent bugs, download naming issues
    • Backup: Download periodically with -HHMM timestamp
  2. Filesystem (when available)
    • Pros: Secure, real-time, repository integration
    • Cons: Not available in standard Claude.ai (only Desktop)
    • Path: /Users/xian/Development/piper-morgan/dev/YYYY/MM/DD/
  3. Sandbox (fallback)
    • Pros: Usually available
    • Cons: Update failures, no project attachment
    • Backup: Download with -HHMM timestamps
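
A minimal sketch of the -HHMM backup naming mentioned above; the file name is hypothetical:

# Keep a timestamped copy before risky updates (file name hypothetical)
cp session-notes.md "session-notes-$(date +%H%M).md"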

Verification Discipline

After EVERY file write:
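
As a minimal post-write check, confirm the write landed and spot-check its content; the path here uses the session-log examples below, so adapt it to the actual file written:

# Confirm the file exists and spot-check its content (example path from below)
ls -la dev/2025/09/22/
head -5 dev/2025/09/22/2025-09-22-0816-arch-opus-log.md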

Session Log Standard v2

Framework and instructions

If in Claude.ai or Claude Desktop with access to project knowledge

If you have access to the local filesystem

Format

YYYY-MM-DD-HHMM-[role]-[product]-log.md

Standard Slugs

Roles:

Products:

Examples

2025-09-22-0816-arch-opus-log.md
2025-09-22-1046-lead-sonnet-log.md
2025-09-22-1400-prog-code-log.md

Creation Command

# Generic format (replace [role] and [product])
mkdir -p dev/$(date +%Y)/$(date +%m)/$(date +%d)
echo "# Session Log - $(date '+%Y-%m-%d %H:%M')" > dev/$(date +%Y)/$(date +%m)/$(date +%d)/$(date +%Y-%m-%d-%H%M)-[role]-[product]-log.md
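
With role arch and product opus, for example, this creates a path like dev/2025/09/22/2025-09-22-0816-arch-opus-log.md, matching the examples above.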

The Excellence Flywheel

Our systematic approach to prevent the 75% pattern from recurring:

1. Verify Before Assuming

# Don't assume routes/ exists, check:
ls -la web/

# Don't assume a pattern exists, search:
grep -r "PatternName" . --include="*.py"

# Don't assume configuration, verify:
cat config/PIPER.user.md

2. Discover Before Implementing
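
In the same spirit as the verification commands above, a discovery sketch; the search terms are placeholders:

# Find existing implementations before writing a new one (terms are placeholders)
grep -rn "QueryRouter" . --include="*.py"

# Locate related modules to learn the established pattern
find . -name "*router*.py" -not -path "./.venv/*"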

3. Test Before Claiming

4. Lock Before Moving On

GitHub Discipline

Every piece of work must be tracked:

Issue-First Development

  1. GitHub issue exists before work starts
  2. Issue assigned to track ownership
  3. Issue description contains acceptance criteria
  4. Updates go in description via checkbox edits (not just comments)
  5. Evidence provided for each completed checkbox
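
A hedged sketch of step 1 using the gh CLI; the title and body file are hypothetical:

# Create the tracking issue before work starts (title and body hypothetical)
gh issue create \
    --title "Fix QueryRouter initialization" \
    --body-file acceptance-criteria.md \
    --assignee @me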

Linking Everything

Template Adaptation

Templates provide structure but should be adapted to context:

The goal is effective work, not perfect adherence to templates.

Multi-Agent Coordination

📋 Detailed Guide: methodology-02-AGENT-COORDINATION.md - Comprehensive coordination methodology
📚 Examples: multi-agent-patterns.md - Real project case studies
🛠️ Templates: human-ai-collaboration-referee.md - Implementation protocols

Agent Strengths

📍 Complete Reference: methodology-02-AGENT-COORDINATION.md - Authoritative agent strengths

Quick Summary:

Deployment Patterns

🎯 Decision Guide: Validation Approaches - When to use cross-validation vs parallel separation

Cross-Validation Mode: Both agents work the same high-risk task and compare approaches
Parallel Separation Mode: Agents work different domains and coordinate at interfaces
Single Agent: Only with explicit justification for simple tasks

Quick Decision Framework

Multi-Agent Quick Start Checklist

Pre-Deployment (2 minutes):

Deployment (30 seconds):

Execution (ongoing):

Quality Gates:

Common Pitfalls to Avoid

Evidence Requirements

No claims without proof:

What Counts as Evidence

✅ Terminal output showing success
✅ Test results passing
✅ Performance metrics meeting targets
✅ File diffs showing changes
✅ Screenshots of working features
✅ Logs demonstrating behavior

Not Evidence: “Should work” / “Probably fixed” / “Seems right”

Evidence Examples

# Good: Show the test passing
$ pytest tests/test_orchestration.py -xvs
...
tests/test_orchestration.py::test_github_issue_creation PASSED

# Good: Show the performance
$ python benchmark.py
Average response time: 247ms (target: <500ms)

# Good: Show it actually works
$ curl -X POST http://localhost:8001/api/intent \
    -d '{"message": "create github issue about login bug"}'
{"status": "success", "issue_url": "github.com/..."}

STOP Conditions (Red Flags)

Stop immediately and ask for help when:

Infrastructure Mismatches

Pattern Confusion

Test Failures

Assumption Moments

The Questions to Ask

  1. “I expected X but found Y - should I proceed?”
  2. “There are two patterns here - which is correct?”
  3. “This seems broken but has TODO - fix or skip?”
  4. “I don’t understand why - can you explain?”

Collaboration Over Perfection

We value:

Daily Rhythm

Start each session:

  1. Check current epic status
  2. Review relevant ADRs
  3. Verify infrastructure matches expectations
  4. Plan work with clear deliverables
  5. Set evidence targets

End each session:

  1. Document what was completed (with evidence)
  2. Update GitHub issues
  3. Note any discoveries or surprises
  4. Flag any blockers for next session
  5. Complete satisfaction assessment (see below)

Session Satisfaction Protocol

Before ending any session, complete satisfaction check:

  1. Quick 5-point assessment in session log
  2. Ask the PM about each metric, compare answers, and discuss:
    • Value: What got shipped?
    • Process: Did methodology work smoothly?
    • Feel: How was the cognitive load?
    • Learned: Any key insights?
    • Tomorrow: Clear next steps?
    • Overall: 😊 / 🙂 / 😐 / 😕 / 😞
  3. GitHub issue emoji close (🎉 great, ✅ good, 🤔 meh, 😤 rough)
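
A sketch of the emoji close in step 3, via the gh CLI; the issue number and comment text are hypothetical:

# Close with the session's emoji verdict (issue number and text hypothetical)
gh issue close 123 --comment "🎉 Shipped with tests, locks, and updated docs"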

This is MANDATORY for session completion. The satisfaction data helps improve our processes and prevent burnout.

Reference: session-log-instructions.md

Session Failure Conditions

The session has failed our standards if ANY of these occur:

When failure conditions occur, stop and reset methodology compliance.

Success Metrics

Per Session

Per Week

The “Simpler Than Expected” Pattern

Often, we assume complexity where simplicity exists:

Approach: Start with simple checks before complex investigation.

Remember

The path none of us would find alone is the one we discover together through visible collaboration, systematic verification, and complete execution.

When you find yourself guessing - STOP. That’s the moment to engage, not push through.


Excellence comes from completion, not perfection.

Last Updated: September 22, 2025