# Phase 1: Test Infrastructure Activation

Created: 2025-08-20 by Chief Architect deployment
```bash
# Quick validation (<5s)
./../../scripts/run_tests.sh smoke

# Development workflow (<30s)
./../../scripts/run_tests.sh fast

# Pre-merge comprehensive testing
./../../scripts/run_tests.sh full

# Weekly coverage analysis
./../../scripts/run_tests.sh coverage
```
Our test infrastructure provides four execution modes, each optimized for a different development phase:
| Mode | Duration | Purpose | Use Case |
|---|---|---|---|
| smoke | <5s | Quick validation | Pre-commit, rapid feedback |
| fast | <30s | Development workflow | Feature development, debugging |
| full | Variable | Comprehensive testing | Pre-merge, CI/CD |
| coverage | Variable | Analysis + reporting | Weekly reviews, quality audits |
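One common way to implement mode selection like this is with pytest markers. The sketch below is a hypothetical illustration; the marker names and wiring are assumptions, not taken from `run_tests.sh`:

```python
# conftest.py — hypothetical sketch of marker-based mode selection
import pytest


def pytest_configure(config):
    # Register markers so `pytest -m smoke` / `pytest -m fast` can select subsets
    config.addinivalue_line("markers", "smoke: critical-path tests (<5s total budget)")
    config.addinivalue_line("markers", "fast: unit-level tests (<30s total budget)")


# In a test module, tag each test with the tier it belongs to:
@pytest.mark.smoke
def test_core_imports():
    import services  # noqa: F401  # smoke check: the package loads at all
```

With markers in place, `run_tests.sh smoke` could reduce to something like `pytest -m smoke`.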
The test suite is organized as follows:

```
tests/
├── unit/                                      # Fast, mocked tests
│   ├── test_domain_models.py                  # Core business logic
│   ├── test_shared_types.py                   # Enums and constants
│   └── test_services.py                       # Service layer logic
├── integration/                               # Real database tests (port 5433)
│   ├── test_repositories.py                   # Data access layer
│   ├── test_orchestration.py                  # Multi-component workflows
│   └── test_api_endpoints.py                  # End-to-end API testing
└── orchestration/                             # Advanced coordination tests
    ├── test_multi_agent_coordinator.py        # 773 lines, 36 tests
    ├── test_excellence_flywheel_integration.py  # 943 lines
    └── test_excellence_flywheel_unittest.py   # Standalone tests
```
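Integration tests talk to the real database on port 5433 through the `async_transaction` fixture used in the examples further below. A minimal sketch of what such a fixture might look like, assuming asyncpg and placeholder credentials (both are assumptions, not taken from the repository):

```python
# tests/conftest.py — sketch only; driver choice and credentials are assumptions
import asyncpg
import pytest_asyncio


@pytest_asyncio.fixture
async def async_transaction():
    # Connect to the dedicated test database (port 5433, per the layout above)
    conn = await asyncpg.connect(
        host="localhost", port=5433, user="test", password="test", database="test"
    )
    tx = conn.transaction()
    await tx.start()
    try:
        yield conn  # the test runs entirely inside this transaction
    finally:
        await tx.rollback()  # roll back so tests never leak state
        await conn.close()
```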
Our testing methodology follows the Excellence Flywheel pattern: we apply our own methodology to test our methodology.
```bash
# Test the Excellence Flywheel using Excellence Flywheel principles
PYTHONPATH=. python -m pytest tests/orchestration/test_excellence_flywheel_unittest.py -v

# Test the Multi-Agent Coordinator using systematic verification
PYTHONPATH=. python -m pytest tests/orchestration/test_multi_agent_coordinator.py -v
```
A typical development session:

```bash
# 1. Start the development session
source venv/bin/activate
docker-compose up -d

# 2. Before making changes
./../../scripts/run_tests.sh smoke   # Baseline validation

# 3. After implementing a feature
./../../scripts/run_tests.sh fast    # Development validation

# 4. Before committing
./../../scripts/run_tests.sh smoke   # Pre-commit check (automated by git hook)

# 5. Before pushing
./../../scripts/run_tests.sh fast    # Pre-push check (automated by git hook)
```
The git hooks enforce this automatically: the pre-commit hook runs the smoke suite, and the pre-push hook runs the fast test suite before allowing the push. The enforcement script can also be invoked directly:

```bash
# Custom test enforcement (bypasses git hooks)
.git/hooks/test-enforcement smoke   # Quick validation
.git/hooks/test-enforcement fast    # Development validation
.git/hooks/test-enforcement full    # Comprehensive validation
```
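A hook that enforces this is straightforward. The sketch below is a hypothetical pre-push hook that delegates to the `test-enforcement` script; the real hooks in this repository may differ:

```python
#!/usr/bin/env python3
# .git/hooks/pre-push — hypothetical sketch; delegates to test-enforcement
import subprocess
import sys

# Run the fast suite; a non-zero exit code blocks the push
result = subprocess.run([".git/hooks/test-enforcement", "fast"])
sys.exit(result.returncode)
```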
**smoke**

Purpose: rapid feedback for the development workflow. Target: critical-path validation only.

```bash
./../../scripts/run_tests.sh smoke
```

What it tests:

Performance requirement: must complete in <5 seconds.
**fast**

Purpose: development workflow validation. Target: unit tests plus standalone orchestration tests.

```bash
./../../scripts/run_tests.sh fast
```

What it tests:

Performance requirement: must complete in <30 seconds.
**full**

Purpose: comprehensive pre-merge validation. Target: all tests, including integration tests.

```bash
./../../scripts/run_tests.sh full
```

What it tests:

Requirements: running database services (`docker-compose up -d postgres redis`); integration tests use the real database on port 5433.
**coverage**

Purpose: quality auditing and gap identification. Target: comprehensive coverage reporting.

```bash
./../../scripts/run_tests.sh coverage
```

What it generates: an HTML coverage report in `htmlcov/` (open `htmlcov/index.html` to browse it).

Unit tests are fast and fully mocked, for example:
```python
# tests/unit/test_example.py
import pytest

from services.domain.models import ExampleModel


class TestExampleModel:
    def test_creation_with_valid_data(self):
        """Test that model creation follows domain rules"""
        model = ExampleModel(name="test", value=42)
        assert model.name == "test"
        assert model.value == 42

    def test_validation_enforces_business_rules(self):
        """Test business rule enforcement"""
        with pytest.raises(ValueError, match="Invalid value"):
            ExampleModel(name="test", value=-1)
```
Integration tests exercise the real data layer through the `async_transaction` fixture:

```python
# tests/integration/test_example.py
import pytest

# ExampleRepository is illustrative; the import path is an assumption
from services.repositories import ExampleRepository


class TestExampleRepository:
    async def test_create_and_retrieve(self, async_transaction):
        """Test repository CRUD operations"""
        # async_transaction (a conftest fixture) supplies a DB connection
        # wrapped in a transaction that is rolled back after the test
        repo = ExampleRepository(async_transaction)
        example_data = {"name": "test"}

        created = await repo.create(example_data)
        retrieved = await repo.get_by_id(created.id)
        assert retrieved.name == example_data["name"]
```
Orchestration tests verify coordination logic with all dependencies mocked:

```python
# tests/orchestration/test_example.py
import pytest
from unittest.mock import Mock, AsyncMock

# ExampleCoordinator and test_task are illustrative placeholders


class TestExampleCoordinator:
    async def test_coordination_workflow(self):
        """Test coordination patterns without a database"""
        coordinator = ExampleCoordinator()
        # Mock dependencies (AsyncMock for anything the coordinator awaits)
        coordinator.agent_pool = AsyncMock()
        coordinator.task_decomposer = Mock()
        # Exercise the coordination logic
        result = await coordinator.coordinate_task(test_task)
        assert result.status == "completed"
        assert len(result.subtasks) > 0
```
| Test Category | Max Duration | Timeout | Purpose |
|---|---|---|---|
| Smoke Tests | 5 seconds | 3s per test | Rapid feedback |
| Fast Tests | 30 seconds | 10s per test | Development workflow |
| Integration Tests | No limit | 30s per test | Comprehensive validation |
| Individual Unit Tests | 1 second | 3s per test | Rapid iteration |
The test infrastructure automatically monitors these budgets and reports when they are exceeded.
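Per-test timeouts like those above are the kind of budget the pytest-timeout plugin enforces (the `--timeout=5` flag in the advanced usage section suggests it is in use here, though that is an inference). A short sketch of the marker form:

```python
# Sketch: enforcing a per-test budget with pytest-timeout
import time

import pytest


@pytest.mark.timeout(3)  # fail the test if it exceeds the 3s smoke/unit budget
def test_within_budget():
    time.sleep(0.01)  # stand-in for real work
    assert True
```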
Troubleshooting common failures:

**Database services not running**

```bash
# Solution 1: Start with docker-compose
docker-compose up -d postgres redis

# Solution 2: Check service status
docker-compose ps

# Solution 3: Restart services
docker-compose restart postgres redis
```
**Missing or inactive virtual environment**

```bash
# Solution: Activate the virtual environment
source venv/bin/activate

# Or recreate it if missing
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
**Import errors from an unset PYTHONPATH**

```bash
# Solution: Always set PYTHONPATH explicitly
export PYTHONPATH=.
PYTHONPATH=. python -m pytest tests/unit/ -v

# Our test script handles this automatically
./../../scripts/run_tests.sh smoke
```
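An alternative that avoids exporting `PYTHONPATH` at all is to pin the repository root in `tests/conftest.py`; a sketch, assuming the repo root sits one level above `tests/`:

```python
# tests/conftest.py — sketch of a PYTHONPATH-free alternative
import sys
from pathlib import Path

# Prepend the repo root so `import services` resolves regardless of cwd
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))
```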
**Module import failures**

```bash
# Solution 1: Check the module structure
find services/ -name "*.py" | head -10

# Solution 2: Verify imports work manually
python -c "import services.domain.models; print('✅ Import successful')"

# Solution 3: Check for circular imports
python -c "import services; print('✅ Services package loads')"
```
```bash
# Validate the test infrastructure itself
./../../scripts/run_tests.sh smoke     # Should complete in <5s
echo $?                                # Should return 0 for success

# Check git hook integration
.git/hooks/test-enforcement smoke      # Manual hook testing

# Validate coverage infrastructure
./../../scripts/run_tests.sh coverage  # Generate full coverage report
open htmlcov/index.html                # View coverage in browser
```
Before implementing new functionality, establish a passing baseline. All development progress must be backed by passing tests: `./../../scripts/run_tests.sh fast` must pass. Link test infrastructure work to GitHub issues:
```bash
# Run a specific test file
PYTHONPATH=. python -m pytest tests/unit/test_domain_models.py -v

# Run a specific test method
PYTHONPATH=. python -m pytest tests/unit/test_domain_models.py::TestModel::test_creation -v

# Run with a custom timeout
PYTHONPATH=. python -m pytest tests/unit/ --timeout=5 -v

# Run with coverage for a specific module
PYTHONPATH=. python -m pytest tests/unit/ --cov=services.domain --cov-report=term-missing -v

# Install pytest-xdist for parallel execution
pip install pytest-xdist

# Run tests in parallel (for large test suites)
PYTHONPATH=. python -m pytest tests/unit/ -n auto -v
```
```yaml
# .github/workflows/test.yml example
- name: Run smoke tests
  run: ./../../scripts/run_tests.sh smoke

- name: Run fast test suite
  run: ./../../scripts/run_tests.sh fast

- name: Run full test suite
  run: ./../../scripts/run_tests.sh full

- name: Generate coverage report
  run: ./../../scripts/run_tests.sh coverage
```
Note: This test infrastructure was created as part of Phase 1: Test Infrastructure Activation by the Chief Architect on 2025-08-20. For questions or improvements, refer to the Excellence Flywheel methodology and create GitHub issues with evidence-based proposals.