MUX Experience Tests Documentation

Purpose

Verify that features can be described using experience language (“Piper noticed…”) rather than database language (“Query returned…”).

Experience tests are the canary in the coal mine - if we cannot describe a feature using the grammar, we’ve lost consciousness in the implementation.

The Object Model Grammar

“Entities experience Moments in Places.”

This sentence is the foundation. Every feature must be expressible using these three substrate protocols plus the supporting vocabulary (Lenses, Situations, and the lifecycle, ownership, and metadata terms covered below).
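One way to read the grammar is as a set of structural interfaces. The sketch below is purely illustrative, not the real MUX protocol definitions — names and members are assumptions modeled on the descriptions in this document.

```python
# Hypothetical sketch of the three substrate protocols.
# The real MUX interfaces may differ; members here mirror the
# vocabulary in this document (agency, significance, atmosphere).
from typing import Protocol, runtime_checkable


@runtime_checkable
class Entity(Protocol):
    """An actor with agency who experiences moments."""
    name: str

    def notice(self, moment: "Moment") -> None: ...


@runtime_checkable
class Moment(Protocol):
    """A bounded, significant occurrence with beginning/middle/end."""
    significance: str


@runtime_checkable
class Place(Protocol):
    """A context with atmosphere where moments unfold."""
    atmosphere: str
```

Structural typing keeps the grammar open: any object carrying the right experiential members satisfies the protocol without inheriting from it.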

Morning Standup (Reference Implementation)

Grammar Expression

| Element   | Value                             | Description                                               |
|-----------|-----------------------------------|-----------------------------------------------------------|
| Entities  | User, Piper                       | Actors with agency who experience the moment              |
| Moment    | Standup conversation              | Bounded, significant occurrence with beginning/middle/end |
| Places    | Calendar, GitHub                  | Contexts with atmosphere (meetings, code review)          |
| Lenses    | Temporal, Priority, Collaborative | Perceptual dimensions for viewing information             |
| Situation | “Preparing for the day”           | Frame with dramatic tension                               |
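As a concrete illustration, the table above could be assembled with simple dataclasses. These are hypothetical stand-ins, not the real MUX types.

```python
# Illustrative only: hypothetical dataclasses mapping the Morning
# Standup onto the grammar. Real MUX types are not shown here.
from dataclasses import dataclass


@dataclass
class Entity:
    name: str


@dataclass
class Place:
    name: str
    atmosphere: str


@dataclass
class Moment:
    description: str
    entities: list
    places: list
    situation: str


standup = Moment(
    description="Standup conversation",
    entities=[Entity("User"), Entity("Piper")],
    places=[Place("Calendar", "meetings"), Place("GitHub", "code review")],
    situation="Preparing for the day",
)
```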

Experience Language (PASS)

These expressions demonstrate consciousness-preserving language:

Database Language (FAIL)

These expressions indicate flattening to data manipulation:

Anti-Flattening Test Categories

1. Entity Tests (4 tests)

Verify that entities preserve identity, not just IDs.

Pass criteria:

Fail indicators:
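The actual criteria live in `test_anti_flattening.py`; the following is a hypothetical sketch of what an entity test might assert, with invented names throughout.

```python
# Hypothetical entity test sketch: identity survives, the raw ID does
# not leak into how the entity describes itself. Names are illustrative.
class Entity:
    def __init__(self, entity_id: str, name: str, agency: str):
        self.id = entity_id
        self.name = name          # identity, not just an ID
        self.agency = agency      # what this actor can do

    def describe(self) -> str:
        return f"{self.name} ({self.agency})"


def test_entity_preserves_identity():
    piper = Entity("ent-001", "Piper", "notices and remembers")
    # PASS: description uses experience language, not the raw ID
    assert "Piper" in piper.describe()
    assert piper.id not in piper.describe()
```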

2. Moment Tests (3 tests)

Verify that moments preserve significance, not just timestamps.

Pass criteria:

Fail indicators:
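A hypothetical sketch of a moment test, assuming (as this document's grammar suggests) that a moment carries significance and a beginning/middle/end arc alongside its timestamp:

```python
# Hypothetical moment test sketch. The real MUX Moment API may differ;
# the point is that a timestamp alone would fail this test.
from datetime import datetime, timezone


class Moment:
    def __init__(self, significance: str, beginning: str, middle: str, end: str):
        self.timestamp = datetime.now(timezone.utc)  # when it occurred
        self.significance = significance             # why it mattered
        self.arc = (beginning, middle, end)          # bounded occurrence


def test_moment_preserves_significance():
    standup = Moment(
        significance="Preparing for the day",
        beginning="greetings", middle="updates", end="priorities set",
    )
    assert standup.significance          # more than a timestamp
    assert len(standup.arc) == 3         # beginning/middle/end intact
```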

3. Place Tests (3 tests)

Verify that places preserve atmosphere, not just configuration.

Pass criteria:

Fail indicators:
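A hypothetical sketch of a place test, assuming a place carries atmosphere distinct from its configuration:

```python
# Hypothetical place test sketch: atmosphere is first-class, not derived
# from a config blob. Names and fields are illustrative only.
class Place:
    def __init__(self, name: str, atmosphere: str, config: dict):
        self.name = name
        self.atmosphere = atmosphere   # how the context feels
        self.config = config           # settings alone don't define it


def test_place_preserves_atmosphere():
    github = Place("GitHub", "code review", {"api_url": "https://api.github.com"})
    assert github.atmosphere                       # more than configuration
    assert github.atmosphere != str(github.config)
```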

4. Lifecycle Tests (5 tests)

Verify that lifecycle includes composting, not just deletion.

Pass criteria:

Fail indicators:
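A hypothetical sketch of a lifecycle test, following the composting idea from the verification checklist (item 9): retiring a moment extracts wisdom rather than simply deleting a row. All names are illustrative.

```python
# Hypothetical lifecycle sketch: compost, don't delete. A composted
# moment changes state and yields what it taught us.
class Moment:
    def __init__(self, description: str, lesson: str):
        self.description = description
        self.lesson = lesson
        self.state = "active"

    def compost(self) -> str:
        """Retire the moment but keep what it taught us."""
        self.state = "composted"
        return self.lesson           # wisdom survives the transition


def test_compost_extracts_wisdom():
    m = Moment("Standup conversation", "afternoons fill up fast")
    wisdom = m.compost()
    assert m.state == "composted"    # not deleted, transformed
    assert wisdom                    # something was extracted
```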

5. Metadata Tests (5 tests)

Verify that metadata captures knowledge ABOUT knowledge.

Pass criteria:

Fail indicators:

6. Ownership Tests (4 tests)

Verify that ownership describes relationships, not just foreign keys.

Pass criteria:

Fail indicators:
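A hypothetical sketch of an ownership test, using the created/observed/derived categories from the verification checklist (item 7). The enum and helper are assumptions, not the real MUX API.

```python
# Hypothetical ownership sketch: ownership is a relationship category,
# not a bare foreign key. Categories follow the verification checklist.
from enum import Enum


class Ownership(Enum):
    CREATED = "created"      # the entity authored this
    OBSERVED = "observed"    # the entity witnessed this
    DERIVED = "derived"      # the entity inferred this


def describe_ownership(entity_name: str, kind: Ownership, what: str) -> str:
    return f"{entity_name} {kind.value} {what}"


def test_ownership_is_a_relationship():
    sentence = describe_ownership("Piper", Ownership.OBSERVED, "the standup")
    assert "observed" in sentence    # relationship, not entity_id=42
```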

6.5. Ownership Metaphor Tests

Verify that ownership uses consciousness metaphors correctly.

Pass criteria:

Fail indicators:

7. Design Principle Tests (4 tests)

Verify that CXO design principles are honored.

Pass criteria:

Fail indicators:

8. Grammar Integration Tests (3 tests)

Verify the complete grammar works together.

Pass criteria:

Fail indicators:

9. Consciousness Vocabulary Tests (3 tests)

Verify consciousness vocabulary throughout implementation.

Pass criteria:

Fail indicators:

10. Transition Tests (3 tests)

Verify state transitions preserve meaning.

Pass criteria:

Fail indicators:

11. Cathedral Test (3 tests)

The ultimate integration test - is this a cathedral or a shed?

Pass criteria:

Fail indicators:

Verification Checklist

For each major feature, verify:

  1. CAN describe using “Piper noticed/remembers/anticipates…”
  2. CANNOT accurately describe using “Query/Database/Record…”
  3. Grammar elements (Entity/Moment/Place) are identifiable
  4. Lenses can be applied for different perspectives
  5. Situation frame captures dramatic tension
  6. Lifecycle states make sense for the feature
  7. Ownership category is clear (who created/observed/derived this?)
  8. Metadata dimensions are applicable
  9. Composting extracts wisdom if applicable
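Checklist items 1 and 2 lend themselves to a quick mechanical screen. The helper below is an illustrative sketch, not part of the MUX suite, and its word lists are assumptions drawn from the examples in this document (not exhaustive).

```python
# Hypothetical screen for checklist items 1-2: a description should use
# experience language and avoid database language. Word lists are
# illustrative, not exhaustive.
import re

EXPERIENCE_WORDS = ("noticed", "remembers", "anticipates")
DATABASE_WORDS = ("query", "database", "record")


def uses_experience_language(description: str) -> bool:
    text = description.lower()
    has_experience = any(w in text for w in EXPERIENCE_WORDS)
    has_database = any(re.search(rf"\b{w}\b", text) for w in DATABASE_WORDS)
    return has_experience and not has_database
```

For example, `uses_experience_language("Piper noticed three meetings clustered together")` returns `True`, while `uses_experience_language("Query returned 3 rows")` returns `False`.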

Test Locations

| Test Suite      | Location                                        | Test Count |
|-----------------|-------------------------------------------------|------------|
| Anti-Flattening | tests/unit/services/mux/test_anti_flattening.py | 40         |
| Protocols       | tests/unit/services/mux/test_protocols.py       | varies     |
| Ownership       | tests/unit/services/mux/test_ownership.py       | 25         |
| Lifecycle       | tests/unit/services/mux/test_lifecycle.py       | 69         |
| Metadata        | tests/unit/services/mux/test_metadata.py        | 67         |
| Lenses          | tests/unit/services/mux/lenses/                 | 101        |

Running Experience Tests

```bash
# Run anti-flattening tests specifically
pytest tests/unit/services/mux/test_anti_flattening.py -v

# Run all MUX tests
pytest tests/unit/services/mux/ -v

# Check test count
pytest tests/unit/services/mux/ --collect-only -q | tail -3
```

Part of MUX-399-PZ: Verification & Anti-Flattening Tests
Created: 2026-01-19