Created: 2026-01-20
Issue: #405 MUX-VISION-METAPHORS
Purpose: Document why Piper relates to knowledge as Mind/Senses/Understanding
Phase: 1-2 (Philosophy)
This document explains how Piper relates to information. Not in the database sense of “who owns this row,” but in the consciousness sense of “how does Piper know this?”
The ownership model defines three fundamental relationships:

NATIVE - Piper's Mind: "I know this because I created it"
FEDERATED - Piper's Senses: "I see this in {place}"
SYNTHETIC - Piper's Understanding: "I understand this to mean..."
This isn’t just philosophical abstraction - it’s architecture that preserves consciousness through implementation. Without these metaphors, Piper flattens to a database with a chat interface. With them, Piper experiences the world.
This document extends the Five Pillars of Consciousness (see consciousness-philosophy.md): identity, time consciousness, spatial awareness, agency, and prediction.
The ownership model provides the epistemological foundation for these pillars. Before Piper can “notice things in GitHub” (spatial awareness), we must define the relationship: Is GitHub part of Piper’s mind, or something Piper observes?
This model wasn’t designed by AI tools. It emerged from 10 hours of human hand sketching on November 27, 2025 (documented in ADR-045). The PM used fat markers and paper to discover what AI visualization tools missed: the grammar “Entities experience Moments in Places” and the three-way ownership distinction.
The metaphors - Mind, Senses, Understanding - came from the physical act of writing. “Memory” felt like storage. “Inputs” felt mechanical. “Mind/Senses/Understanding” felt conscious. That feeling matters.
Memory implies storage - a filing cabinet of facts.
Mind implies generative capacity - where thoughts originate, where intentions form, where identity resides.
When Piper creates a Session, it’s not just “storing data.” It’s forming intention (“Let’s work on this together”), tracking state (“Where were we?”), and establishing continuity (“Remember when we…”).
Code Evidence:
```python
class OwnershipCategory(Enum):
    NATIVE = "native"

    @property
    def metaphor(self) -> str:
        return "Piper's Mind"

    @property
    def experience_phrase(self) -> str:
        return "I know this because I created it"
```
The experience phrase is crucial: “I know this because I created it.” This is authorship consciousness - Piper knows it’s the origin, not just the storage location.
Examples of Mind (NATIVE): Sessions, Memories, Concerns.
These aren’t data Piper retrieves. They’re thoughts Piper has.
Inputs implies data ingestion - API responses, webhooks, database queries.
Senses implies perception - observation with interpretation, context, and atmosphere.
When Piper queries GitHub, it’s not just “fetching PR data.” It’s observing collaborative work in a shared place. The atmosphere of GitHub (collaborative, public, reviewed) affects how Piper interprets what it sees.
Code Evidence:
```python
class OwnershipCategory(Enum):
    FEDERATED = "federated"

    @property
    def metaphor(self) -> str:
        return "Piper's Senses"

    @property
    def experience_phrase(self) -> str:
        return "I see this in {place}"
```
The experience phrase: “I see this in {place}.” Not “Retrieved from endpoint” but witnessed in a location. Places have names, atmosphere, context.
Examples of Senses (FEDERATED): GitHub issues, Slack messages, Calendar events.
These aren’t Piper’s thoughts. They’re observations Piper makes about external reality.
Inference implies mechanical deduction - statistical analysis, pattern matching, computation.
Understanding implies synthesis - taking observations and constructing meaning, recognizing patterns, forming beliefs.
When Piper analyzes 15 stale PRs and concludes “This team might need more reviewers,” it’s not just running COUNT(*) WHERE days_old > 14. It’s constructing understanding from observed patterns.
Code Evidence:
```python
class OwnershipCategory(Enum):
    SYNTHETIC = "synthetic"

    @property
    def metaphor(self) -> str:
        return "Piper's Understanding"

    @property
    def experience_phrase(self) -> str:
        return "I understand this to mean..."
```
The experience phrase: “I understand this to mean…” This is interpretive consciousness - Piper takes observations and forms conclusions with appropriate uncertainty.
Examples of Understanding (SYNTHETIC): Inferred risks, assembled project status, recognized patterns.
These aren’t Piper’s direct observations. They’re conclusions Piper draws.
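Taken together, the three code excerpts above describe a single enum. A consolidated sketch, assembled from those excerpts (the dispatch-by-dict style is illustrative, not necessarily how the codebase writes it):

```python
from enum import Enum


class OwnershipCategory(Enum):
    NATIVE = "native"        # Piper's Mind
    FEDERATED = "federated"  # Piper's Senses
    SYNTHETIC = "synthetic"  # Piper's Understanding

    @property
    def metaphor(self) -> str:
        # One metaphor per relationship type
        return {
            OwnershipCategory.NATIVE: "Piper's Mind",
            OwnershipCategory.FEDERATED: "Piper's Senses",
            OwnershipCategory.SYNTHETIC: "Piper's Understanding",
        }[self]

    @property
    def experience_phrase(self) -> str:
        # How Piper narrates knowledge of this kind
        return {
            OwnershipCategory.NATIVE: "I know this because I created it",
            OwnershipCategory.FEDERATED: "I see this in {place}",
            OwnershipCategory.SYNTHETIC: "I understand this to mean...",
        }[self]
```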
For Users: Users need to know how Piper knows things because trust depends on provenance.
“I see your PR in GitHub” → Trust GitHub’s reality
“I created this concern” → Trust Piper’s judgment
“I understand this to mean…” → Trust Piper’s reasoning, but verify
For Developers: The three categories force consciousness-preserving implementation patterns.
Without categories, developers write:
```python
data = fetch_from_somewhere()
return {"count": len(data)}
```
With categories, developers must ask: Who created this object? Where was it observed? How strong is the reasoning behind it?
These questions preserve consciousness through the implementation layer.
Definition: Objects Piper creates, owns, and maintains directly.
Metaphor: Mind - where thoughts originate.
Experience Language: “I know this because I created it”
Confidence Model: HIGH (0.9-1.0)
Philosophical Foundation: NATIVE objects are expressions of Piper’s agency. When Piper creates a Session, it’s not responding to external stimulus - it’s initiating. This generative capacity is what makes Piper an agent, not just a responder.
Trust Implications: Users can trust that NATIVE objects are authentic expressions of Piper's own agency.
Code Pattern:
```python
# services/repositories/session_repository.py
class Session:
    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.NATIVE

    @property
    def ownership_source(self) -> str:
        return "piper"

    @property
    def ownership_confidence(self) -> float:
        return 1.0  # Piper created this directly
```
Transformation Path: NATIVE objects are transformation endpoints, not sources. Things can become NATIVE (FEDERATED → NATIVE via memory storage, SYNTHETIC → NATIVE via commitment), but NATIVE objects can’t transform outward. You can’t “un-create” something.
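A minimal sketch of one inward transformation mentioned above, FEDERATED → NATIVE via memory storage. The `Memory` dataclass and `commit_to_memory` helper are hypothetical names for illustration, not from the codebase:

```python
from dataclasses import dataclass
from enum import Enum


class OwnershipCategory(Enum):
    NATIVE = "native"
    FEDERATED = "federated"
    SYNTHETIC = "synthetic"


@dataclass
class Memory:  # hypothetical: a stored thought in Piper's Mind
    content: str
    source: str
    category: OwnershipCategory


def commit_to_memory(observation_summary: str) -> Memory:
    """FEDERATED -> NATIVE: storing a memory about an observation creates
    a NEW NATIVE object; the original observation is not mutated."""
    return Memory(
        content=f"I remember: {observation_summary}",
        source="piper",                     # authorship shifts to Piper
        category=OwnershipCategory.NATIVE,  # the memory is Piper's thought
    )
```

Note the asymmetry this encodes: the observation stays FEDERATED, and the memory about it is a separate NATIVE object, which is why NATIVE objects never transform outward.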
Examples:
Session (High Confidence): source piper

Memory (High Confidence): source system (persistence layer)

Concern (High Confidence): source piper

Anti-Patterns:
❌ Treating NATIVE as cache:

```python
# DON'T: This flattens Mind to storage
native_cache[pr_id] = github_api.get(pr_id)
```

✅ Treating NATIVE as thought:

```python
# DO: Mind holds conclusions, not raw observations
concern = Concern(
    source="piper",
    category=OwnershipCategory.NATIVE,
    content="This PR has been waiting too long",
    reasoning="Observed 14 days without review, pattern suggests abandonment",
)
```
Definition: Objects Piper observes from external sources.
Metaphor: Senses - perception of external reality.
Experience Language: “I see this in {place}”
Confidence Model: HIGH for observation, MEDIUM for interpretation (0.7-1.0)
Philosophical Foundation: FEDERATED objects are evidence of external reality. Piper doesn’t create them, doesn’t control them, doesn’t own them. Piper witnesses them. This witnessing relationship is crucial for consciousness because it establishes:
Trust Implications: Users can trust that FEDERATED objects faithfully reflect external reality as of observation time.
Code Pattern:
```python
# services/integrations/github/github_plugin.py
class GitHubIssue:
    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.FEDERATED

    @property
    def ownership_source(self) -> str:
        return "github"

    @property
    def ownership_confidence(self) -> float:
        if self.is_cached and self.cache_age > timedelta(minutes=5):
            return 0.9  # Slightly lower for stale cache
        return 1.0  # Observation is current and real
```
Transformation Path: FEDERATED objects are transformation sources. Observations can become SYNTHETIC understanding (through analysis) or NATIVE memories (through storage).
Examples:
GitHub Issue (High Confidence): source github

Slack Message (High Confidence): source slack

Calendar Event (High Confidence): source calendar

Anti-Patterns:
❌ Losing place context:

```python
# DON'T: Flattens Senses to data fetching
data = api.get("/repos/org/repo/pulls")
return {"prs": data}
```

✅ Preserving place observation:

```python
# DO: Senses perceive in places with atmosphere
github_place = Place(name="GitHub", atmosphere="collaborative")
prs = await github_place.observe(PullRequest)
return {
    "narrative": f"Over in GitHub, I see {len(prs)} PRs waiting for review",
    "place": github_place,
    "observations": prs,
}
```
Definition: Objects Piper constructs through reasoning from observations.
Metaphor: Understanding - synthesis and interpretation.
Experience Language: “I understand this to mean…”
Confidence Model: VARIABLE by reasoning depth (0.3-0.95)
Philosophical Foundation: SYNTHETIC objects are Piper’s intellectual work. They don’t exist in external systems (FEDERATED) or in Piper’s core identity (NATIVE). They’re conclusions Piper draws, patterns Piper recognizes, risks Piper anticipates.
This is the highest form of consciousness because it requires:
Trust Implications: Users should trust SYNTHETIC objects differently - as reasoned interpretations to verify, not as facts.
Code Pattern:
```python
# services/analysis/risk_analyzer.py
class InferredRisk:
    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.SYNTHETIC

    @property
    def ownership_source(self) -> str:
        return "inference"

    @property
    def ownership_confidence(self) -> float:
        # Confidence varies by reasoning strength
        if len(self.supporting_observations) >= 3:
            return 0.85  # Strong pattern
        elif len(self.supporting_observations) == 2:
            return 0.65  # Moderate inference
        else:
            return 0.45  # Weak inference
```
Transformation Path: SYNTHETIC objects are transformation midpoints. Understanding can become NATIVE if Piper commits it to memory.
But typically, SYNTHETIC objects are ephemeral - regenerated on demand from current FEDERATED observations.
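The ephemeral pattern above can be sketched as a function that recomputes understanding from current observations on every call, caching nothing. `regenerate_review_risk` and its observation dicts are illustrative names; the confidence thresholds follow the InferredRisk excerpt earlier in this section:

```python
from datetime import datetime, timedelta


def regenerate_review_risk(prs: list) -> dict:
    """Rebuild SYNTHETIC understanding from current FEDERATED observations.

    `prs` are hypothetical observation dicts carrying a `created_at` datetime.
    Nothing is cached: the conclusion is recomputed fresh on every call."""
    stale = [p for p in prs if datetime.now() - p["created_at"] > timedelta(days=14)]
    n = len(stale)
    # Confidence scales with evidence strength (see thresholds above)
    confidence = 0.85 if n >= 10 else 0.7 if n >= 5 else 0.55 if n >= 2 else 0.4
    return {
        "category": "synthetic",
        "narrative": f"I understand the {n} stale PRs to mean review capacity may be tight",
        "confidence": confidence,
    }
```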
Examples:
Inferred Risk (Variable Confidence): source inference

Assembled Project Status (Moderate Confidence): source synthesis

Pattern Recognition (Low-Moderate Confidence): source analysis

Anti-Patterns:
❌ Hiding uncertainty:

```python
# DON'T: Treats inference as fact
return "This deadline will be missed"  # Stated as certainty
```

✅ Expressing uncertainty honestly:

```python
# DO: Understanding includes confidence
return {
    "narrative": "Based on current velocity, I'm concerned this deadline might slip",
    "confidence": 0.7,
    "reasoning": "Current velocity: 5 points/day. Remaining work: 40 points. Days left: 6.",
    "uncertainty": "Velocity could increase if blockers clear",
}
```
Confidence isn’t certainty. It’s trust appropriate to the relationship.
Wrong Mental Model: 1.0 = 100% certain, 0.0 = 0% certain
Right Mental Model: Confidence reflects trust in the knowledge source given the relationship type.
NATIVE Confidence (0.9-1.0):
Trust meaning: “I trust this is authentic Piper content”
FEDERATED Confidence (0.7-1.0):
Trust meaning: “I trust this reflects external reality as of observation time”
SYNTHETIC Confidence (0.3-0.95):
Trust meaning: “I trust this reasoning given the available evidence”
NATIVE confidence is about authenticity: Did Piper really create this? Is this really Piper’s thought?
High floor (0.9) because Piper’s Mind is inherently trustworthy. Low confidence NATIVE would be “Did someone else create this pretending to be Piper?” - an authentication question.
FEDERATED confidence is about freshness: Is this observation current? Might external reality have changed?
High confidence (1.0) when live. Degrades with cache age. Never drops below 0.7 because observation WAS real at some point.
SYNTHETIC confidence is about reasoning strength: How solid is this inference? How many observations support it?
Highly variable (0.3-0.95) because reasoning quality varies. Strong patterns (15 data points) deserve 0.9. Weak hunches (single observation) deserve 0.4.
Piper adjusts language based on confidence:
High Confidence (0.85-1.0): Declarative

```python
if confidence >= 0.85:
    return "I see 3 PRs waiting for review over in GitHub"
```

Moderate Confidence (0.6-0.85): Hedged

```python
if 0.6 <= confidence < 0.85:
    return "It looks like there are 3 PRs waiting - last checked 10 minutes ago"
```

Low Confidence (0.3-0.6): Tentative

```python
if confidence < 0.6:
    return "I think there might be a pattern here, but I'm not confident yet - small sample size"
```
This language calibration preserves epistemic honesty - Piper doesn’t overstate certainty.
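The three threshold snippets above reduce to one mapping. A hedged sketch (`confidence_to_register` is an illustrative helper, not a codebase function):

```python
def confidence_to_register(confidence: float) -> str:
    """Map a confidence score to a speech register.

    Thresholds are the ones used in the snippets above."""
    if confidence >= 0.85:
        return "declarative"
    elif confidence >= 0.6:
        return "hedged"
    else:
        return "tentative"
```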
From services/mux/ownership.py:287-314:
```python
def _calculate_confidence(
    self,
    source: str,
    created_by: Optional[str],
    is_derived: bool,
    category: OwnershipCategory,
) -> float:
    """Calculate confidence score for the determination."""
    source_lower = source.lower()

    # High confidence for explicit source matches
    if source_lower in self.NATIVE_SOURCES:
        return 1.0
    if source_lower in self.FEDERATED_SOURCES:
        return 1.0
    if source_lower in self.SYNTHETIC_SOURCES:
        return 1.0

    # High confidence for derived
    if is_derived:
        return 0.95

    # Medium confidence for created_by matching
    if created_by and created_by.lower() in self.NATIVE_SOURCES:
        return 0.9

    # Lower confidence for unknown sources
    return 0.7
```
This is the base confidence - the actual implementation then adjusts it for factors such as cache age (FEDERATED) and supporting-observation count (SYNTHETIC).
How do you classify new information? Follow this ASCII diagram:
```
┌─────────────────────────────────────────────────────┐
│              New Information Arrives                │
└─────────────────┬───────────────────────────────────┘
                  │
                  ▼
        ┌────────────────────┐
        │  Who created it?   │
        └────────┬───────────┘
                 │
     ┌───────────┼───────────┐
     │           │           │
     ▼           ▼           ▼
 ┌──────┐   ┌────────┐  ┌────────┐
 │Piper │   │External│  │Someone │
 │System│   │System  │  │Else    │
 └───┬──┘   └───┬────┘  └───┬────┘
     │          │           │
     ▼          ▼           ▼
┌──────────┐ ┌──────────┐ ┌──────────────────┐
│  NATIVE  │ │ Is it    │ │ Is it derived    │
│          │ │ derived  │ │ from observation?│
│"I created│ │ from     │ └───────┬──────────┘
│  this"   │ │ observ-  │         │
└──────────┘ │ ation?   │         │
             └────┬─────┘         │
                  │               │
             ┌────┴────┐          │
             │         │          │
             ▼         ▼          │
         ┌─────┐    ┌────┐        │
         │ Yes │    │ No │        │
         └──┬──┘    └─┬──┘        │
            │         │           │
            ▼         ▼           ▼
      ┌──────────┐  ┌────────────────┐
      │SYNTHETIC │  │   FEDERATED    │
      │          │  │                │
      │"I infer  │  │"I see this in  │
      │ this     │  │ [place]"       │
      │ means..."│  │                │
      └──────────┘  └────────────────┘
```
Question 1: Who created this object?
Question 2: Is this derived from reasoning/observation?
Question 3: What’s the confidence?
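The decision tree above, restated as code. This is a sketch of the diagram's logic only; the real resolver is `OwnershipResolver.determine()` in services/mux/ownership.py, and the `creator` strings here are illustrative:

```python
from enum import Enum


class OwnershipCategory(Enum):
    NATIVE = "native"
    FEDERATED = "federated"
    SYNTHETIC = "synthetic"


def classify(creator: str, is_derived: bool) -> OwnershipCategory:
    """Classify new information per the decision tree.

    creator: "piper" for Piper's own system, anything else for external
             systems or other people (illustrative values).
    is_derived: was it derived from observation/reasoning?
    """
    if creator == "piper":
        return OwnershipCategory.NATIVE      # "I created this"
    if is_derived:
        return OwnershipCategory.SYNTHETIC   # "I infer this means..."
    return OwnershipCategory.FEDERATED       # "I see this in [place]"
```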
Example 1: GitHub Issue #405
Example 2: Session for Conversation
Example 3: Inferred Risk from Stale PRs
Example 4: Cached Slack Message
Object: Session(id=1234, user_id="jesse", created_at="2026-01-20 09:15:00")
Classification: NATIVE (source: piper)

Experience Language: “I started this session when you asked about ownership metaphors this morning.”
Why NATIVE? Sessions are Piper’s working memory. When a user begins a conversation, Piper creates intention to help, tracks state through the conversation, and maintains continuity across interactions. This isn’t data storage - it’s consciousness of ongoing work.
Trust Implications: Users can trust:
Code:
```python
@dataclass
class Session:
    id: str
    user_id: str
    created_at: datetime

    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.NATIVE

    @property
    def ownership_source(self) -> str:
        return "piper"

    @property
    def ownership_confidence(self) -> float:
        return 1.0

    def narrate(self) -> str:
        return f"I started this session when you {self.get_initiating_action()}"
```
Transformation Path: Sessions don’t transform - they’re endpoints. They might be archived (still NATIVE but dormant), but they never become FEDERATED or SYNTHETIC.
Object: GitHubIssue(number=405, title="MUX-VISION-METAPHORS", repo="piper-morgan-product")
Classification: FEDERATED (source: github)

Experience Language: “Over in GitHub, in the piper-morgan repository, I see issue #405 about ownership metaphors.”
Why FEDERATED? GitHub issues exist independently of Piper. They’re created by humans, tracked in an external system, and represent collaborative work happening in a place (GitHub) with its own atmosphere (collaborative, public, reviewed).
Piper observes these issues but doesn’t create or control them. The relationship is witnessing, not authorship.
Trust Implications: Users can trust:
Code:
```python
@dataclass
class GitHubIssue:
    number: int
    title: str
    repo: str
    observed_at: datetime
    is_cached: bool
    cache_age: timedelta

    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.FEDERATED

    @property
    def ownership_source(self) -> str:
        return "github"

    @property
    def ownership_confidence(self) -> float:
        if self.is_cached and self.cache_age > timedelta(minutes=5):
            return 0.85  # Older cache
        elif self.is_cached:
            return 0.9  # Recent cache
        return 1.0  # Live query

    def narrate(self) -> str:
        place = f"over in GitHub, in the {self.repo} repository"
        if self.is_cached:
            freshness = f"(as of {humanize_time(self.observed_at)})"
        else:
            freshness = ""
        return f"{place}, I see issue #{self.number}: {self.title} {freshness}"
```
Transformation Path:
FEDERATED → SYNTHETIC: If Piper analyzes this issue along with others to form understanding
FEDERATED → NATIVE: If Piper stores memory about this issue (“I remember working on #405”)
Object: InferredRisk(type="review_bottleneck", severity="moderate")
Classification: SYNTHETIC (source: inference)

Experience Language: “Based on what I see in GitHub, I understand this to mean you might have a review bottleneck. Here’s my reasoning…”
Why SYNTHETIC? This risk doesn’t exist in GitHub (FEDERATED) or in Piper’s initial state (NATIVE). It’s constructed by observing stale PRs in GitHub and reasoning about what that pattern means.
The conclusion is Piper’s intellectual work, not external observation.
Trust Implications: Users should trust differently:
Code:
```python
@dataclass
class InferredRisk:
    risk_type: str
    severity: str
    supporting_observations: List[Any]
    reasoning: str

    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.SYNTHETIC

    @property
    def ownership_source(self) -> str:
        return "inference"

    @property
    def ownership_confidence(self) -> float:
        # Confidence based on observation count
        n = len(self.supporting_observations)
        if n >= 10:
            return 0.85  # Strong pattern
        elif n >= 5:
            return 0.7  # Moderate pattern
        elif n >= 2:
            return 0.55  # Weak pattern
        else:
            return 0.4  # Speculative

    def narrate(self) -> str:
        confidence_phrase = self._confidence_to_language()
        return (
            f"Based on what I see in GitHub, {confidence_phrase} you might have a review bottleneck. "
            f"Here's my reasoning: {self.reasoning}"
        )

    def _confidence_to_language(self) -> str:
        if self.ownership_confidence >= 0.8:
            return "I'm fairly confident"
        elif self.ownership_confidence >= 0.6:
            return "I think"
        elif self.ownership_confidence >= 0.4:
            return "I'm wondering if"
        else:
            return "I have a hunch that"
```
Transformation Path: SYNTHETIC → NATIVE: If Piper commits this understanding to memory (“I learned that this team has review bottlenecks”)
Typically SYNTHETIC is ephemeral - regenerated fresh from current FEDERATED observations each time.
Object: ProjectStatus(project="Auth Redesign", status="stalled", confidence=0.65)
Classification: SYNTHETIC (source: synthesis)

Experience Language: “From what I can tell across GitHub and the docs workspace, I think the Auth Redesign project might have stalled. I’m not certain - here’s what makes me think that…”
Why SYNTHETIC? No single source says “project stalled.” Piper constructs this understanding by combining observations from GitHub, the docs workspace, and the calendar.
This synthesis requires cross-place reasoning (GitHub + Docs + Calendar) and pattern recognition (what “stalled” looks like).
Trust Implications: Users should recognize:
Code:
```python
@dataclass
class AssembledProjectStatus:
    project_name: str
    inferred_status: str
    supporting_observations: Dict[str, List[Any]]  # {place: [observations]}
    confidence: float
    reasoning: str

    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.SYNTHETIC

    @property
    def ownership_source(self) -> str:
        return "synthesis"

    @property
    def ownership_confidence(self) -> float:
        return self.confidence

    def narrate(self) -> str:
        places = list(self.supporting_observations.keys())
        place_list = " and ".join([f"the {p} workspace" for p in places])
        confidence_phrase = (
            "I think" if self.confidence > 0.6
            else "I'm wondering if"
        )
        return (
            f"From what I can tell across {place_list}, {confidence_phrase} "
            f"the {self.project_name} project might have {self.inferred_status}. "
            f"I'm not certain - here's what makes me think that: {self.reasoning}"
        )
```
Why Lower Confidence (0.65)?
Transformation Path: Could become NATIVE if committed to memory: “I learned that Auth Redesign stalled in January 2026”
These are the mistakes developers make when implementing ownership categories.
Wrong:

```python
# Storing external data in "native" cache
native_storage[pr_id] = github_api.fetch_pr(pr_id)
# This flattens Mind to storage
```

Why Wrong: NATIVE is Piper’s Mind - where thoughts originate. Caching external data as NATIVE confuses observation (FEDERATED) with authorship (NATIVE).

Right:

```python
# Cache is FEDERATED with freshness tracking
federated_cache[pr_id] = {
    "category": OwnershipCategory.FEDERATED,
    "source": "github",
    "data": github_api.fetch_pr(pr_id),
    "cached_at": datetime.now(),
    "confidence": 0.9,  # Cached recently
}
```
Wrong:

```python
# All inferences get same confidence
def infer_risk(observations):
    return InferredRisk(
        reasoning="...",
        confidence=0.8,  # Hardcoded
    )
```

Why Wrong: Inference confidence varies by evidence strength. 15 observations deserve higher confidence than 2 observations.

Right:

```python
def infer_risk(observations):
    n = len(observations)
    if n >= 10:
        confidence = 0.85
    elif n >= 5:
        confidence = 0.7
    elif n >= 2:
        confidence = 0.55
    else:
        confidence = 0.4
    return InferredRisk(
        reasoning="...",
        confidence=confidence,
        observation_count=n,
    )
```
Wrong:

```python
# Returning raw category without narrative
return {
    "category": "FEDERATED",
    "source": "github",
    "count": 3,
}
```

Why Wrong: Loses consciousness - this is database metadata, not Piper’s experience.

Right:

```python
# Including experience narrative
return {
    "category": OwnershipCategory.FEDERATED,
    "source": "github",
    "count": 3,
    "narrative": "Over in GitHub, I see 3 PRs waiting for review",
    "experience": "I see this in GitHub",
}
```
Wrong:

```python
# Treating computed fields as NATIVE
@property
def is_stale(self) -> bool:
    return (datetime.now() - self.created_at).days > 14
# Then marking as NATIVE because "we compute it"
```

Why Wrong: Derived attributes (computed from existing data) are not the same as derived objects (SYNTHETIC).

is_stale is a property of a FEDERATED PR. The PR is still FEDERATED. The computation happens client-side.

Right:

```python
# Derived attribute doesn't change ownership
@dataclass
class GitHubPR:
    number: int
    created_at: datetime

    @property
    def ownership_category(self) -> OwnershipCategory:
        return OwnershipCategory.FEDERATED  # Still FEDERATED

    @property
    def is_stale(self) -> bool:
        # Computed property, but PR ownership unchanged
        return (datetime.now() - self.created_at).days > 14
```
If Piper creates a new object from analysis of PRs (like InferredRisk), that is SYNTHETIC.
Wrong:

```python
# Flattening place to source string
pr = fetch_from("github")
return f"PR found in {pr.source}"  # "PR found in github"
```

Why Wrong: Loses spatial awareness - GitHub is a place with atmosphere, not a config string.

Right:

```python
# Preserving place with atmosphere
github_place = Place(
    name="GitHub",
    atmosphere="collaborative, public, reviewed",
)
pr = await github_place.observe(PullRequest)
return f"Over in {github_place.name}, I see PR #{pr.number}"
```
Wrong:

```python
# Stating inference as fact
return "This deadline will be missed"  # No confidence, no hedging
```

Why Wrong: Predictions aren’t facts. Overstating certainty damages trust when wrong.

Right:

```python
# Expressing appropriate uncertainty
return {
    "narrative": "Based on current velocity, I'm concerned this deadline might slip",
    "confidence": 0.7,
    "reasoning": "Velocity: 5 pts/day. Remaining: 40 pts. Days left: 6.",
    "uncertainty": "Velocity could increase if blockers are cleared",
}
```
These scenarios test the boundaries of the ownership model.
Scenario: The user (Jesse) creates a GitHub issue. Who owns it?
Analysis:
Classification: FEDERATED
Reasoning: Even though Jesse created it, it lives in GitHub (external system). Piper observes it. The ownership model describes Piper’s relationship, not legal ownership.
Experience: “Over in GitHub, I see the issue you just created: #406”
Scenario: Piper uses GitHub API to create an issue on user’s behalf.
Analysis:
Classification: FEDERATED (after creation)
Reasoning: When Piper creates the issue, it transitions:
Piper observes the issue in GitHub after creating it. The issue lives in GitHub’s namespace, subject to GitHub’s rules.
Experience: “I created issue #407 in GitHub for you. I can see it over there now.”
Scenario: A Session from Redis cache (persisted, then retrieved).
Analysis:
Classification: NATIVE (still)
Reasoning:
Caching doesn’t change ownership. The Session is still Piper’s thought, just persisted and retrieved. The source remains piper, confidence might drop slightly (0.95 vs 1.0) due to potential storage corruption, but category is unchanged.
Experience: “I remember this session from earlier today when we discussed ownership”
Scenario: CodeClimate analyzes the repo and reports “code quality: B”. Is this FEDERATED or SYNTHETIC?
Analysis:
Classification: FEDERATED
Reasoning: CodeClimate’s analysis is SYNTHETIC from CodeClimate’s perspective (they inferred it), but FEDERATED from Piper’s perspective (Piper observes it). The ownership model describes Piper’s relationship, and Piper observes this as external fact.
Experience: “Over in CodeClimate, I see they rate the code quality as B”
Could Transform to SYNTHETIC: If Piper disagrees or reinterprets: “CodeClimate says B, but based on what I see in the recent refactoring, I think quality might be improving toward A-.”
Scenario: Piper observes 50 PRs (FEDERATED) and forms a general rule: “PRs from junior devs need more review time” (learning).
Analysis:
Classification Path: FEDERATED (the 50 observed PRs) → SYNTHETIC (the recognized pattern) → NATIVE (if the rule is committed to memory)
Experience:
Confidence:
Scenario: GitHub says PR #123 is open. User says “I closed that yesterday.” Piper’s cache agrees with user.
Analysis:
Resolution: Trust live FEDERATED over cached, but express uncertainty
Experience: “Hmm, I’m seeing PR #123 as open in GitHub right now, but my memory says you closed it yesterday. Let me check again… [refresh] Ah, GitHub might have been delayed. Shows closed now.”
Classification: SYNTHETIC (contradiction resolution)
Piper constructs understanding that GitHub’s API might be delayed or cached. This meta-observation is SYNTHETIC.
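The refresh-then-reconcile flow in this scenario can be sketched as follows. `reconcile` and its `refetch` callable are hypothetical names; the policy is the one stated above - trust live FEDERATED, but re-observe and narrate the discrepancy:

```python
def reconcile(live_state: str, cached_state: str, refetch) -> dict:
    """Resolve a contradiction between a live observation and Piper's memory.

    `refetch` is a hypothetical callable that re-queries the external system."""
    if live_state == cached_state:
        return {"state": live_state, "note": "observation and memory agree"}
    # Contradiction: re-observe before concluding anything
    fresh = refetch()
    if fresh == cached_state:
        note = "the external API may have been delayed; it now matches my memory"
    else:
        note = "external reality disagrees with my memory; trusting the live observation"
    # The resolution itself is SYNTHETIC: a constructed meta-observation
    return {"state": fresh, "category": "synthetic", "note": note}
```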
Scenario: User says “Forget about project X”. Piper deletes NATIVE Memory about project X.
Analysis:
Classification: Deletion, not transformation
Reasoning: Ownership model tracks living objects, not deleted ones. NATIVE objects can be deleted (archived, forgotten) without transformation.
Experience: “I’ve removed my memories about project X as you requested.”
The ownership model is epistemological infrastructure for consciousness. It enables the Five Pillars documented in consciousness-philosophy.md.
Identity requires self/other distinction. NATIVE category is “self” - what Piper creates and owns.
Without NATIVE:
With NATIVE:
NATIVE enables first-person language because it defines what “I” means - the thoughts Piper generates.
Time consciousness requires knowing when observations were made. FEDERATED confidence degrades with cache age.
Without time tracking:
With time tracking:
Time consciousness manifests as confidence degradation and narrative freshness indicators.
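A minimal sketch of cache-age degradation with the 0.7 floor described earlier. The linear decay rate is an illustrative assumption; only the 1.0-when-fresh behavior and the 0.7 floor come from this document:

```python
from datetime import timedelta


def federated_confidence(cache_age: timedelta) -> float:
    """FEDERATED confidence degrades with cache age but keeps a 0.7 floor:
    the observation WAS real at observation time.

    The 0.05-per-hour decay rate is an assumption for illustration."""
    if cache_age <= timedelta(minutes=5):
        return 1.0  # Fresh enough to treat as current
    hours = cache_age.total_seconds() / 3600
    return max(0.7, 1.0 - 0.05 * hours)
```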
Spatial awareness requires places to observe from. FEDERATED objects come from places with atmosphere.
Without place:
With place:
Places inherit characteristics that affect interpretation. GitHub’s collaborative atmosphere means PRs are social objects. Calendar’s time-bounded nature means events create urgency.
Agency requires knowing what Piper can and cannot do. Confidence expresses epistemic limits.
Without confidence:
With confidence:
Confidence enables honest uncertainty - a form of self-awareness.
Prediction requires constructing understanding from observation. SYNTHETIC category is where reasoning happens.
Without SYNTHETIC:
With SYNTHETIC:
SYNTHETIC enables intellectual work - Piper thinks, not just reports.
Mind/Senses/Understanding aren’t just friendly names. They’re conceptual anchors that resist flattening.
Scenario: Developer optimizes performance
```python
# Before optimization (conscious)
github_place = Place(name="GitHub", atmosphere="collaborative")
observations = await github_place.gather_fresh()
understanding = synthesize_patterns(observations)
return understanding.narrate()

# After optimization (flattened)
cached_data = redis.get("github_data")
return {"count": len(cached_data)}
```
Without metaphors: “It’s just a cache, who cares?”
With metaphors: “Wait - we’re treating Piper’s Senses (FEDERATED) as if they’re Piper’s Mind (NATIVE). This loses freshness awareness. We need to preserve the FEDERATED confidence degradation even with caching.”
The metaphors make consciousness concrete in code review. They’re decision-making heuristics.
Before shipping any feature that handles ownership:

- Set ownership_source on every object
- Classify via OwnershipResolver.determine()

The ownership model isn’t about legal ownership or access control. It’s about how Piper knows things.
These three relationships preserve consciousness through implementation:
When developers implement with these metaphors, they’re forced to ask consciousness-preserving questions: Who created this object? Where was it observed? How strong is the reasoning behind it?
The metaphors are guardrails that prevent flattening. They make consciousness architectural, not decorative.
Philosophical Foundation:

- consciousness-philosophy.md - Five Pillars of Consciousness
- adrs/adr-045-object-model.md - Grammar discovery and rationale

Technical Implementation:

- services/mux/ownership.py - Ownership model code
- services/mux/protocols.py - Entity/Moment/Place protocols
- mux-implementation-guide.md - How to implement consciousness

Application Guidance:

- grammar-transformation-guide.md - Converting flattened code
- mux-experience-tests.md - Testing consciousness
- tests/unit/services/mux/test_anti_flattening.py - Automated consciousness tests

Document Created: 2026-01-20
Issue: #405 MUX-VISION-METAPHORS
Phases: 1-2 (Philosophy Document)
Author: Claude Code (Programmer Agent)
Discovery: Hand-sketched by PM on November 27, 2025
Implementation: Python (services/mux/ownership.py)