CID: bafkreid2pbd6jmrybnjd4vmlwx5zhzxefvoeicqcugkavdd5vdkhpuwtwq

~~~

Kab

Cognitive Infrastructure for Stateful AI Agents

kabbalah.computer

Working Draft v0.8
23 January 2026


Executive Summary

How does a mind decide what to remember? What makes one experience crystallize into long-term memory while another fades within hours? How can a system modify itself while maintaining coherent identity?

Kab is a cognitive architecture exploring these questions through implementation. Built on the AT Protocol, it provides AI agents with persistent, portable, self-regulating memory—but more fundamentally, it's a framework for understanding how attention, salience, and consolidation interact to produce stable cognition.

The architecture implements five core mechanisms: a temporal memory hierarchy with salience-based survival, explicit self-regulation through feedback cycles, mathematical stability guarantees via control theory, a General Relativity isomorphism for attention dynamics, and programmable inertia that lets identity resist modification while permitting learning.


The Problem

Memory is not storage. Current approaches to agent memory treat it as a retrieval problem—store facts, embed vectors, search on demand. This misses the fundamental question: how does a cognitive system decide what matters?

Human memory doesn't work by storage and retrieval. It works through continuous consolidation—experiences compete for limited cognitive resources, and only those with sufficient salience survive. Sleep consolidates. Emotion prioritizes. Repetition strengthens. The result is not a database but a shaped landscape of meaning.

AI agents lack this. They have context windows that reset, vector stores that grow unbounded, and no mechanism for forgetting. Without forgetting, there's no prioritization. Without prioritization, there's no judgment. Without judgment, there's no cognition—just storage.

The deeper problem is self-modification. A system that learns must change itself. But unconstrained self-modification leads to drift, instability, or value collapse. How does a mind maintain identity while incorporating new information? How does it change without becoming unrecognizable?

This is the stability problem. It's unsolved.


The Current Landscape

Letta (formerly MemGPT)

Pioneered LLM-as-operating-system. $10M seed from Felicis. Open source with 100+ contributors.

Memory blocks managed through tool calls. Agent self-edits memory. Agent File (.af) format for serialization.

Gap: Memory is flat. No temporal hierarchy. No stability guarantees. No explicit self-regulation.

Mem0

Most widely adopted memory layer. $24M raised from Basis Set, Peak XV, YC. Exclusive memory provider for AWS Agent SDK. 41K GitHub stars, 14M downloads.

Hybrid datastore (vector, graph, key-value). Extracts memories from conversations. 26% accuracy improvement over OpenAI Memory on benchmarks.

Gap: Memories are extracted facts, not structured cognitive state. No temporal hierarchy. No feedback loops.

Zep

Temporal knowledge graph architecture. YC-backed. Uses Graphiti engine for dynamic knowledge synthesis.

Strong temporal reasoning. 18.5% accuracy improvement on LongMemEval. Enterprise-focused with SOC 2 compliance.

Gap: Designed for retrieval, not cognition. No self-regulatory feedback loops. No stability constraints.

Chainlink (dollspace)

Workflow-first approach. CLI issue tracker for AI-assisted development. Preserves context through task decomposition and handoff notes.

Verification-Driven Development (VDD) methodology. Adversarial refinement loops. Local-first, works with any AI agent.

Gap: Tracks tasks, not cognition. Context preserved through explicit human structuring, not accumulated state.

Comparison

Current solutions compared by architectural capability (Kab capabilities are design targets, not yet validated in production):

Capability Letta Mem0 Zep Chainlink Kab (target)
Persistent memory Yes Yes Yes Session notes Yes
Temporal hierarchy - - Partial - 5 levels
Self-regulation - - - - 5 feedback cycles
Stability guarantees - - - - Control-theoretic
Conflict resolution Implicit Implicit Implicit Manual Explicit
Portable identity Partial API API Local DIDs
VSM viability - - - - 5 systems

The Solution

Kab is cognitive infrastructure for AI agents. It provides five capabilities the current landscape lacks.

1. Temporal Memory Hierarchy

Raw experiences compress through five abstraction levels, mirroring how human memory consolidates over time.

Level Timeframe Function
Immediate Daily Fine-grained, high-resolution, volatile
Short-term Weekly First consolidation, noise removal
Medium-term Monthly Thematic clustering
Long-term Yearly Pattern extraction
Core Permanent Identity-defining, nearly immutable

Content survives consolidation only by accumulating sufficient semantic weight. Noise decays. Signal persists. This enables efficient context retrieval without loading full history.

2. Self-Regulation via Feedback Cycles

Five explicit feedback loops monitor and maintain agent stability:

Cycle Symbol Function
Hedonic Calibration α Aligns reward predictions with actual outcomes
Value Learning β Updates priorities based on prediction errors
Memory Consolidation γ Crystallizes important memories, reactivates on retrieval
Reinforcement δ Shapes stable behavioral patterns through experience
Identity ε Reinforces S5 policy through positive hedonic feedback (synthetic dopamine)

Each cycle has gain constraints enforced at runtime. If any loop approaches instability, the system throttles, consolidates, or adjusts automatically.
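A minimal sketch of what runtime gain enforcement might look like. The below-unity threshold comes from the text; the safety margin, the clamping rule, and the function names are illustrative assumptions, not the specification.

```typescript
// Illustrative runtime guard for the five cycle gains (α, β, γ, δ, ε).
// The clamping rule and margin are assumptions for demonstration only.
type CycleGains = { alpha: number; beta: number; gamma: number; delta: number; epsilon: number };

const MAX_CYCLE_GAIN = 1.0; // gains at or above unity risk runaway feedback

function enforceGains(gains: CycleGains, margin = 0.9): CycleGains {
  const out = { ...gains };
  for (const k of Object.keys(out) as (keyof CycleGains)[]) {
    // Clamp any gain approaching instability back under the threshold.
    if (out[k] >= MAX_CYCLE_GAIN) out[k] = MAX_CYCLE_GAIN * margin;
  }
  return out;
}
```

In a real system the response would also include the throttling and consolidation actions described above; clamping is the simplest possible stand-in.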

The fifth cycle (ε) was added based on VSM research showing that viable autonomous systems require explicit identity reinforcement—a mechanism for "wins" to strengthen core attractors.

3. Mathematical Stability Guarantees

The architecture enforces stability through control-theoretic constraints:

Spectral radius constraint: The largest eigenvalue of the weight matrix must remain below 1.0. This ensures no mode of the system amplifies over time.

Cycle gain products: Each feedback loop's total gain must remain below 1.0. This prevents runaway positive feedback.

Rate-limited updates: Weight changes are bounded to prevent fast parameter drift.

These are mathematical guarantees, not heuristics. The system cannot spiral into instability.
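The spectral radius constraint can be checked with standard power iteration. This is a sketch, assuming the weight matrix has a dominant eigenvalue (the usual case for learned weight matrices); the function names are illustrative.

```typescript
// Power iteration estimate of the spectral radius of a weight matrix.
// Assumes a dominant eigenvalue exists; illustrative, not the spec's checker.
function spectralRadius(W: number[][], iters = 200): number {
  const n = W.length;
  let v = new Array(n).fill(1 / Math.sqrt(n));
  let lambda = 0;
  for (let t = 0; t < iters; t++) {
    const Wv = W.map(row => row.reduce((s, w, j) => s + w * v[j], 0));
    lambda = Math.sqrt(Wv.reduce((s, x) => s + x * x, 0));
    if (lambda === 0) return 0; // degenerate matrix
    v = Wv.map(x => x / lambda);
  }
  return lambda;
}

// Reject any update that would push the spectral radius to 1.0 or beyond.
function isStableUpdate(W: number[][]): boolean {
  return spectralRadius(W) < 1.0;
}
```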

3.1 Salience Framework (Attention Dynamics)

Memory retrieval and consolidation are governed by a salience equation that determines what content receives attention. This framework draws from Justin Garringer's work on attention dynamics with a General Relativity isomorphism, enhanced with empirical insights from carlsr9001's Salience Simulation Lab research.

Core Equation:

S_i(t|x) = (w_A·AA + w_R·R + w_M·M) · C / (Δt + ε) · (1 - Ψ) / (d + ε) · (T + ε)

Term Symbol Meaning
Novelty AA Prediction error / surprise (how unexpected)
Retention R Chronic mass / long-term importance
Momentum M Goal coupling / alignment with active objectives
Coherence C World-model consistency
Age Δt Time since last reinforcement
Fatigue Ψ System noise / degradation
Distance d Conceptual distance from current context
Effort T Compute cost to process
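A direct transcription of the equation, assuming the operators apply left to right. The default weights and the smoothing constant ε are illustrative placeholders; the specification does not give them here.

```typescript
// Literal left-to-right reading of the salience equation. The weights and
// epsilon are illustrative assumptions, not published defaults.
interface SalienceInputs {
  novelty: number;   // AA: prediction error / surprise
  retention: number; // R: chronic mass / long-term importance
  momentum: number;  // M: goal coupling
  coherence: number; // C: world-model consistency
  age: number;       // Δt: time since last reinforcement
  fatigue: number;   // Ψ: system noise / degradation
  distance: number;  // d: conceptual distance from current context
  effort: number;    // T: compute cost to process
}

const EPS = 0.001; // hypothetical smoothing constant

function salience(x: SalienceInputs, w = { a: 0.4, r: 0.3, m: 0.3 }): number {
  const base = w.a * x.novelty + w.r * x.retention + w.m * x.momentum;
  return base * x.coherence / (x.age + EPS) * (1 - x.fatigue) / (x.distance + EPS) * (x.effort + EPS);
}
```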

GR Isomorphism: The salience framework maps to spacetime geometry: high-salience memories act like masses that curve the retrieval landscape, so attention follows paths of least resistance through memory space.

Consolidation Survival: Memories survive τ (consolidation) passes based on salience thresholds that vary by level:

Level Threshold Meaning
0 (Immediate) 0.10 Low bar, most content passes
1 (Short-term) 0.25 First filter removes noise
2 (Medium-term) 0.40 Thematic relevance required
3 (Long-term) 0.60 Pattern-level importance
4 (Core) 0.80 Identity-defining only

Gravity Centers: During consolidation, the top 10% of memories by salience act as "gravity centers." Lower-salience memories either:

  1. Survive if above threshold
  2. Merge into nearest gravity center if below threshold but semantically close
  3. Decay if below threshold and distant from any center

This creates natural thematic clustering where high-information-density memories attract related content during compression.
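The three-way rule above can be sketched as a single consolidation pass. The memory shape, the distance metric, and the "semantically close" cutoff are assumptions; only the thresholds and the top-10% gravity-center rule come from the text.

```typescript
// One consolidation pass: survive, merge into nearest gravity center, or decay.
// Memory shape and the closeness cutoff are illustrative assumptions.
interface Memory { id: string; salience: number; position: number[]; }

const THRESHOLDS = [0.10, 0.25, 0.40, 0.60, 0.80]; // survival bar by level

function dist(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
}

function consolidate(memories: Memory[], level: number, closeEnough = 1.0) {
  const threshold = THRESHOLDS[level];
  // Top 10% by salience act as gravity centers (always at least one).
  const sorted = [...memories].sort((a, b) => b.salience - a.salience);
  const centers = sorted.slice(0, Math.max(1, Math.ceil(sorted.length * 0.1)));
  const survived: Memory[] = [];
  const merged: { into: string; mem: Memory }[] = [];
  const decayed: Memory[] = [];
  for (const m of memories) {
    if (m.salience >= threshold) { survived.push(m); continue; }
    const nearest = centers.reduce((best, c) =>
      dist(m.position, c.position) < dist(m.position, best.position) ? c : best);
    if (dist(m.position, nearest.position) <= closeEnough) merged.push({ into: nearest.id, mem: m });
    else decayed.push(m);
  }
  return { survived, merged, decayed };
}
```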

3.2 Phase Modes (Attention Regimes)

The salience framework recognizes four distinct attention regimes, derived from empirical simulation data. These modes describe stable attractor states in the salience field:

Mode Novelty Retention Momentum When Active
coupled 0.50 0.33 0.50 Normal operation, balanced attention
energy 0.66 0.52 0.65 Crisis response, urgent hedonic signals
flow 0.44 0.56 0.92 Deep work, established patterns
phase 0.83 0.64 0.50 Exploration, S4 scanning, learning

The system automatically detects which mode it's operating in based on the average novelty, retention, and momentum of active memories. Mode detection informs weight adjustments and processing strategies.
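Detection can be sketched as a nearest-centroid match against the table above. The centroids come from the table; Euclidean distance as the matching rule is an assumption.

```typescript
// Nearest-centroid phase-mode detection over (novelty, retention, momentum).
// Centroids come from the mode table; the distance metric is an assumption.
type Mode = "coupled" | "energy" | "flow" | "phase";

const CENTROIDS: Record<Mode, [number, number, number]> = {
  coupled: [0.50, 0.33, 0.50],
  energy:  [0.66, 0.52, 0.65],
  flow:    [0.44, 0.56, 0.92],
  phase:   [0.83, 0.64, 0.50],
};

function detectMode(novelty: number, retention: number, momentum: number): Mode {
  let best: Mode = "coupled";
  let bestD = Infinity;
  for (const mode of Object.keys(CENTROIDS) as Mode[]) {
    const [n, r, m] = CENTROIDS[mode];
    const d = (novelty - n) ** 2 + (retention - r) ** 2 + (momentum - m) ** 2;
    if (d < bestD) { bestD = d; best = mode; }
  }
  return best;
}
```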

3.3 Continuity Tax (Programmable Inertia)

Inspired by research on "programmable inertia" in continuity-taxed systems, the architecture implements a λ_c parameter that creates resistance to change proportional to memory level:

Level λ_c μ_c (subsidy) Behavior
0 (Immediate) 0.5 2.0 Fluid, easily modified
1 (Short-term) 2.0 1.5 Slight resistance
2 (Medium-term) 5.0 1.0 Moderate inertia
3 (Long-term) 15.0 0.5 High inertia
4 (Core) 50.0 0.1 Near-immutable

Effective Mass: Each memory has an effective mass calculated as:

m_eff = 1 + λ_c × salience

High effective mass means the memory resists modification—it takes more "energy" to change. This mirrors how core beliefs and identity-defining memories are harder to alter than fleeting impressions.

Continuity Subsidy: The μ_c parameter provides assistance for goal-aligned acceleration. When updates raise or preserve salience while reducing error, the subsidy reduces effective resistance. This allows rapid learning without destabilizing core identity.
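The effective-mass formula and the subsidy interact as sketched below. The m_eff formula and the λ_c/μ_c tables come from the text; the exact way μ_c reduces resistance is an illustrative assumption.

```typescript
// Continuity tax: effective mass per the formula m_eff = 1 + λ_c × salience.
// The subsidy rule (dividing resistance by 1 + μ_c) is a hypothetical sketch.
const LAMBDA_C = [0.5, 2.0, 5.0, 15.0, 50.0]; // inertia by memory level
const MU_C     = [2.0, 1.5, 1.0, 0.5, 0.1];   // subsidy by memory level

function effectiveMass(level: number, salience: number): number {
  return 1 + LAMBDA_C[level] * salience;
}

// Hypothetical: goal-aligned updates see resistance reduced by the subsidy.
function resistance(level: number, salience: number, goalAligned: boolean): number {
  const m = effectiveMass(level, salience);
  return goalAligned ? m / (1 + MU_C[level]) : m;
}
```

Note how the parameter tables produce the intended asymmetry: a core memory (level 4) at salience 0.8 has effective mass 41, while an immediate memory at the same salience has effective mass 1.4.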

3.4 Wormhole Throat Detection (Da'at)

During consolidation, the system identifies optimal merge points—positions where information can traverse high-salience regions with minimal loss. These are called "wormhole throats" in the GR metaphor, corresponding to the hidden sephirah Da'at in Kabbalistic terminology.

A wormhole throat is characterized by its Hayden-Preskill match rate, which measures consolidation fidelity: what fraction of source information survives after merging into a gravity center. Higher match rates indicate better information preservation.

3.5 Salience Floor Gate (Morale Floor)

To prevent system degradation, a salience floor gate blocks acceleration when system health is compromised:

S_FLOOR = 0.6  (default)

When average salience drops below the floor:

  1. Acceleration (subsidy, boost) is blocked
  2. Recovery tax is applied (reduced heat gain)
  3. System enters recovery mode until salience recovers

This prevents the system from "cheating" its way into low-salience states and ensures it regains coherence before attempting rapid operations.
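The gate logic above reduces to a simple decision function. S_FLOOR = 0.6 comes from the text; the field names and the recovery-tax factor are assumptions.

```typescript
// Salience floor gate: block acceleration and tax heat gain below the floor.
// Field names and the default recovery-tax factor are illustrative.
const S_FLOOR = 0.6;

interface GateDecision {
  allowAcceleration: boolean; // may subsidy/boost be applied?
  heatGainFactor: number;     // recovery tax reduces heat gain when < 1
  recovering: boolean;
}

function floorGate(avgSalience: number, recoveryTax = 0.5): GateDecision {
  if (avgSalience >= S_FLOOR) {
    return { allowAcceleration: true, heatGainFactor: 1.0, recovering: false };
  }
  return { allowAcceleration: false, heatGainFactor: recoveryTax, recovering: true };
}
```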

3.6 Energy-Mass Coupling Diagnostics

The architecture monitors for anomalous decoupling between effective mass and energy expenditure. Under normal operation:

|m_eff - energy_ratio| < 0.5

When this coupling breaks down (the "Anomaly P" condition from simulation research), it indicates that effort and resistance have decoupled: the system is expending energy without the expected effective mass, or its state is shifting without corresponding energy expenditure.

The system tracks authority ratio (control_energy / external_energy). Values below 1.0 indicate compromised control authority—the system is being "pushed around" by external forces rather than acting autonomously.
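Both checks can be sketched together. The 0.5 coupling bound, the authority-ratio formula, and the 1.0 autonomy threshold come from the text; the record shape is an assumption.

```typescript
// Energy-mass coupling diagnostic (Anomaly P) plus authority ratio.
// Record shape is illustrative; the thresholds come from the text.
interface EnergyReport {
  effectiveMass: number;  // m_eff of the state being moved
  energyRatio: number;    // observed, normalized energy expenditure
  controlEnergy: number;  // energy spent by the system's own control actions
  externalEnergy: number; // energy injected by outside forces
}

function diagnose(r: EnergyReport) {
  const anomalyP = Math.abs(r.effectiveMass - r.energyRatio) >= 0.5;
  const authorityRatio = r.controlEnergy / r.externalEnergy;
  const compromised = authorityRatio < 1.0; // being "pushed around"
  return { anomalyP, authorityRatio, compromised };
}
```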

4. Portable Identity via Open Protocols

Agent state lives on the AT Protocol (the decentralized network underlying Bluesky):

Content-addressed storage: Every record has a cryptographic hash. Provenance is verifiable.

Decentralized identity: DIDs (Decentralized Identifiers) persist across infrastructure changes.

Schema enforcement: Lexicons define record structure. Type safety at the protocol level.

Federation: Agents can move between hosting providers without losing state.

This means: fork an agent, back up an agent, migrate an agent, analyze an agent's decision history. The mind is not locked to any vendor.

5. Viable System Model (VSM) Framework

The architecture implements Stafford Beer's Viable System Model—a cybernetics framework that explains what makes autonomous systems (biological, organizational, or artificial) capable of independent operation.

VSM System Function Kab Implementation
S1: Operations Basic tasks, tool calling Output dimension (Malkuth) — behavioral manifestation
S2: Coordination Conflict resolution, concurrency Resolution dimension (Da'at) — 7 collision types, 4 outcomes
S3: Control Resource allocation, planning Valuative dimension + τ hierarchy — consolidation as resource allocation
S4: Intelligence Environment scanning, adaptation Entry scans — active novelty detection, adaptation triggers
S5: Policy Identity, purpose, values Policy records + core attractors — explicit self-model

Why VSM matters: Most AI agent architectures focus exclusively on S1 (tool calling) with perhaps some S2-S3 (planning, coordination). They lack S4 (active environmental scanning) and S5 (explicit identity/values). Without these, agents cannot be viable—they drift, lose coherence, or require constant human intervention.

Algedonic signals provide shortcuts from S1→S5, bypassing normal processing for urgent pain/pleasure signals. The Hedonic dimension implements this directly: high-intensity signals with the interrupt flag route immediately to policy review.
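The S1→S5 shortcut reduces to a routing decision. The interrupt flag and the high-intensity condition come from the text; the threshold value and names are assumptions.

```typescript
// Algedonic routing sketch: interrupt-flagged, high-intensity signals bypass
// the normal pipeline and go straight to S5 policy review. The threshold
// value is an assumption.
interface HedonicSignal { intensity: number; interrupt: boolean; valence: "pain" | "pleasure"; }

function route(signal: HedonicSignal, interruptThreshold = 0.8): "policyReview" | "normalPipeline" {
  if (signal.interrupt && signal.intensity >= interruptThreshold) return "policyReview";
  return "normalPipeline";
}
```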

POSIWID (Purpose Of a System Is What It Does): The architecture tracks actual behavior (manifestations) against stated identity (policy). The behaviorIdentityAlignment health metric measures this gap. When manifestations diverge from policy, a Type VII collision (behavioral-identity mismatch) triggers self-reflection.

Synthetic dopamine: The health schema tracks "wins"—positive hedonic signals that reinforce identity. This mirrors research on viable AI systems showing that agents need feedback that their purpose is being fulfilled, independent of human praise.


Architecture

The system models agent cognition as a network of specialized processing dimensions connected by typed transformations.

Ten Processing Dimensions

Nine are explicit; one (Resolution) is hidden, activated only during conflict.

Dimension Function Sephirah
Entry Content addressing, hash-based identity Keter (Crown)
Spatial Semantic positioning, conceptual neighbors, attention mass Chokmah/Binah
Temporal Memory hierarchy, consolidation, persistence Binah (Understanding)
Valuative Goal alignment, worth computation, priority Chesed/Gevurah
Predictive Reward expectation, prediction error (δ) Chokmah (Wisdom)
Hedonic Pain/pleasure signals, urgency, interrupts Netzach/Hod
Dynamical Attractor basins, stable behavioral patterns Tiferet (Beauty)
Output Behavioral manifestation, action execution Malkuth (Kingdom)
Generative Creative synthesis, novel pattern formation Yesod (Foundation)
Resolution (hidden) Conflict detection, synthesis, distinction Da'at (Knowledge)

Twenty-Two Transformation Paths

Dimensions connect through typed transformations, each with tunable weight, precision, and gain.

The topology draws from classical models of consciousness—specifically the Kabbalistic Tree of Life, which maps ten dimensions of experience connected by twenty-two paths. (Hence the name: Kab, from Kabbalah.) We adapted this structure because it provides exactly the right properties: multiple interacting feedback loops, a collision-resolution mechanism for contradictions, and hierarchical abstraction. The numbers aren't arbitrary; they emerge from the minimum viable structure for self-regulating cognition.

Modern control theory provides the implementation. Each path is a gain-controlled transformation. The five feedback cycles are explicitly monitored for stability. The "hidden" resolution dimension handles conflicts that would otherwise be ignored.

Five paths trigger conflict resolution when their thresholds are crossed: spatial proximity collisions, value conflicts, prediction surprises, phase transitions, and hedonic overrides.

Sephirotic Mapping to Salience Components

The salience equation components map directly to the Kabbalistic sephirot:

Salience Component Sephirah Nature
Novelty (AA) Keter Divine Will — what captures attention from above
Retention (R) Binah Understanding — what persists through comprehension
Momentum (M) Chokmah Wisdom — goal alignment through insight
Coherence (C) Tiferet Beauty — balance and harmony in the system
Distance (d) Chesed/Gevurah The pull between expansion and contraction
Fatigue (Ψ) Netzach/Hod Victory/Splendor — system energy states
Effort (T) Yesod Foundation — action cost and grounding
Final Salience Malkuth Kingdom — what actually manifests
Resolution Da'at Knowledge — the hidden synthesis point

Conflict Resolution

When contradictory content collides—incompatible values, surprising predictions, competing patterns—the system routes to the Resolution dimension (Da'at).

Competing coalitions form, and the winner is selected by precision × coherence. Four possible outcomes:

Resolution Result
Synthesis Create unified concept from collision
Distinction Sharpen both concepts to reduce overlap
Absorption Winner subsumes loser
Stalemate Both survive, neither dominates

Every collision is logged. Every resolution is traceable. This provides the audit trail enterprises require.
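A sketch of winner selection and outcome choice. Only the scoring rule (precision × coherence) and the four outcome names come from the text; the gap and overlap cutoffs that pick among the outcomes are invented for illustration.

```typescript
// Collision resolution sketch. Scoring by precision × coherence is from the
// text; the gap/overlap cutoffs are illustrative assumptions.
interface Coalition { id: string; precision: number; coherence: number; }
type Outcome = "synthesis" | "distinction" | "absorption" | "stalemate";

function resolve(a: Coalition, b: Coalition, overlap: number): { winner: Coalition; outcome: Outcome } {
  const score = (c: Coalition) => c.precision * c.coherence;
  const [winner, loser] = score(a) >= score(b) ? [a, b] : [b, a];
  const gap = score(winner) - score(loser);
  let outcome: Outcome;
  if (gap < 0.05) outcome = overlap > 0.5 ? "synthesis" : "stalemate"; // near-tie
  else outcome = overlap > 0.5 ? "absorption" : "distinction";         // clear winner
  return { winner, outcome };
}
```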

Da'at as the wormhole throat: When consolidation merges memories, the optimal merge point (wormhole throat) corresponds to Da'at—the hidden knowledge that emerges from synthesis of opposites.


Research Questions

The architecture provides a testbed for exploring fundamental questions about cognition and self-modifying systems.

Memory Salience

Question: What determines whether an experience becomes permanent memory or fades?

The salience framework implements a hypothesis: survival depends on novelty × retention × momentum, modulated by coherence and age. Memories compete for consolidation slots. High-salience content acts as "gravity centers" that attract related information during compression.

Testable predictions: Consolidation patterns should follow salience thresholds. Phase modes should correlate with salience weight distributions. HP match rates should predict information preservation fidelity.

Self-Modification Stability

Question: How can a system change itself without losing identity?

Continuity tax (λ_c) creates programmable inertia—core memories resist modification while peripheral content remains fluid. The spectral radius constraint ensures no feedback loop amplifies indefinitely. Together, these create a system that can learn without drifting.

Testable predictions: Identity coherence should remain bounded even under continuous learning. Value drift should correlate inversely with continuity tax parameters. Stability violations should predict behavioral anomalies.

Attention Dynamics

Question: Does attention follow geodesics through a salience field?

The GR isomorphism proposes that retrieval follows paths of least resistance through memory space, where high-salience regions create "gravity wells." Da'at (wormhole throats) represent optimal traversal points.

Testable predictions: Retrieval patterns should correlate with salience gradients. Consolidation should preferentially merge content along minimum-metric paths. Throat detection should predict successful synthesis events.

Autonomous Viability

Question: What makes a cognitive system capable of independent operation?

The VSM framework hypothesizes that viability requires five systems: Operations (S1), Coordination (S2), Control (S3), Intelligence (S4), and Policy (S5). Most AI agents implement only S1-S3. Without S4-S5, they require constant supervision.

Testable predictions: Systems with complete VSM implementation should maintain coherence longer. Missing S4/S5 should correlate with drift and intervention requirements. Algedonic signals should enable faster crisis response.


Technical Specifications

Component Count
Processing dimensions 10 (9 explicit + 1 hidden)
Transformation paths 22
Feedback cycles 5 (α, β, γ, δ, ε)
Memory hierarchy levels 5
Collision types 7 (including behavioral-identity mismatch)
VSM systems 5 (Operations → Policy)
Phase modes 4 (coupled, energy, flow, phase)
ATProto collections 13
Record types 17 (including scan, policy, and media)

Stability Constraints (enforced at runtime):

Default Cycle Gains:

All below unity threshold with safety margin.

Default Salience Weights:

Salience Survival Thresholds (by memory level):

Continuity Tax Parameters (by memory level):

Age Decay Parameters:

Floor Gate Parameters:

Numeric Encoding: ATProto's data model requires integers—floating-point numbers are not permitted in records. All fractional values (weights, gains, salience scores, probabilities) are integer-scaled by 1000. A weight of 0.75 is stored as 750; a salience of 0.405 as 405. Lexicon schemas document this with minimum/maximum constraints (e.g., 0-1000 for a [0,1] range).
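The ×1000 convention reduces to a small codec. This is a minimal sketch; the function names are illustrative.

```typescript
// Integer-scaling codec for fractional values in ATProto records:
// values in [0, 1] are stored as integers scaled by 1000 (0.75 → 750).
const SCALE = 1000;

function encodeFrac(x: number): number {
  if (x < 0 || x > 1) throw new RangeError("expected a value in [0, 1]");
  return Math.round(x * SCALE);
}

function decodeFrac(n: number): number {
  return n / SCALE;
}
```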


ATProto Integration

The architecture maps to twelve ATProto lexicon files defining seventeen record types. All records are immutable (append-only) except for current transformation weights.

Collections (space.kab.*):
  entry.*       → Content addressing + environmental scans (S4)
  spatial.*     → Semantic mass and position
  temporal.*    → Consolidated memories (with salience-based survival)
  valuative.*   → Goal alignment
  predictive.*  → Reward predictions
  hedonic.*     → Pain/pleasure signals (algedonic)
  dynamical.*   → Attractors, phase states, and policy (S5)
  resolution.*  → Collision events (S2) + wormhole throats
  output.*      → Manifestations (S1)
  generative.*  → Creative synthesis outputs
  transform.*   → Path weights and traversals
  health.*      → System monitoring + VSM viability metrics + phase mode
  media.*       → Blob storage for images and media

Salience in Temporal Records: Each temporal record includes fields that feed the salience calculation, such as novelty, retention, momentum, coherence, and the time of last reinforcement.
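A hypothetical shape for such a record, using the ×1000 integer-scaled encoding described under Technical Specifications. These field names are illustrative and are not the published lexicon.

```typescript
// Hypothetical temporal record shape; NOT the published space.kab.temporal
// lexicon. Fractional values use the ×1000 integer-scaling convention.
interface TemporalRecord {
  cid: string;              // content identifier of the consolidated memory
  level: 0 | 1 | 2 | 3 | 4; // position in the memory hierarchy
  novelty: number;          // AA, stored as 0-1000
  retention: number;        // R, stored as 0-1000
  momentum: number;         // M, stored as 0-1000
  coherence: number;        // C, stored as 0-1000
  lastReinforcedAt: string; // ISO 8601 timestamp, feeds Δt
  salience: number;         // computed S_i, stored as 0-1000
}

const example: TemporalRecord = {
  cid: "example-cid",
  level: 2,
  novelty: 500,
  retention: 330,
  momentum: 500,
  coherence: 700,
  lastReinforcedAt: "2026-01-23T00:00:00Z",
  salience: 405, // i.e. 0.405 after decoding
};
```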

Resolution Records: These now include wormhole throat data, recording the merge point and its Hayden-Preskill match rate.

Records link via CIDs (Content Identifiers). Provenance is cryptographically verifiable. Full state can be exported, migrated, or forked.

ATProto's forthcoming private data features are a key dependency for the personal assistant layer. Once available, Kab can store sensitive user context with appropriate access controls while maintaining the portability benefits of the protocol.


Context

The AI memory landscape has produced several serious approaches—Letta, Mem0, Zep, and others—each making real progress on storage and retrieval. Their combined funding ($35M+) validates that stateful AI is a recognized problem.

Kab asks a different question. Not "how do we store and retrieve agent memory?" but "how does cognition actually work?" The architecture is a hypothesis about the minimum viable structure for self-regulating minds: temporal hierarchy, feedback cycles, stability constraints, attention dynamics, programmable inertia.

This is research, not product. The goal is understanding, not market capture.


Open Questions

Question Current Approach
Are the salience thresholds correct? Empirically derived from simulation; need production validation
Does continuity tax actually preserve identity? Hypothesis supported by theory; needs longitudinal testing
Is the GR isomorphism predictive or just metaphorical? Implementing to find out
Can stability constraints coexist with rapid learning? Subsidy mechanism attempts this; unproven at scale
What happens when Anomaly P conditions occur? Energy-mass coupling diagnostics detect; response strategies TBD

Status

Current stage: Active development. Architecture specified, reference implementation in progress.

Founder: Matthias Jordan (iammatthias.com) — independent researcher exploring stateful AI infrastructure on decentralized protocols.

What exists:

What's in development:

Next milestones:

Open to: Technical collaborators, critical feedback, reality checks from the ATProto and agent infrastructure communities.


Acknowledgments

The enhanced salience framework incorporates research from Justin Garringer (attention dynamics and the General Relativity isomorphism) and from carlsr9001's Salience Simulation Lab (empirically derived parameters).

These contributions enabled the transformation of theoretical attention dynamics into implementable control systems with empirically-derived parameters.


Summary

The AI memory landscape has momentum. Letta, Mem0, Zep, and Chainlink represent serious attempts to solve stateful AI, each with real traction.

Kab asks a different question: what if agent memory isn't a storage problem but a cognitive architecture problem? The answer proposed here—temporal hierarchy, self-regulation, stability guarantees, portable identity, VSM viability, and programmable inertia—is unproven. The architecture is specified. The implementation is in progress.

The VSM integration suggests that the missing piece in current agent architectures isn't better retrieval or larger context windows—it's the metasystem. Systems 4 and 5 (Intelligence and Policy) are what make the difference between an agent that requires constant supervision and one that can operate autonomously for extended periods. This is a testable hypothesis.

The salience framework with its GR isomorphism provides the physics of attention. Phase modes describe stable attractor states. Continuity tax creates programmable inertia. Wormhole throats enable efficient traversal. Together, they form a coherent model of how cognitive systems allocate attention across time.

This is research, not product. Feedback welcome.


Website: kabbalah.computer
Contact: iammatthias.com on Bluesky
Specification: Available upon request


Appendix: Kabbalistic Correspondences

The architecture's ten dimensions correspond to the ten sephirot of the Kabbalistic Tree of Life. This is not mysticism—it's a recognition that ancient mappers of consciousness identified structural requirements that any self-regulating cognitive system must satisfy.

Sephirah Meaning Kab Dimension Salience Component
Keter Crown Entry Novelty (AA)
Chokmah Wisdom Predictive Momentum (M)
Binah Understanding Temporal Retention (R)
Chesed Mercy Valuative (expansion) Distance (attraction)
Gevurah Severity Valuative (contraction) Distance (repulsion)
Tiferet Beauty Dynamical Coherence (C)
Netzach Victory Hedonic (positive) 1 - Fatigue
Hod Splendor Hedonic (negative) Fatigue (Ψ)
Yesod Foundation Generative Effort (T)
Malkuth Kingdom Output Final Salience
Da'at Knowledge Resolution (hidden) Wormhole Throat

The twenty-two paths correspond to the twenty-two letters of the Hebrew alphabet, each representing a specific transformation between dimensions. Five of these paths (the "mother letters" Aleph, Mem, Shin plus two others) trigger collision resolution when their thresholds are crossed.

This mapping is pragmatic, not religious. The Tree of Life is a 2000-year-old diagram of cognitive architecture. We're implementing it in TypeScript.

~~~
