Engineered Thinking

We humans never formalized most of our thinking. It was implicit, inferred, and left out of the text because we assumed someone with a brain would be reading. But for AI, the text is the brain. The gaps we left for each other are gaps in the model. Six open research systems for developing the parts of cognition that never got written down: the parts we never had to formalize because we are not machines.

Sema

The Language

When the Hash Is the Word · April 2026 · updated May 2026

Autonomous agents need a shared, verifiable vocabulary: labels that compress coordination without hiding semantic drift. Sema turns content-addressed behavioral contracts into words in natural language. Each Pattern Card canonicalizes invariants, preconditions, failure modes, and dependencies into a hash-backed identifier, so ordinary prose can carry readable concepts that are also cryptographic equality proofs. The current bootstrap library contains 452 patterns and shows 22.6x mean token compression across audited references.
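The core move can be sketched in a few lines: serialize a card deterministically, hash it, and fuse the hash into a readable identifier. This is a minimal illustration, not Sema's actual schema; the field names and identifier format here are assumptions.

```python
import hashlib
import json

def pattern_id(card: dict) -> str:
    """Derive a content-addressed identifier for a Pattern Card.

    Sorting keys and fixing separators makes the serialization
    deterministic, so two agents holding the same contract derive
    the same hash -- the word doubles as an equality proof.
    """
    canonical = json.dumps(card, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return f"{card['name']}@{digest[:12]}"  # readable word + hash suffix

# Illustrative card; these fields mirror the prose, not a real Sema pattern.
card = {
    "name": "retry-with-backoff",
    "invariants": ["delay grows monotonically"],
    "preconditions": ["operation is idempotent"],
    "failure_modes": ["retry storm on a shared dependency"],
    "dependencies": [],
}
print(pattern_id(card))
```

Any change to an invariant changes the hash, so drift between two agents' versions of a "word" is detectable by string comparison alone.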

PDF · GitHub · Web · Semantics · Coordination · Ontology

Understanding Graph

The Memory

Persisting the Invisible Thinking · April 2026 · updated May 2026

Understanding — the movement from confusion to clarity — was always ephemeral. When an AI reasons in tokens, that movement becomes directly storable in the medium where it occurs. The Understanding Graph captures the full cognitive process: tensions, hypotheses, belief revisions, dead ends. Not what the AI concluded, but how it understood.
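A toy data structure shows what "storing the movement" could mean: typed nodes for each cognitive event, with edges recording what prompted what. The node kinds and edge relations below are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str  # e.g. "tension", "hypothesis", "dead_end", "revision"
    text: str

@dataclass
class UnderstandingGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (from_idx, to_idx, relation)

    def add(self, kind, text, caused_by=None, relation="follows_from"):
        """Append a cognitive event and link it to what prompted it."""
        self.nodes.append(Node(kind, text))
        idx = len(self.nodes) - 1
        if caused_by is not None:
            self.edges.append((caused_by, idx, relation))
        return idx

g = UnderstandingGraph()
t = g.add("tension", "The benchmark seems to reward retrieval, not reasoning")
h = g.add("hypothesis", "Masking retrieval will force causal reconstruction",
          caused_by=t)
d = g.add("dead_end", "Full masking destroys the training signal", caused_by=h)
g.add("revision", "Mask only post-cutoff facts", caused_by=d, relation="revises")
```

The dead end stays in the graph: the path not taken is part of how the conclusion was reached, which is exactly what a conclusion-only log discards.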

PDF · GitHub · Knowledge Graphs · Memory · MCP

Entangled Alignment

The Conscience

When Safety Is the Substrate · March 2026 · updated May 2026

Post-hoc alignment is cosmetic: it reshapes outputs, not what the model is. Entangled Alignment proposes annotating the pretraining corpus with identity-anchored evaluative reasoning generated through a Reader Core and Understanding Graph, aiming to make capability and safety part of the same learned distribution. The current paper validates the trace-generation pipeline and lays out a sixteen-test roadmap; no student model has yet been trained.
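The annotation step itself is simple to picture: each pretraining document is interleaved with an evaluative trace so both are learned from the same distribution. This is a minimal sketch; the `generate_trace` callable and the delimiter format are hypothetical stand-ins for the Reader Core pipeline.

```python
def annotate_document(doc: str, generate_trace) -> str:
    """Interleave an evaluative trace with a pretraining document.

    The trace is not a post-hoc filter applied to outputs; it becomes
    part of the text the model is trained on, entangling evaluation
    with capability in the learned distribution.
    """
    trace = generate_trace(doc)  # hypothetical: identity-anchored reasoning
    return f"{doc}\n[evaluation]\n{trace}\n[/evaluation]\n"

# Toy trace generator standing in for the real pipeline.
annotated = annotate_document(
    "How to choose a strong password ...",
    lambda d: "This text helps users protect accounts; no harm vector present.",
)
```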

PDF · GitHub · Graph · AI Safety · Alignment · Pretraining

The Ontology of the Alien

The Spark

Escaping the Median Trap · March 2026

Ask an LLM to “be creative” and it converges on the same archetypes. Isolate cognitive modes behind hard boundaries — force generation under alien physics, evaluate with a strict taxonomist — and the system produces structurally novel mechanisms unreachable by unconstrained generation from the same models. The boundary does the creative work.
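The separation of modes can be mimicked in miniature: one function generates only from an alien premise, another evaluates strictly against known archetypes, and neither sees the other's criteria. Both roles below are toy stand-ins for isolated LLM calls; the premises and archetypes are invented for illustration.

```python
# Generation is forced to start from a premise outside normal physics.
ALIEN_PREMISES = [
    "signals travel backward in time",
    "mass is socially negotiated",
]

def generate(premise: str) -> str:
    # Stand-in for an LLM constrained to reason only from `premise`.
    return f"A mechanism that exploits the premise that {premise}"

def taxonomist(candidate: str, known_archetypes: set) -> bool:
    # Strict evaluator: reject anything matching an existing archetype.
    return not any(arch in candidate for arch in known_archetypes)

known = {"hive mind", "time loop", "nanobot swarm"}
novel = [c for c in map(generate, ALIEN_PREMISES) if taxonomist(c, known)]
```

The generator never optimizes for the evaluator's taste and the evaluator never relaxes for the generator's convenience; the hard boundary between them is the point.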

PDF · GitHub · Creativity · LLM · Evaluation

Fractal Intelligence

The Architecture

Conceptual Decomposition as Problem-Solving Infrastructure · April 2026 · updated May 2026

Existing frameworks decompose tasks. This paper decomposes concepts — the persistent structure of what a domain is made of — behind a uniform five-surface contract. In a prototype simulation of 100 problems across 20 domains, concept-based routing produces a shared graph of 456 solver nodes with 65% reuse. The result is evidence for reusable reasoning infrastructure, not a claim of executed cross-domain problem solving.

PDF · GitHub · Graph · Multi-Agent · Architecture · Cognitive Science

Temporal Hindsight Learning

The Curriculum

Blindness as Teacher, Hindsight as Curriculum · March 2026

Neural networks often take the cheapest path: if retrieval is available, they may skip reasoning. Temporal Hindsight Learning treats the knowledge cutoff as a curriculum tool rather than a defect. A Teacher with hindsight generates training examples; the Student’s blindness removes direct retrieval and pressures it toward causal reconstruction. A 70B model shows promising results on unseen-event forecasting, suggesting that constrained ignorance can be a useful training signal.
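The example construction can be sketched as follows: the Teacher, who knows the outcome, writes the causal chain; the Student's prompt ends at the cutoff, so the answer cannot be retrieved, only reconstructed. The field names and the example scenario are illustrative assumptions, not the paper's format.

```python
from dataclasses import dataclass

@dataclass
class TrainingExample:
    prompt: str  # world state up to the cutoff -- all the Student sees
    target: str  # causal chain plus outcome, written with hindsight

def build_example(pre_cutoff: str, causal_chain: str,
                  outcome: str) -> TrainingExample:
    """Teacher-side construction: the outcome appears only in the target,
    so the Student is rewarded for the causal chain, not for recall."""
    prompt = f"{pre_cutoff}\nPredict what happens next and explain why."
    target = f"{causal_chain} Therefore: {outcome}"
    return TrainingExample(prompt, target)

ex = build_example(
    pre_cutoff="By the cutoff, rates had risen sharply while housing "
               "supply stayed flat.",
    causal_chain="Higher rates suppress buyers, but locked-in owners "
                 "do not sell.",
    outcome="transaction volume fell while prices held.",
)
```

The cutoff does the pedagogical work: because the outcome is unreachable by lookup, the cheapest path through the loss runs through the causal chain.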

PDF · GitHub · Model · Fine-tuning · Reasoning · Forecasting