MANIFESTO
Organizations have more data than ever. Yet they still lack systems to think better.
By Miguel Ángel Sánchez Ciria · May 2026 · ~5 min read
§ 1. The paradox of abundance
There is more data available than at any moment in human history. More tools to process it, more infrastructure to store it, more algorithms to find patterns in it. And yet, inside most organizations, the most expensive questions remain the same: Why did we decide this? What were we considering when we decided? Who signed off? What alternatives were discarded, and why?
The answers exist. They are in last quarter's Slack threads, in PDF decks no one ever reopened, in wikis that were maintained for three months, in the heads of people who have since moved to another company. The accumulated cognition of the organization lives scattered, with no common substrate to preserve it.
That is the paradox of abundance: having more data than ever and, at the same time, having no system to think better.
§ 2. Why current approaches don't solve this
The industry has answered with several categories of tool. Each one solves something. None of them solves this.
General-purpose chatbots and assistants (ChatGPT, Claude, Gemini) offer powerful conversational intelligence, but they are amnesiac by design. Every conversation starts from scratch. What you learned three weeks ago doesn't inform what you answer today. Memory, when it exists, is optional, individual, and lives with the model provider.
Code copilots (Copilot, Cursor) autocomplete in the editor. They are useful for generating code but do not govern the decision cycle around that code: what was decided, who signed off, and why. Reconstructing a decision three years later is impossible.
RAG systems solve "search your documents and generate an answer based on them". Useful, but passive: they depend on someone having written the right documents in the first place. Cognition that never gets written down never gets retrieved.
Agent frameworks (LangGraph, CrewAI, AutoGen) optimize multi-agent autonomy. The question they don't answer is responsibility: who signs off on each action, how the decision can be reconstructed, how a wrong state can be reverted.
Wikis and KMS tools (Confluence, Notion, SharePoint) document passively. The organization has to dedicate hours to maintaining something that almost no one ever consults.
Each of these categories is valuable in its domain. None of them preserves the accumulated cognition of an organization with traceability and governance.
§ 3. The third path
The two options the market assumes are: ungoverned autonomous AI, or unamplified humans. The third path is governed, persistent, accountable intelligence.
This means four concrete properties:
— Capture context without asking anyone to write extra documentation. Cognition is preserved inside the natural cycle of AI-assisted work.
— Structure thinking instead of leaving it in threads, emails, and people's heads. The way something was decided becomes explicit.
— Govern decisions assisted by AI through a formal cycle: the model's proposal, the human review, the signed accountability. Natively auditable.
— Accumulate intelligence over time, surviving human turnover and the replacement of the underlying language model.
§ 4. INTENTIA+ — what it is, in honest terms
INTENTIA+ is the infrastructure that implements that third path. It is not a single product, but the composition of three pieces:
— AICA (AI Cognitive Agent) is the system's governed cognitive agent. It receives human intent, reasons over the live context of the organization, proposes actions, recognizes its limits, requests human decisions at critical points, and maintains continuity. AICA is not an autonomous LLM: it is an agent that operates under contract with the organization.
— CPR (Cognitive Persistent Repository) is the structural, persistent memory of the system. Here lives the cognition that survives time: decisions, constraints, discarded alternatives, justifications, lineage. The useful intelligence of INTENTIA+ is not in the language model (which gets replaced); it is in what the CPR has accumulated about your organization.
— AI-REPO+ is the governance framework that makes the previous two possible. It defines the HHAC cycle (Human-Human-AI Cycle), a three-step protocol that signs every meaningful interaction with verifiable provenance.
Built this way, what the organization gains is independence from the AI vendor, operational traceability, reversibility of decisions, and persistence of cognition in the face of human turnover.
§ 5. What INTENTIA+ does NOT promise
Honesty is part of the architecture. INTENTIA+ does not replace the human team; it amplifies it. It does not promise "total control" or "safe AI"; it reduces the risk of adopting AI by making governance explicit. And it is not an academic promise: it runs in internal production with versioned sprints, automated tests, and reproducible snapshots.
And it does not solve the cultural problem on its own. If the organization does not want to preserve cognition, no system will preserve it on its behalf.
§ 6. An invitation
INTENTIA+ is in selective rollout. If your organization recognizes the problem described in this text, and you believe the third path is worth exploring, write to us.
— Miguel Ángel Sánchez Ciria
May 2026