Nidus — externalised reasoning for AI-assisted engineering

The reasoning-provenance problem

When an autonomous agent works on an engineering task, the output is a code change, a document, a test result — the reasoning that produced it is usually lost. A week later, when the change needs review, debugging, or regulatory defence, the reasoning is gone. The organisation is left with artefacts but no audit trail of why.

Training LLMs to follow engineering discipline through instruction-tuning and RLHF addresses the symptom; it does not solve the provenance problem. Nidus takes the opposite path: externalise the reasoning substrate so that it is inspectable, verifiable, and re-executable by construction.

Nidus: the externalised-reasoning kernel

Nidus is a small, language-agnostic kernel in which every agent reasoning step is a first-class, versioned artefact. A decision is not a hidden chain-of-thought — it is a traceable step with inputs, outputs, and a cryptographic link to the step before it. Replaying a Nidus trace produces the same outputs. Modifying the trace produces a verifiable diff. Auditing a trace is a text-editor operation.
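The trace format itself is not specified here, but the properties above — steps with inputs and outputs, a cryptographic link to the previous step, tamper-evident verification — can be sketched as a minimal hash-chained log. All names below (`Trace`, `step_hash`, the step fields) are illustrative assumptions, not the Nidus API:

```python
import hashlib
import json

def step_hash(step: dict, prev_hash: str) -> str:
    """Hash a step together with its predecessor's hash, forming a chain."""
    payload = json.dumps({"step": step, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class Trace:
    """A minimal append-only reasoning trace: each step records inputs,
    outputs, and a cryptographic link to the step before it."""
    GENESIS = "0" * 64

    def __init__(self):
        self.steps = []  # list of (step, hash) pairs

    def append(self, inputs: dict, outputs: dict, note: str = "") -> None:
        prev = self.steps[-1][1] if self.steps else self.GENESIS
        step = {"inputs": inputs, "outputs": outputs, "note": note}
        self.steps.append((step, step_hash(step, prev)))

    def verify(self) -> bool:
        """Recompute every link; any modification breaks the chain."""
        prev = self.GENESIS
        for step, h in self.steps:
            if step_hash(step, prev) != h:
                return False
            prev = h
        return True

trace = Trace()
trace.append({"spec": "v1"}, {"plan": ["parse", "test"]}, "decompose task")
trace.append({"plan": ["parse", "test"]}, {"patch": "..."}, "emit change")
assert trace.verify()

# Editing a recorded step after the fact is detectable on replay.
trace.steps[0][0]["outputs"]["plan"] = ["skip tests"]
assert not trace.verify()
```

Because each hash covers the previous hash, auditing reduces to re-walking the chain from the first step; a diff against a modified trace pinpoints the first step whose link no longer verifies.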

For AI-assisted engineering teams — and for regulated deployments that must be able to defend every machine-made decision — this moves reasoning from an opaque property of the model to an explicit property of the system.

Preprint

Nidus: Externalised Reasoning for AI-Assisted Engineering. arXiv:2604.05080.

Discuss a Nidus deployment →