AI, Constraint, and Application

I've been blogging recently about the principles and philosophy behind our agentic AI system, MOTHER. The whole thing started from a handful of design constraints that MOTHER had to satisfy to be an organic piece of our wider architecture: purely serverless, with no long-running processes; event-driven and reactive; immutable and event-sourced, so the audit trail comes for free in an industry where compliance and correctness matter. These constraints arose naturally from our existing system and our domain, and they posed real design challenges in a landscape where agentic systems are built around imperative loops and complete freedom (albeit with permissions-based gates).
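To make the "immutable and event-sourced" constraint concrete, here is a minimal sketch, in Rust rather than F# and with invented event names (none of this is MOTHER's actual schema): nothing mutates state in place; every action is appended to a log, and current state is a pure fold over that log, which doubles as the audit trail.

```rust
// Hypothetical event-sourcing sketch. The log is the source of truth;
// replaying it from the beginning reconstructs state at any point in time.
#[derive(Debug, Clone)]
enum Event {
    TicketOpened { id: u64 },
    TicketAssigned { id: u64, agent: String },
    TicketClosed { id: u64 },
}

#[derive(Debug, Default)]
struct State {
    open: Vec<u64>,
}

// A pure transition function: old state + one event -> new state.
fn apply(mut state: State, event: &Event) -> State {
    match event {
        Event::TicketOpened { id } => state.open.push(*id),
        Event::TicketAssigned { .. } => {} // assignment doesn't affect open/closed
        Event::TicketClosed { id } => state.open.retain(|t| t != id),
    }
    state
}

fn main() {
    let log = vec![
        Event::TicketOpened { id: 1 },
        Event::TicketOpened { id: 2 },
        Event::TicketAssigned { id: 1, agent: "MOTHER".into() },
        Event::TicketClosed { id: 1 },
    ];
    // Current state is just a fold over the append-only log.
    let state = log.iter().fold(State::default(), apply);
    println!("{:?}", state.open); // prints [2]
}
```

Because the log is append-only, "what did the system do, and why" is always answerable by replay, which is exactly what a compliance-heavy domain needs.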

After MOTHER shipped, it became a reliable staple of our system and continues to be extended. I've kept reflecting on it along the way, and one thing that has stood out is the idea of correct, constrained AI. This was not quite an intentional design factor, but the execution context (F#) made it a natural pillar of MOTHER. The system's only way of acting on a decision is to dispatch a mathematically sound data type through a strict type system, which makes MOTHER structurally constrained even when an LLM is making the decisions. The type system doesn't guarantee every decision is the right one, but it guarantees that catastrophically wrong decisions cannot be represented at all. Impossible states cannot happen. Decisions the system does not support cannot happen, even if the LLM goes terribly awry. This is fascinating to me.
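A minimal sketch of the structural constraint, in Rust, whose enums play the same role as F# discriminated unions (the `Decision` variants here are invented for illustration, not MOTHER's real vocabulary): the closed type is the system's entire action surface, and dispatch is an exhaustive match over it.

```rust
// A closed vocabulary of actions. An LLM's "decision" only becomes an
// action by being expressed as one of these variants; there is no other
// path into dispatch, so unsupported actions are unrepresentable.
#[derive(Debug, PartialEq)]
enum Decision {
    ApproveClaim { claim_id: u64 },
    EscalateToHuman { claim_id: u64, reason: String },
    RequestDocument { claim_id: u64, document: String },
}

fn dispatch(decision: Decision) -> String {
    // Exhaustive match: adding a variant without handling it here is a
    // compile error, and nothing outside the enum can reach this point.
    match decision {
        Decision::ApproveClaim { claim_id } => format!("approved:{claim_id}"),
        Decision::EscalateToHuman { claim_id, reason } => {
            format!("escalated:{claim_id}:{reason}")
        }
        Decision::RequestDocument { claim_id, document } => {
            format!("requested:{claim_id}:{document}")
        }
    }
}

fn main() {
    let d = Decision::EscalateToHuman {
        claim_id: 7,
        reason: "ambiguous policy".into(),
    };
    println!("{}", dispatch(d)); // prints escalated:7:ambiguous policy
}
```

The guarantee is structural, not behavioral: the LLM can still pick a suboptimal variant, but it cannot pick a nonexistent one.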

The current hype cycle focuses on the unbounded things AI can do. It can absorb a pile of white-collar tasks (checking emails, finding tickets, scheduling meetings, transcribing meetings) and save a lot of time. That is not insignificant, though it mostly benefits people with desk jobs. But when we think about the true impact the LLM era of AI can make, it's robotics, medical devices, the critical software that drives cancer clinics and hospitals. The big-impact applications of AI don't have room to mess up. They don't have room for unbounded freedom with an occasional request for permissions. They have to be correct: mathematically correct in the "pure" mathematics sense, not just the arithmetic one (though that matters too). This is the world in which MOTHER thrives, and it's a world society has barely begun to recognize.

The path to it runs through languages where types are almost data, code is almost data, and data can be code. That is the kind of substrate a functional, ML-descended type system provides natively, and the kind of substrate imperative, dynamically-typed AI tooling has to approximate at the framework level, badly.
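One way to picture that substrate, again as a hypothetical Rust sketch with an invented wire format: the LLM's raw output is only data until it parses into the closed command type, and anything outside the type's vocabulary is rejected at the boundary before it can become code that acts.

```rust
// The boundary where data may become code. `parse` is the only door,
// and it only opens onto the closed `Command` type.
#[derive(Debug, PartialEq)]
enum Command {
    Approve(u64),
    Escalate(u64),
}

fn parse(raw: &str) -> Result<Command, String> {
    // Assumed wire format for this sketch: "verb:id".
    let (verb, id) = raw.split_once(':').ok_or("malformed")?;
    let id: u64 = id.trim().parse().map_err(|_| "bad id".to_string())?;
    match verb.trim() {
        "approve" => Ok(Command::Approve(id)),
        "escalate" => Ok(Command::Escalate(id)),
        // A hallucinated action fails to parse, so it can never be dispatched.
        other => Err(format!("unknown verb: {other}")),
    }
}

fn main() {
    assert_eq!(parse("approve:42"), Ok(Command::Approve(42)));
    assert!(parse("delete_all_records:1").is_err()); // unrepresentable, so undispatchable
    println!("ok");
}
```

In an ML-descended language this parse-don't-validate boundary falls out of the type system; framework-level guardrails bolted onto dynamic tooling can only approximate it.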