Agentic AI, Built on Functional and Reactive Principles

Our team has been building reactive systems grounded in functional principles for years now. A system that is event-sourced, event-driven, non-blocking, resilient, and elastic can feel esoteric at first - daunting to imagine and harder still to commit to. But front-loading that complexity at the foundation makes the long-term benefits obvious: systems that are easy to extend, bounded by clear domain lines, and equipped with perhaps the best auditability imaginable in an industry where compliance is important.

Our entire platform runs on these principles. Functional programming is in our DNA - immutability, referential transparency, and pure functions guide not only how we design applications, but also how we shape the system architecture.

Then came the Wild West days of LLMs. The prevailing approaches pushed toward long-running processes: persistent servers maintaining WebSocket connections, stateful orchestrators holding context in memory, and blocking execution models that tie up resources waiting for AI responses. In an industry obsessed with "agents" as persistent entities, the temptation was to abandon our serverless, event-driven principles just to integrate AI capabilities into our software.

We chose principles over convenience. MOTHER was born from that choice.

Functional Programming, Meet AI Agents

MOTHER stands for Model Orchestrated Transparent Hermetic Event Reactors. Yes, it’s a backronym. No, we’re not sorry.

The name also pokes fun at the AI industry’s obsession with sci-fi - whether it’s Sam Altman’s fixation on Her, or MCP’s nod (intentional or not) to Tron’s Master Control Program. In that spirit, MOTHER references MU-TH-UR, the Nostromo's onboard computer from Alien.

But the name isn’t just a joke. Every word encodes a design decision - one that keeps faith with reactive principles and functional programming, while still enabling autonomous, agent-like AI behavior.

Before breaking down the name, let’s clarify the mission: MOTHER is about making agentic AI workflows themselves (as much as is possible) subject to the same rules of immutability, event sourcing, and unidirectional flow that govern the rest of our architecture. It’s a system where agentic AI sits at the architectural core while still honoring the paradigms that shape the rest of our platform.

What constitutes MOTHER:

  • MO – Model Orchestrated
    The central arbiter is an LLM-driven decision-maker that reacts to completed artifacts and dispatches typed messages. The type system defines the contract: new capabilities come from introducing new event types. In a truly reactive fashion, there is no orchestration logic that invokes tools directly; instead, a message is dispatched with a serialized payload generated by the LLM. The shape of the data determines the tool that reacts.
  • T – Transparent
    Every decision is logged with reasoning and metadata. Autonomy comes with a complete event trail, essential in industries where compliance and trust matter.
  • H – Hermetic
    Reactors are isolated, serverless units of business logic. Each one acts as a checkpoint for data hygiene, ensuring inputs are reviewed and constrained before they ever reach the pipeline.
  • ER – Event Reactors
    The system has no mutable state: all state is derived from immutable events, which form the only source of truth. State is always a projection at a point in time: fold the events to see a session’s status, replay them to see how a decision was made.

    Reactors are modeled after the idea of pure functions: Event -> Artifact (a minimal sketch of this shape follows the list). Outputs are predictable to the extent possible with probabilistic models, and any variability is recorded in the event stream. Agentic behavior emerges from the interplay of events, reactors, and arbiter decisions. As the name suggests, reactors respond to messages.
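
To make the Event -> Artifact shape concrete, here is a minimal sketch; the type names are illustrative rather than our actual definitions:

// Illustrative only: the general shape of a reactor, not MOTHER's actual types.
type Artifact =
    { ProducedBy : string                 // which reactor produced the artifact
      Payload    : string                 // the serialized result of the reaction
      Metadata   : Map<string, string> }  // e.g. model parameters, timestamps, reasoning

// A reactor reacts to the events it understands and ignores everything else.
type Reactor<'Event> = 'Event -> Artifact option

Returning an option keeps "ignore what you don't handle" explicit in the signature rather than hidden in control flow.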

The Power of Algebraic Data Types as Contracts

Central to our approach is treating the LLM as a function that produces typed events. F#’s discriminated unions act as the entire contract between the LLM and the reactive system.

Consider what happens in MOTHER when the arbiter needs to dispatch work:

type SystemEvent = 
    | ValidateCompliance of ComplianceRequest
    | GenerateReport of ReportRequest  
    | AuditTransaction of AuditRequest
    | ProcessingComplete of CompletionDetails

The arbiter doesn’t need to know how compliance validation works or where the ComplianceReactor lives. It only needs to produce a valid ValidateCompliance event with the required fields.
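
To make "the required fields" concrete, a request payload can be modeled as a plain record; the fields below are hypothetical, not our actual schema:

open System

// Hypothetical shape of a compliance request; the real fields will differ.
type ComplianceRequest =
    { SessionId   : Guid               // the session this check belongs to
      EntityId    : Guid               // the entity being validated
      Regulations : string list        // which rule sets to check against
      RequestedAt : DateTimeOffset }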

Reactors then respond directly to the discriminated union case:

match event with
| ValidateCompliance request -> 
    // React to compliance check
| GenerateReport request ->
    // React to report generation
| _ -> 
    // Ignore events this reactor doesn't handle

This approach has proven impressively robust. The arbiter cannot produce an invalid event because the type system won’t allow it. Reactors can’t mishandle events because pattern matching is exhaustive. And new capabilities can be added without touching existing code, because every reactor already handles the cases it doesn’t recognize explicitly - the catch-all branch in the example above.

An important aspect is that F#’s discriminated unions serialize directly to JSON and back. The arbiter produces JSON that deserializes into a typed event. No schema validation, no runtime type checking, no defensive programming — the type system itself becomes the validator. If it deserializes, it’s valid. If it’s valid, it can be processed.
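
As a minimal sketch of that boundary, assuming a converter such as FSharp.SystemTextJson's JsonFSharpConverter on top of System.Text.Json (one common way to round-trip discriminated unions; the library choice here is an assumption, not a prescription):

open System.Text.Json
open System.Text.Json.Serialization   // JsonFSharpConverter, from the FSharp.SystemTextJson package

let options = JsonSerializerOptions()
options.Converters.Add(JsonFSharpConverter())

// The arbiter hands over raw JSON text; deserializing either yields a well-typed
// SystemEvent or throws. There is no separate schema-validation step.
let parseEvent (llmOutput: string) : SystemEvent =
    JsonSerializer.Deserialize<SystemEvent>(llmOutput, options)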

The arbiter’s code is therefore agnostic to any specific Reactors, present or future. It knows event shapes, not implementations. Add a new Reactor tomorrow with a new event type, and the arbiter can immediately begin using it.

The orchestration layer collapses into four steps (sketched in code after the list):

  1. LLM produces a typed event as JSON
  2. System deserializes to a discriminated union
  3. Reactors pattern match and respond
  4. Results become new events
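
One turn of that loop, sketched with illustrative helpers (parseEvent is the deserialization sketch above; toEvent stands in for however artifacts are folded back into the stream):

// Illustrative single pass through the loop, not our actual dispatcher.
let runOnce (reactors: (SystemEvent -> Artifact option) list)
            (toEvent: Artifact -> SystemEvent)
            (llmOutput: string) : SystemEvent list =
    let event = parseEvent llmOutput               // steps 1-2: JSON to a typed event
    reactors
    |> List.choose (fun react -> react event)      // step 3: matching reactors respond
    |> List.map toEvent                            // step 4: results become new events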

The breakthrough emerges when extending the system. Adding a new capability - say, user lookup or document analysis - requires no changes to the arbiter itself. You define the tool’s interface as a discriminated union case, register its schema in the tool registry, and implement the reactor. The arbiter immediately sees the new capability and incorporates it into execution plans. The LLM reads the registry, generates the appropriate JSON payloads, and orchestrates the new tool as if it had always existed.
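
As a rough illustration of that extension (every name below is invented for the sketch), adding, say, document analysis is one new union case plus one new reactor. The registry entry isn't shown, and nothing about the arbiter or the existing reactors changes:

open System

// Hypothetical new capability, for illustration only.
type DocumentRequest = { DocumentId: Guid; Uri: string }

type SystemEvent =
    | ValidateCompliance of ComplianceRequest
    | GenerateReport of ReportRequest
    | AuditTransaction of AuditRequest
    | ProcessingComplete of CompletionDetails
    | AnalyzeDocument of DocumentRequest      // the only change to the contract

// The new reactor lives in its own serverless function. Existing reactors are
// untouched: their catch-all branch simply ignores the new case.
let documentReactor (event: SystemEvent) : Artifact option =
    match event with
    | AnalyzeDocument request ->
        Some { ProducedBy = "DocumentReactor"
               Payload    = sprintf "analysis of %s" request.Uri
               Metadata   = Map.empty }
    | _ -> None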

The reason this works is that the arbiter has no control flow at all. There are no hardcoded routes, no imperative logic dictating execution. Every decision is made by the LLM itself, bounded only by the type contracts. It builds execution graphs in real time: generating meaningful steps (find_hotel -> analyze_hotel_feedback), inferring dependencies, retrying with new parameters, and extending workflows as new context arrives.

The arbiter also passes context intelligently between steps. For example, when an entity lookup returns {"entityID": "a91c4118-5c2c-43bc-a286-b38a939b6db5", "entityName": "Foo Bar"}, the arbiter knows that, say, the hotel review sentiment analysis needs the entityID, while reporting may require the entityName. This context flows seamlessly without hardcoded mapping. The arbiter becomes the intelligent glue between structured outputs and structured inputs, replacing brittle dispatch logic with adaptive orchestration.

The beauty of this is that F#’s type system is widely recognized as one of the strongest in the industry. The F# community has long embraced type-driven development and careful domain modeling for exactly this reason. Designing a protocol like MOTHER on top of those ideals highlights how an “idealistic” language choice can pay off in practice.

Functional Programming in an Inherently Chaotic Environment

The challenge with building a system that uses an LLM as a foundational component is that an LLM is a black box. From the outset, we knew we didn’t want mutation, sprawling context windows, or hidden state creeping into the architecture. So we took care to remember a few important functional programming principles.

Functions everywhere: Every Reactor is a function (with an aim to be as pure as possible): Event -> Artifact. Same event, same artifact - to the extent possible with probabilistic models. Variability is tamed by constraining how context is passed: inputs are always derived directly from the original prompt, never from layers of re-summarized summaries. No hidden state. No side effects. Just transformation.

Strive towards referential transparency: Reactors approach referential transparency - same inputs produce structurally similar outputs, with variation captured in audit trails. Any arbiter decision can be replaced by its result. While this is attractive in theory, it has the practical benefit of making debugging possible. When every computation can be substituted with its result, the system becomes explainable.

Immutability: Events are immutable. Artifacts are immutable. Even progress updates are immutable, append-only logs. Nothing mutates, everything is created. This gives us functional purity in practice, while allowing event-sourcing techniques like replays and projections to be applied across any session’s history.

Event Sourcing as the Data Model

When everything is immutable and every function is pure, event sourcing, which can feel like an esoteric technique, becomes a natural and approachable pattern. More generally, as we've embraced event sourcing we’ve found that long-term maintenance and extension are easier than expected: new capabilities arise from the simple addition of small events or extra context, while complex updates are confined to aggregate functions and projections. We're seeing that this is also the case with MOTHER.

The system itself has no mutable state. It has events. State is nothing more than a projection of events at a point in time. Want to know the status of a session? Fold over its events. Want to replay a decision? Replay its events.
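
A small sketch of what such a projection can look like (the status shape here is invented; only the fold matters):

// Illustrative projection: session status derived purely by folding over events.
type SessionStatus = { Dispatched: int; Completed: int }

let applyEvent (status: SessionStatus) (event: SystemEvent) =
    match event with
    | ValidateCompliance _
    | GenerateReport _
    | AuditTransaction _   -> { status with Dispatched = status.Dispatched + 1 }
    | ProcessingComplete _ -> { status with Completed  = status.Completed + 1 }

// "What is the status of this session?" is a fold over its history, never a mutable field.
let sessionStatus (events: SystemEvent list) =
    events |> List.fold applyEvent { Dispatched = 0; Completed = 0 }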

This provides remarkable flexibility: you can dig as deep as you like into the history, or stay shallow and consume only the arbiter’s latest response. Embracing this functional pattern with an LLM as the architectural lynchpin has been a curious expedition - but it has proven just as practical here as in any other context.

Design Decisions That Mattered

Putting the high-level principles together, the most important design decisions can be distilled into the following:

  • Type safety through algebraic data types. Every event is a discriminated union. You can’t create an invalid event. You can’t process one incorrectly. The type system enforces correctness at compile time. Even more importantly, the LLM produces serialized payloads that conform to these ADTs - meaning the central arbiter never reaches into domain types with ad-hoc logic.
  • Event sourcing as the source of truth. The database doesn’t store state; it stores events. Current state is always computed by folding over events. This makes history replayable and future extensions trivial.
  • Referential transparency despite AI non-determinism. We treat each LLM call as referentially transparent by embedding all context in the event. Given the same input (including timestamp, session context, and model parameters), the call is deterministic enough. Variability is captured in the event itself, never hidden in mutable state.
  • Function composition. The central arbiter makes decisions from a contract of tools, much like MCP. New Reactors (serverless functions) can be added without touching any existing code. These Reactors can call an LLM, hit a database, or even trigger other agents. With contracts defined as ADTs on the way in and structured data on the way out, the arbiter knows exactly which functions to call, and in which order, to compose results. It can often use the output of one function as the input of another.
  • Serverless by design. Every component - even the central arbiter - is just another serverless function reacting to events. Elasticity and isolation are the default.
  • Making non-determinism visible. Pure functions are deterministic. LLMs are probabilistic. We squared this circle by being honest about what’s pure and what isn’t: we don’t pretend the LLM is deterministic; we make its non-determinism visible in the event stream. Each arbiter decision carries the model’s reasoning. Every artifact includes generation metadata. The non-determinism becomes data (sketched just below).
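
What "the non-determinism becomes data" can look like on the wire, sketched with illustrative field names:

// Illustrative envelope: every arbiter decision is recorded together with the
// reasoning and generation metadata that produced it.
type ArbiterDecision =
    { DecidedAt   : System.DateTimeOffset
      Model       : string              // which model and version made the call
      Temperature : float               // sampling parameters captured as data
      Reasoning   : string              // the model's stated rationale, verbatim
      Dispatched  : SystemEvent }       // the typed event the decision produced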

Referential Transparency in Practice

The real test of referential transparency: can you replace a function call with its result without changing the system's behavior?

In MOTHER, you can. Replace any Reactor with a lookup table of Event -> Artifact mappings. Replace the arbiter with a recording of its decisions. The system still works. This is only possible by sticking to the ideals that have driven our architecture prior to this effort: no hidden state, no side effects, no mutation. Every function is replaceable by its output.
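
That substitution can even be written down. Since F# unions and records get structural comparison by default, a recorded Reactor is literally a map lookup (a toy sketch, not our tooling):

// Illustrative only: a reactor replaced by a table of its recorded results.
let replayReactor (recorded: Map<SystemEvent, Artifact>) : SystemEvent -> Artifact option =
    fun event -> Map.tryFind event recorded

If the system behaves the same with replayReactor swapped in, the original Reactor had no hidden state to lose.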

Other Considerations

Much of what I’ve described so far has been high-level architecture. But there are a few operational details that matter just as much in practice:

  • Auditability as responsibility. If you’re working in any domain where an LLM decision affects someone’s well-being - whether they pass or fail, get a loan or don’t - I believe it’s irresponsible not to build transparency into the product. Compliance shouldn’t be treated as a box to check after the fact; it should be baked into the design from day one.
  • Ethics as responsibility. Beyond compliance, software creators carry a duty to define ethical standards and protect human dignity. Pedantry here is a virtue: we should be uncompromising in how we frame, constrain, and deploy AI decisions. In our case, we’ve formalized this with an AI charter - a living document we revisit regularly, much like a team agreement. It explicitly defines acceptable usage and reminds us that in times of rapid change it’s all too easy to abandon core values.
  • Data hygiene at the boundaries. With a compositional serverless setup, each Reactor is a hermetic unit of business logic. That makes every function a checkpoint where inputs can be reviewed, vetted, and constrained with intention. One critical aspect is ensuring that no PII or unnecessary data ever enters the agentic pipeline. Each function can independently enforce these boundaries before anything touches the LLM.

The Lesson

The industry is moving fast into new territory, but in many ways is falling back on old programming habits. When faced with the complexity of LLM integration, the default response has been to reach for familiar procedural patterns: sequential execution, mutable state management, and imperative control flow. It's easier now than ever to abandon the sophisticated patterns we've learned over decades - functional composition, immutable data structures, reactive architectures - and opt for the kind of procedural, stateful code that feels immediately comprehensible when working with conversational AI. But we decided agents don't need persistent state if they can have immutable events. They don't need accumulated memory if they can have event sourcing. They don't require us to abandon the elegant functional patterns that have proven their worth in building resilient, distributed systems.

Designing MOTHER has highlighted just how powerful functional programming is, and how rewarding it is to reject the easy fallback to mutability. It's been a reminder that functional programming and reactive system design are relevant in all programming contexts. When you model AI operations as pure functions producing immutable events, something beautiful happens: the entire system becomes comprehensible, debuggable, and composable.

As you can imagine, this was an interesting endeavor. In the ideation phase I often wondered if we were being too idealistic - if we should just fall back on a long-running server and a straightforward MCP setup. But the Wild West of a new paradigm doesn’t come often, and when it does, it can be rewarding to innovate. What we walked away with is both technically impressive and genuinely novel - AI truly choreographing serverless systems using some of functional programming’s most important principles.