Vibe Coding, Clarity Supremacy, and Software Engineering’s Future

“Vibe coding” is having its moment. OpenAI co-founder Andrej Karpathy coined the term in February 2025 for AI-assisted programming where you “forget that the code even exists”. Y Combinator reports that 25% of its latest batch built 95% of their codebase with AI. The narrative is compelling (if not pandering): programming is finally accessible to everyone.

However, I think we may be seeing the opposite. What looks like democratization is actually an evolution - and the hard problems are about to get harder.

A Pattern We’ve Seen Before

When new tools threaten established professions, there’s a predictable pattern. Sociologist Andrew Abbott described it as a contest over jurisdictional boundaries - professionals don’t disappear, they redraw the boundaries of their work and find new ways to apply their expertise [1].

We’re seeing this with vibe coding already. Anyone can generate a simple app by describing it, but they hit walls quickly when that app needs real-world capabilities: database connections, security, scaling, system integration. That’s where engineering knowledge becomes essential - not for writing functions, but for understanding how complex systems work together.

Remember Dreamweaver in the late 1990s? It was supposed to eliminate web developers. Instead, it raised expectations for web quality. Everyone could make a basic website, so professional work had to be exceptional. Web developers became more valuable, not less.

The same thing is happening now with code.

Don Syme and the Clarity Supremacy

Don Syme, creator of F# and architect at GitHub, recently wrote an excellent piece on this shift, the blog post On Natural Language Programming. He talks about moving from “The Symbolic Supremacy” - where all programming must be mathematically precise - to what he calls “The Clarity Supremacy,” where we focus on conveying clarity of intent, whether in precision programming or natural language programming.

Syme argues that natural language programming works well for what he calls “deliberately ambiguous natural language programming” that “rests on a bedrock of non-ambiguous, precise programming.” His proposed Clarity Supremacy acknowledges that different types of programming problems require different approaches to precision and ambiguity.

Think about it like construction. You can be creative with interior design, but the foundation and load-bearing walls had better be perfect. Part of what separates a trained interior designer from a decorator is being trained to recognize those structural details. When someone uses vibe coding for an expense tracker, fine. When they use it for the authentication system protecting user accounts? We have a problem. Clarity of intent is needed in both the precise and the ambiguous contexts - the architect’s and the interior designer’s.

The Tacit Knowledge Gap

Michael Polanyi spoke of “tacit knowledge” - understanding you can’t easily put into words [2]. It’s what lets experienced engineers recognize when a system design will fail under load, spot subtle security vulnerabilities, or know which architectural decisions will haunt you later.

This knowledge doesn’t transfer through natural language prompts because it’s not explicit in the first place. It comes from experiencing system failures, watching requirements evolve, and seeing architectural decisions play out over years.

Security researchers are already finding serious vulnerabilities in vibe-coded applications. When these systems break - and complex systems always break - their creators often can’t fix them because they don’t understand how they work.

Normal Accidents in AI Systems

This is where Charles Perrow’s concept of “normal accidents” becomes relevant. In tightly coupled, complex systems, certain failures become inevitable - not from mistakes, but from interactions too complex to predict [3]. Granted, MCP and other protocols do promote decoupling, but as companies chase the “productivity” that LLMs promise, tightly coupled, vibe-coded systems are inevitable.

Complexity science tells us that as systems grow, the number of possible failure modes increases. Each new component doesn’t just add its own failure possibilities - it multiplies them through interactions with everything else. True, expertly architected systems guard against this complexity with concepts like bounded contexts, encapsulation, and hexagonal architectures; AI-generated code accelerates this complexity growth because it lacks the architectural constraints humans naturally impose.
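To make that constraint concrete, here is a minimal ports-and-adapters sketch in TypeScript. The names (ExpenseRepository, ExpenseTracker) are invented for illustration, not taken from any particular codebase; the point is the shape of the boundary, not the domain.

```typescript
// The "port": the only way the domain logic is allowed to touch storage.
interface ExpenseRepository {
  save(expense: { userId: string; amountCents: number }): Promise<void>;
  totalFor(userId: string): Promise<number>;
}

// Domain logic (inside the hexagon) knows nothing about Postgres, HTTP, or vendors.
class ExpenseTracker {
  constructor(private readonly repo: ExpenseRepository) {}

  async record(userId: string, amountCents: number): Promise<void> {
    if (!Number.isInteger(amountCents) || amountCents <= 0) {
      throw new Error("amount must be a positive number of cents");
    }
    await this.repo.save({ userId, amountCents });
  }
}

// An "adapter" at the edge: swappable without touching the domain.
// An in-memory version like this one doubles as a test harness.
class InMemoryExpenseRepository implements ExpenseRepository {
  private rows: { userId: string; amountCents: number }[] = [];

  async save(expense: { userId: string; amountCents: number }): Promise<void> {
    this.rows.push(expense);
  }

  async totalFor(userId: string): Promise<number> {
    return this.rows
      .filter((row) => row.userId === userId)
      .reduce((sum, row) => sum + row.amountCents, 0);
  }
}
```

Wiring is just `new ExpenseTracker(new InMemoryExpenseRepository())`. Every new adapter - a real database, a payment API, an LLM-backed service - has to pass through the same narrow port, which is exactly the discipline AI-generated code tends not to impose on itself.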

When you have multiple AI-generated components interacting with databases, APIs, and third-party services, you’re creating what complexity theorists call a “phase transition” - where system behavior becomes fundamentally unpredictable. Managing this requires exactly the systematic thinking that engineers develop through training and experience.

The Economic Lock-In

Journalist and data scientist Karen Hao has documented how the dominance of large language models wasn’t inevitable - it emerged from commercialization pressures and investor returns [4]. The path to AGI got redirected toward LLMs not because they were the best approach, but because they could generate impressive demos and attract funding. According to Hao, Sam Altman’s once-in-a-generation fundraising talent and dealmaking prowess played an outsized role in securing the massive funding that made scaling large language models possible.

This creates what economists call “path dependence” - early choices that lock in particular directions even when better alternatives exist. Massive VC funding has pushed us toward computationally expensive AI solutions that prioritize demonstration value over engineering rigor.

Look at Google’s AI search features - near universally derided, yet forced on users anyway. Without their search monopoly, they couldn’t have pushed these degraded experiences. The same dynamic is happening with code generation. Companies with market power are pushing AI solutions not because they’re better, but because they can.

The technical debt from vibe-coded applications is already generating demand for cleanup and professional oversight. As these systems mature and face real business requirements, the market increasingly values people who can bridge the gap between AI-generated code and production-ready systems.

The Clarity Supremacy in Practice

Don Syme’s suggested Clarity Supremacy isn’t about abandoning rigor, but about understanding the levels of precision we use for various programming tasks. In fact, in light of the challenges we’ve discussed (complexity, tacit knowledge, and the economic bulldozing of LLMs), clarity of intent is in some cases more rigorous than precision. How many books have been written on the importance of clean code, self-documenting code, and the like? Perhaps clarity of intent is a skill that requires more expertise than knowing the mechanics of code. Use mathematical proofs for consensus algorithms, type systems for API contracts and domain modeling, but perhaps natural language for user-facing workflows. It’s an engineering skill to judge where precision matters most; intent is never absent from that judgment.
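As a small illustration of “type systems for API contracts and domain modeling,” here is a hypothetical TypeScript sketch (the Payment type and its states are my own invented example): the precise part - a payment can only be in one well-defined state, and every state must be handled - is enforced by the compiler, while the user-facing wording stays deliberately loose natural language.

```typescript
// Domain modeling with a discriminated union: illegal states are unrepresentable.
type Payment =
  | { status: "pending"; amountCents: number }
  | { status: "settled"; amountCents: number; settledAt: Date }
  | { status: "failed"; amountCents: number; reason: string };

// The compiler insists every state is handled; the wording of each message is
// the deliberately ambiguous, user-facing part.
function describe(payment: Payment): string {
  switch (payment.status) {
    case "pending":
      return "We're still processing your payment.";
    case "settled":
      return `Paid on ${payment.settledAt.toDateString()}.`;
    case "failed":
      return `Something went wrong: ${payment.reason}`;
  }
}
```

The discriminated union makes the contract precise where it matters; the exhaustive switch means a missed case is caught by the compiler, not by a reviewer or a prompt.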

Engineering has faced this kind of existential challenge before. When physics revealed that absolute precision was impossible - that even measurement depends on the observer’s frame of reference - it didn’t destroy engineering. Instead, the profession adapted by developing statistical methods, error bounds, and fault tolerance. The loss of absolute certainty led to more sophisticated practice, not less.

Now we face a similar moment with AI-generated code. The comfort of precision (deterministic algorithms) is being replaced by probabilistic systems we can’t fully predict. This is not so much a crisis as it is an evolution.

Barry O’Reilly’s Residuality Theory - which applies complexity science to software resilience - shows that robust systems require actively identifying and addressing the “residues” left by stress events [5]. These are the weak points that only reveal themselves under pressure. LLMs, by their agreeable nature and pattern-matching approach, fundamentally lack the creative skepticism needed for this work. They won't challenge your assumptions, propose adversarial scenarios you haven't considered, or imagine the novel failure modes that emerge when theoretical systems meet messy reality. At least, not in the vigorous way required to identify stressors and their resultant residues.
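One way to picture the residue-hunting work is a stressor-to-component mapping, loosely in the spirit of the incidence matrices Residuality Theory works with. The sketch below is a toy illustration of my own, with invented stressors and components, not O’Reilly’s method; the hard, human part is imagining the stressors in the first place.

```typescript
// A toy stressor-to-component mapping. The stressors and components are
// invented; in practice, imagining them is the creative, adversarial work.
const stressors: Record<string, string[]> = {
  "payment provider outage": ["checkout", "billing"],
  "10x traffic spike": ["checkout", "search", "auth"],
  "regulator demands data deletion": ["billing", "auth", "reporting"],
};

// Count how many stressors implicate each component, to see where residues
// are likely to concentrate and deserve design attention.
const exposure = new Map<string, number>();
for (const components of Object.values(stressors)) {
  for (const component of components) {
    exposure.set(component, (exposure.get(component) ?? 0) + 1);
  }
}

console.log([...exposure.entries()].sort((a, b) => b[1] - a[1]));
// e.g. [["checkout", 2], ["billing", 2], ["auth", 2], ["search", 1], ["reporting", 1]]
```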

In the past, the collapse of absolute precision forced engineering to invent statistical methods, error bounds, and fault tolerance. In the same way, the agreeable limits of LLMs make frameworks like Residuality Theory essential - not as abstractions, but as tools for building truly resilient systems.

The Future

Soon, I suspect, we won’t distinguish between “programmers” and “prompt engineers.” We’ll distinguish between “app builders” using natural language and “systems engineers” orchestrating complex architectures. The latter will command premium salaries.

We’re not witnessing the democratization of programming so much as its layering into different domains with different requirements. Vibe coding will find its place in rapid prototyping and simple applications. But the foundational systems that make modern software possible will continue to require systematic engineering thinking that can’t be reduced to natural language instructions.

This evolution mirrors broader patterns in how technical roles have shifted over the past couple of decades. In the age of startups we've seen less specialization: fewer DBAs, fewer architects, fewer back-end devs; more full-stack devs and an explosion of front-end devs. This could very well be a swing back in the other direction, with UIs becoming an implementation detail - even disposable. The expertise is likely to shift back toward systems and architecture, where the most important thing is intent: the ability to express what's an implementation detail and what's essential. The tools are changing. The fundamental work of engineering - building systems that solve real problems reliably - remains as important as ever.

An Invitation to Intentionality

I'm writing this not to dismiss AI tools - I use them regularly and expect to use them more. Rather, it's to put the current hype into historical perspective. In moments like these, it's essential to be sober-minded about what we're actually experiencing.

More importantly, we need to establish ethical boundaries. There's real danger in giving in to unsustainable hype, both because of our responsibility as product makers and in solidarity with those most vulnerable to AI's disruption - and let's be honest, software developers aren't the most vulnerable.

Software engineers, especially those in senior leadership, have a responsibility to understand this moment historically. We need to know when to say "yes" to an application of AI and when to say "no" - not from fear of change, but from understanding the implications, from recognizing whether a given push comes from an insatiable hunger for profit margins or from a genuine desire to enhance the human experience.

The evolution I've described isn't just a career prediction; it's a call for intentional professional development. If we're moving toward a world where systems engineering becomes the premium skill, we need to prepare for that responsibly. That means investing in the deep technical knowledge, architectural thinking, and ethical frameworks that will matter when the hype cycle ends and we're left with the actual work of building reliable systems.

The tools will keep changing. Our responsibility to build systems that serve people well - that's constant.


References

[1] Andrew Abbott - The System of Professions

[2] Michael Polanyi - The Tacit Dimension

[3] Charles Perrow - Normal Accidents

[4] Karen Hao - The Chaos Inside OpenAI; also Empire of AI

[5] Barry O’Reilly - Residuality Theory