April 1, 2026 · 8 min read · 1,792 words

The Missing Layer

Why the $6 Trillion AI Market Is Missing Its Most Valuable Layer

Redpoint Ventures estimates that AI agents, as they mature from copilots to autonomous systems, will expand the addressable market from $0.5 trillion in software spend to $6.1 trillion in knowledge worker payroll. That figure assumes a specific direction: more autonomy, more value.

But there is a layer of the enterprise where that assumption breaks down.

Every organisation has systems of record for transactions. Systems of engagement for collaboration. Systems of intelligence for analytics. But no system for the thing that connects all three: the decision itself.

This is not a feature gap. It is a missing category.

Four unrelated data points, from a venture capital market report, a behavioural economist, a peer-reviewed meta-analysis, and an architectural engineering teardown, converge on the same conclusion. The judgment layer in organisations is uninstrumented. And the cost of leaving it that way is larger than most people realise.


The market is pricing in full autonomy. The evidence does not support it.

Redpoint Ventures published their 2026 Market Update in March, surveying 141 CIOs and mapping the trajectory of AI spending against existing software budgets. The core framing is an agent maturity curve: copilots operating for seconds, task agents for minutes, workflow agents for hours, autonomous agents for days. At each stage, the addressable market expands. US software spend is roughly $0.5 trillion. Add services automation and you reach $1.2 trillion. Add knowledge worker payroll and the figure reaches $6.1 trillion.

The implicit assumption is directional: more autonomy equals more value captured. The market is pricing this in aggressively. Public horizontal SaaS is down 35% over the past twelve months. The categories most exposed to AI displacement, according to the CIO survey, are sales force automation (83% of CIOs open to replacement), customer service management (56%), and IT service management (55%). These are coordination problems. AI solves coordination problems natively.

$6.1T
Addressable market if AI agents reach full autonomy. The assumption: more autonomy, more value.

But decisions are not coordination problems.

A decision is not a task to be executed faster. It is a commitment made under uncertainty, with trade-offs that depend on context, constraints, and objectives that shift over time. Automating the execution of a decision is straightforward. Instrumenting the judgment that produced it is a fundamentally different problem.

Redpoint’s own data contains the tell. Vertical SaaS, which holds proprietary industry data accumulated over years, is up 3%. Infrastructure, where AI creates demand rather than displacement, is up 2%. The categories that are holding value are the ones with accumulated, domain-specific, irreplaceable data. The judgment behind enterprise decisions is exactly this kind of data. It just does not exist yet in any structured form.


Every industry pays for uninstrumented decisions. Construction puts a number on it.

In March 2026, a16z published an analysis of the architecture, engineering, and construction industry. The headline was about Autodesk’s monopoly and the opportunity for AI-native building design tools. But buried in the data is a more fundamental observation.

85% of construction projects exceed their budgets. Three quarters finish late. Rework, miscommunication, and time spent hunting for project data cost the US construction industry $177 billion annually. And more than 70% of that rework traces back to design errors, not site conditions or bad weather.

$177B
Annual cost of rework, miscommunication, and lost project data in US construction; over 70% of rework traces to design errors

The drawings were wrong before anyone broke ground.

This is a decision instrumentation problem being treated as a tool problem. The structural beam moved on Tuesday. The MEP consultant found out on Friday. But the deeper failure is not the notification delay. It is that nobody recorded why the beam was moved, what constraints were active, what trade-offs were considered, and who was supposed to be consulted. The information state at decision time was never captured.

The construction industry is an extreme example because the costs are physical and visible. A rerouted duct costs real money in real materials. But the same dynamic plays out in every enterprise, invisibly. A procurement decision made in Q1 based on assumptions that changed in Q2. A capital allocation that prioritised one programme over another based on a forecast nobody can reconstruct. A market entry delayed by a committee that cannot articulate what information would have changed their mind.

The difference is that in construction, someone eventually gets a $60 million dispute claim. In enterprise operations, the cost is absorbed into variance reports and attributed to market conditions.


The contrarian opportunity is preserving human judgment, not automating it away.

Rory Sutherland, the behavioural economist and Vice Chairman of Ogilvy, describes a three-phase evolution of AI adoption. Phase One is “the same, worse, but cheaper”: organisations impose AI to reduce headcount, degrading the process in the name of cost savings. Phase Two is “the same but better”: the technology is used to genuinely improve outcomes rather than just cut costs. Phase Three is “reinventing things altogether”: organisations redesign their processes around what the technology actually makes possible.

His analogy is the industrial revolution. When factories replaced steam engines with electric motors, the first generation simply swapped one large central power source for another. The factory floor layout did not change. True productivity gains came only when engineers realised that small electric motors could be distributed throughout the building, and redesigned the entire production flow around that capability.

The parallel to enterprise AI is direct. Phase One is what most consulting firms are selling today: bolt AI onto existing processes to reduce labour costs. The tools do the same thing, worse, but with fewer people. Phase Two is where a handful of companies are arriving: using AI to make existing processes measurably better. But Phase Three, the redesign, requires asking a different question entirely.

The question is not “how do we make decisions faster?” It is “what would it mean to treat decisions as a first-class engineering discipline, with their own infrastructure, their own data model, and their own audit trail?”

Sutherland makes a further prediction: because everyone is rushing toward automation and AI-driven self-service, there will be a genuine business opportunity in doing the exact opposite. In a market flooded with autonomous agents, the scarce resource becomes trustworthy human judgment. The organisations that systematically develop and preserve that judgment will have a structural advantage over those that automated it away.


The research confirms what the market has not yet absorbed.

In 2024, Vaccaro, Almaatouq, and Malone published a meta-analysis in Nature Human Behaviour examining how AI-assisted decision-making performs compared to human or AI decision-making alone. The finding was counterintuitive: across the studies analysed, human-AI combinations did not reliably outperform the best individual performer, whether human or AI.

More striking: AI-generated explanations and confidence scores, the two most common interventions designed to improve human-AI collaboration, produced no statistically significant improvement in decision quality.

This is not a finding about AI being bad at decisions. It is a finding about the interaction model being wrong. The standard approach (give a human an AI recommendation with an explanation and a confidence score) does not work for decision tasks. The interface between human judgment and machine computation is not an explanation layer. It is something else entirely.

What the research did not test is whether a different kind of intervention works. Specifically: retrospective decision debriefs, where the system captures the information state and reasoning at decision time and feeds it back as calibration data over subsequent decisions. This is not explanation (telling someone why the AI thinks X). This is instrumentation (recording what the human knew, assumed, and weighed when they chose Y, so that the next time a similar decision arises, the organisation has a structured basis for comparison).

The distinction matters because explanation is a one-shot intervention. Instrumentation is compounding infrastructure. One informs a single decision. The other improves the decision-making capacity of the organisation over time.


The honest difficulty

None of this is purely a technology problem. Decision instrumentation requires people to externalise reasoning they have never had to articulate. That is a behaviour change, and behaviour change has a cost that no architecture diagram captures. I have written about this in more detail in The Decision Gap: the systems that succeed will be the ones that capture reasoning as a byproduct of the decision workflow, not as a separate documentation step. The hardest part is not building the infrastructure. It is designing it so that using it feels like less work, not more.


The missing category

These four data points come from different industries, different analytical traditions, and different motivations. Redpoint is pricing market opportunity. Sutherland is diagnosing behavioural economics. Vaccaro et al. are conducting empirical research. The a16z team is writing an investment thesis. None of them are talking to each other.

Yet they converge on the same structural gap.

Every enterprise has invested decades in systems that record what happened: ERP for transactions, CRM for customer interactions, BI tools for analytics. The AI market is now investing trillions in systems that predict what will happen: forecasting models, recommendation engines, autonomous agents. But between the record of what happened and the prediction of what will happen sits the moment of commitment: the decision itself.

Who made it. What they knew. What they did not know. What alternatives they considered. What constraints they accepted. What assumptions they relied on. What they would need to see to change their mind.

This information is the most valuable data an organisation produces. It is also the only data that is never systematically captured.
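The fields listed above map naturally onto a structured record. The following is a minimal, hypothetical sketch of such a schema (the field names and the example content are illustrative assumptions, not a reference to any existing product):

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionRecord:
    """Structured capture of the judgment layer: not what happened,
    but what was known and weighed at the moment of commitment."""
    decision: str
    decided_by: str
    known: list[str] = field(default_factory=list)         # information in hand
    unknown: list[str] = field(default_factory=list)       # acknowledged gaps
    alternatives: list[str] = field(default_factory=list)  # options considered
    constraints: list[str] = field(default_factory=list)   # limits accepted
    assumptions: list[str] = field(default_factory=list)   # beliefs relied on
    reversal_triggers: list[str] = field(default_factory=list)  # what would change the decision

record = DecisionRecord(
    decision="Enter the DACH market in Q3",
    decided_by="expansion committee",
    known=["pipeline data from two pilot accounts"],
    unknown=["local regulatory timeline"],
    alternatives=["delay to next year", "partner-led entry"],
    constraints=["no new headcount this fiscal year"],
    assumptions=["pilot conversion rate holds at scale"],
    reversal_triggers=["pilot churn above 20%"],
)
print(json.dumps(asdict(record), indent=2))
```

The schema itself is trivial; the hard part, as argued throughout, is getting records like this produced as a byproduct of the decision workflow rather than as a separate documentation step.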

Context is a scalar. It describes the state around a decision: what data was available, what market conditions existed, what resources were allocated. Judgment is a vector. It describes the force and direction applied to that context: which trade-offs were prioritised, which risks were accepted, which signals were weighted more heavily than others.

Every existing system captures context. No existing system captures judgment.

That is not a feature to be added to an existing platform. It is infrastructure that does not yet exist. And the evidence, from venture capital markets to construction sites to controlled experiments, suggests that the cost of its absence is measured in trillions.

The organisations that build this layer first will compound an advantage that cannot be replicated by acquiring more data, deploying more models, or automating more tasks. Because the scarce resource in an age of abundant computation is not intelligence. It is judgment.

And judgment, unlike data, does not accumulate by default. It has to be instrumented.