What if the System Could Dissent?
Building an infrastructure layer that amplifies the signals humans ignore. On designing systems for disconfirming evidence.
This is Article 2 in a series on decision infrastructure. Previously: The Silence Problem
If we accept that humans are socially wired to seek consensus, even disastrous consensus, we need a counterbalance that isn’t susceptible to social pressure.
We need to stop relying on the “brave maverick” to save us and start building infrastructure that institutionalizes dissent.
In almost every corporate disaster, the data already knew the strategy was wrong. The Qwikster data showed customer sentiment cratering. The Kodak data showed exponential improvements in digital sensors. The signals were there, buried in dashboards, ignored in favor of a cleaner, more politically palatable narrative.
We need an infrastructure layer designed for systemic dissent.
Currently, our business intelligence systems are designed for affirmation. They are built to track KPIs we have already decided are important, and to report progress toward goals we have already set. They answer the questions we ask.
A system designed for dissent would answer the questions we aren’t asking.
Imagine a structured counterweight sitting on top of your data warehouse. It doesn’t care about your quarterly bonus or whether the VP of Product likes it. It only cares about disconfirming evidence.
When a team presents a hockey-stick growth projection based on a new feature, the system shouldn’t just validate the math. It should automatically surface the historical instances where similar features failed to move the needle.
When a strategy deck claims a competitor is irrelevant, the system should surface that competitor’s recent surge in patent filings or talent acquisition.
Here’s a concrete example:
A pharma company’s commercial team projects 40% growth for a new product launch. The system surfaces: “Historical launches in this therapeutic area averaged 18% first-year growth. Three of four comparable launches missed their Year 1 targets. The one that succeeded had 6 months longer lead time with payers.”
The system isn’t saying the projection is wrong. It’s saying: here’s what you’re betting against.
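The pharma example above amounts to a base-rate check against comparable launches. Here is a minimal sketch of that check; the class, the field names, and the sample numbers are all illustrative, not a real dataset or product API:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical record of a past launch in the same therapeutic area.
@dataclass
class HistoricalLaunch:
    name: str
    projected_growth: float   # Year 1 growth target (0.40 = 40%)
    actual_growth: float      # realized Year 1 growth

def disconfirming_summary(projection: float,
                          comparables: list[HistoricalLaunch]) -> str:
    """Surface the base rates a new projection is betting against."""
    avg = mean(c.actual_growth for c in comparables)
    misses = [c for c in comparables if c.actual_growth < c.projected_growth]
    return (
        f"Projection: {projection:.0%} Year 1 growth. "
        f"Comparable launches averaged {avg:.0%}; "
        f"{len(misses)} of {len(comparables)} missed their targets."
    )

# Illustrative comparables, chosen to mirror the example above.
history = [
    HistoricalLaunch("A", 0.30, 0.12),
    HistoricalLaunch("B", 0.25, 0.20),
    HistoricalLaunch("C", 0.35, 0.15),
    HistoricalLaunch("D", 0.20, 0.25),  # the one that beat its target
]
print(disconfirming_summary(0.40, history))
# Projection: 40% Year 1 growth. Comparable launches averaged 18%; 3 of 4 missed their targets.
```

The design choice is that the function returns a statement of base rates, not a verdict: it never says the projection is wrong, only what the comparables did.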
This is not about replacing human judgment with algorithms. A machine cannot understand context, nuance, or vision.
The goal of systemic dissent isn’t to make the decision for us. It is to raise the cost of ignoring reality.
Right now, it is very easy to ignore uncomfortable data because you have to go looking for it. A dissenting infrastructure makes the uncomfortable data impossible to miss. It forces the human leaders in the room to look at the contrary evidence and explicitly say, “We see this risk, and we are choosing to proceed anyway.”
That is a very different conversation than the polite silence that currently guides us toward cliffs.
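One way to make that explicit "we see this risk and are proceeding anyway" moment mechanical is a decision record that refuses approval until every piece of surfaced counter-evidence carries a written rationale. A minimal sketch, with hypothetical names and no claim about how a real system would store these records:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A decision record that blocks approval while counter-evidence is unacknowledged."""
    title: str
    counter_evidence: list[str]
    acknowledged: set = field(default_factory=set)

    def acknowledge(self, index: int, rationale: str) -> None:
        # The rationale is the explicit "we see this risk and proceed anyway" statement.
        if not rationale.strip():
            raise ValueError("an explicit rationale is required")
        self.acknowledged.add(index)

    def approve(self) -> str:
        unseen = [e for i, e in enumerate(self.counter_evidence)
                  if i not in self.acknowledged]
        if unseen:
            raise ValueError(f"unacknowledged counter-evidence: {unseen}")
        return f"APPROVED: {self.title}"

d = Decision("Q3 launch plan",
             ["3 of 4 comparable launches missed Year 1 targets"])
# Calling d.approve() here would raise: the risk has not been acknowledged yet.
d.acknowledge(0, "Our payer lead time matches the one launch that hit its target.")
print(d.approve())  # APPROVED: Q3 launch plan
```

The point of the sketch is the failure mode it removes: silence is no longer a valid path to approval, because the record itself demands the uncomfortable sentence be written down.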
I’m building ChainAlign to create this dissenting infrastructure layer. A system that surfaces what the data knows before the room decides to ignore it.
Next: Decision Coherence