Situational Awareness from the Outside
What the people building with AI see, two years after Leopold's essay.
In June 2024, Leopold Aschenbrenner published “Situational Awareness: The Decade Ahead,” a 165-page essay series that became the defining document of the AGI acceleration movement. It was alarming, and it was widely read by exactly the kind of people who shape technology investment and policy. Leopold’s opening line set the tone: “You can see the future first in San Francisco.”
I read it from Basel, Switzerland. I am not a researcher at a frontier AI lab. I have no access to unreleased models. I was not, by Leopold’s definition, among the “few hundred people” with situational awareness. I am a consultant and founder with two decades of enterprise transformation experience across pharma, chemicals, and supply chain. By the essay’s own framing, I was one of the people who didn’t have “the faintest glimmer of what is about to hit them.”
Two years later, I’d like to offer a different report. Not from inside the labs, but from outside them. Because what I’ve seen from here tells a story that Leopold’s essay didn’t anticipate, and arguably couldn’t, given where he was standing.
I. The insider economy is leaking
Leopold’s essay derived its authority from proximity. He had worked at OpenAI. He knew the people. He had seen the trendlines from the inside. His rhetorical move was: “I am inside the circle, and here is what we see. You should be worried.”
This framing was effective, and it was also a perfect illustration of the problem it claimed to diagnose. The AI industry, particularly in San Francisco, operates on an insider knowledge economy. Who has seen the next model. Who knows the benchmark numbers before release. Who has access to unreleased capabilities. This information asymmetry is not accidental. It is structural, and it serves the commercial interests of the people who maintain it.
I see a smaller version of this in Switzerland. The ETHZ network, brilliant as it is, operates with similar gatekeeping dynamics around AI and robotics. Certain groups set the agenda, certain people are in the room, and outsiders are politely directed elsewhere. This pattern repeats everywhere there is concentrated technical talent and funding. It is human nature. But it is also a strategic choice, and choices have consequences.
The biggest consequence of the Western insider knowledge economy is that it created a vacuum. And in January 2025, DeepSeek filled it.
When DeepSeek released R1, claiming performance on par with OpenAI’s o1 at a fraction of the cost and under an open-source licence, the reaction in San Francisco was shock. But the shock was revealing. The frontier labs had convinced themselves, and everyone else, that you needed $500 million training runs and 100,000 H100 GPUs to compete. DeepSeek proved you needed clever engineering and a willingness to publish.
Leopold’s essay had an entire section arguing that the West needed to “lock down the labs” and maintain its lead through security and export controls. What actually happened was almost the opposite. The most consequential AI development of 2025 came from a Chinese hedge fund’s research lab, built on chips the US had tried to restrict, and released for anyone to use.
But here is the part that matters most, and that the geopolitical framing obscures: DeepSeek’s real impact was not on the US-China rivalry. It was on everyone else. Before January 2025, the mental model in Europe, India, Southeast Asia, and most of the world was: “We cannot play this game. The compute requirements are too high. The talent is in SF. We should focus on regulation and hope for the best.” DeepSeek shattered that. It proved that algorithmic ingenuity could substitute for brute-force hardware. It proved that the bottleneck was not compute alone. It was the belief that compute was the only bottleneck.
Since then, something shifted. India is funding sovereign AI initiatives with actual money, not just press releases. France has Mistral genuinely competing. Switzerland has Apertus building foundation models. The EU is discussing open-source AI as strategic infrastructure rather than just a target for regulation. If you look at the Hugging Face model directory today, the volume of capable, open-source Chinese models is extraordinary. This openness, whatever its strategic motivations, has created possibility for people who were previously locked out.
The insider economy is leaking. That is the story of 2025, and Leopold did not see it coming.
II. The real value is in the road cars
Leopold’s essay is obsessed with the frontier. The biggest models. The most powerful clusters. The race to AGI. This is understandable given his vantage point. When you work at OpenAI, the frontier is all you think about.
But most economic value is not created at the frontier. It never has been.
Consider a parallel. NASA’s research budget produced extraordinary innovations in materials science, thermal management, and miniaturisation. That value absolutely showed up in space: the success of the missions was the point. But the value to society came from the transfer: in cooking vessels, running shoes, and mobile phones. Formula 1 pushes the boundaries of aerodynamics, materials, and powertrain engineering. Those innovations matter at 300 kph; that is where they prove themselves. But they matter to civilisation at 60 kph, in the road cars that inherit them five or ten years later.
The same transfer is happening in AI right now, and it is happening much faster than the space or automotive analogies suggest. The GPU arms race, the trillion-dollar clusters, the race to train models on ever-larger datasets: all of this is the F1 programme. The real economic transformation is in the road cars.
Here is what I mean concretely. Machine learning models that two years ago required specialised infrastructure and weeks of training can now be trained in hours on commodity cloud hardware. Inference costs for equivalent capability have dropped roughly a thousandfold in under two years. A mid-market pharmaceutical company can now run the kind of molecular simulations and demand forecasting models that were the exclusive domain of the top five pharma companies. A supply chain team at a mid-sized manufacturer can deploy optimisation models that would have required a dedicated data science team of ten.
The obsession with AGI, with whether models can “outpace college graduates” or “match PhD-level reasoning,” misses the point for 99% of the world’s organisations. They do not need AGI. They need last year’s models deployed well against their specific problems. The models are good enough. The bottleneck has moved to domain expertise, organisational readiness, and the ability to frame the right questions.
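To make “deployed well” concrete, here is a minimal sketch of what running one of last year’s open-weights models looks like today. It is illustrative rather than prescriptive: it assumes a machine with a single consumer-grade GPU, the Hugging Face transformers and accelerate packages, and a model name that is just one example among the open-weights releases now on the Hub.

```python
# Minimal sketch: an open-weights model answering a domain question on commodity hardware.
# Assumes `pip install transformers accelerate torch` and roughly 16 GB of GPU memory;
# the model below is one illustrative open-weights release, not a specific recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",  # an openly licensed distilled model
    device_map="auto",   # place weights on the GPU if one is available
    torch_dtype="auto",  # pick a memory-efficient precision for the hardware
)

prompt = (
    "A warehouse sees weekly demand of 1,200 units with a 3-week supplier lead time. "
    "What safety stock would you hold, and what assumptions does your answer depend on?"
)
result = generator(prompt, max_new_tokens=256)
print(result[0]["generated_text"])
```

The point is not this particular model or prompt. It is that everything in the sketch is openly downloadable and runs on hardware a mid-sized firm already owns; the hard part, as above, is knowing which questions are worth asking.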
China is making enormous practical progress here. While the Western discourse is fixated on who will build AGI first, Chinese companies are deploying AI across logistics, manufacturing, agriculture, and public services at remarkable scale. One can have legitimate concerns about surveillance applications, but the uncomfortable truth is that Palantir’s deployment across US government agencies operates on much the same logic. The difference is branding, not behaviour.
The frontier matters. But the road cars are where the civilisational impact actually shows up.
III. The coding bottleneck fell, and nobody planned for it
This is the part that is hardest to see from San Francisco, and that I can only describe from personal experience.
Leopold’s essay frames AI development as a race between a small number of powerful actors: the US labs, the Chinese government, and eventually, a Manhattan Project-style national effort. The implicit assumption throughout is that the number of people who matter in this story is small. A few hundred with situational awareness. A handful of labs with the compute. Governments that will eventually step in.
What he did not anticipate, and what I believe matters most from the past two years, is what happened to people like me.
In the past year, using tools like Claude Code and other AI-assisted development environments, I have built things I could not have built before. Not toy projects. A Decision Intelligence platform with a Socratic Inquiry Engine that learns from rejection. Multiple deployable products with serious architectural complexity.
A year or two ago, any one of these would have required a funded engineering team of five to ten people, half a million dollars in runway, and proximity to a hiring market. I built them from Basel, working solo, with domain expertise as my primary input and AI-assisted coding as the multiplier.
I know I am not unusual. There are thousands of people around the world having this same experience right now. People with deep domain knowledge in medicine, law, finance, logistics, and agriculture, who understand their problems intimately but have lacked the engineering capacity to build solutions. The coding bottleneck kept them on the sidelines. That bottleneck is falling fast.
And here is the part that makes this an unintended consequence rather than a planned feature: the companies building these tools did not set out to democratise software creation. They set out to build AGI and to sell subscriptions and API access along the way. The fact that their intermediate products enable someone like me to build complex systems is a side effect of their frontier ambitions. It is the road car that fell off the F1 programme.
I was recently told, in the context of an application for a research fellowship, that the ambition of what I was building was surprising for a solo effort. That reaction tells me something. The evaluators are still calibrated to a world where one person cannot build systems of this complexity. They have not updated their priors. But the world has moved.
Now, I want to be honest about the limits. The gap between “I can build a working system” and “I have a scaled, enterprise-grade product with customers” is still enormous. That gap is about sales, trust, support, compliance, and team-building, not about code. I am not claiming that AI coding tools turn consultants into unicorn founders. But they do turn domain experts into builders, and that is a shift with very large implications.
Leopold was right about the trajectory of capability improvement. He was right about the industrial mobilisation. He was right that governments would start treating AI as a national security concern. The Stargate Project, the energy emergency declarations, the geopolitical posturing: all of this is playing out roughly as he described.
But his essay was written from a position of concentrated power looking outward, and it could only imagine futures that concentrated power further. A Manhattan Project for AGI. Export controls that maintain Western dominance. A small number of actors deciding the fate of the technology.
What actually happened is messier and, I think, better. The technology leaked. The costs fell. The tools reached people they were not designed for. And now, alongside the trillion-dollar race for AGI, there is a quiet, distributed revolution of domain experts becoming builders, of mid-sized companies deploying last year’s models to solve this year’s problems, of countries and individuals who were told they couldn’t play discovering that they can.
“You can see the future first in San Francisco,” Leopold wrote. Maybe. But you can build it from anywhere.