Vasa Syndrome: We’ve Been Sinking the Same Ship for 400 Years
- Святослав Щербатюк

- Apr 22
- 7 min read
"'But he has nothing on!' the whole town cried out at last. The Emperor shivered, for he suspected they were right. But he thought, 'This procession has got to go on.' So he walked more proudly than ever."
Hans Christian Andersen, “The Emperor's New Clothes” (1837)
In 1628, the “Vasa”, the most powerfully armed warship of its day, sank twenty minutes into its maiden voyage. It never fired a single shot in battle. The cause wasn't a storm. Nor was it enemy fire. It was a king who didn't understand shipbuilding giving orders to the people who did.

Today, the Vasa sits in a museum in Stockholm. Nearly 400 years old, but almost perfectly preserved by the cold Baltic water. A silent monument to what happens when authority overrides real expertise. Millions of people visit it every year, business schools teach it, product management frameworks are literally built around this event, and yet…
A note on intent from the author. This article is not a case against AI. It is a case against bad product decisions wearing “AI digital transformation” as a costume. The Vasa ship, a toothbrush/trash can with integrated machine learning, and the enterprise that eliminated its support team overnight share the same root failure: nobody asked whether the problem actually required this solution, or whether the organization was ready to live with the consequences. Product management exists precisely to answer those questions before “the ship leaves the harbor”.

What concerns me, and what I see with increasing regularity, is that the people whose job it is to ask those questions are being bypassed or ignored entirely. Boards are making workforce decisions in a single quarter based on what a competitor announced, without considering whether that competitor faces the same circumstances. Investors are demanding headcount cuts before the results are in. And the product managers and delivery managers in the room are nodding along, or else… The procession has got to go on no matter what.

This article is for the people who suspect the emperor has nothing on and are trying to find the words to say so.
In 2022, Klarna, one of the most valuable fintech companies in Europe, fired 700 customer service employees and replaced them with an AI chatbot. The announcement was triumphant and the numbers on the slide decks looked clean, but by early 2025, customer satisfaction had collapsed. The CEO publicly admitted the company had "focused too much on efficiency and cost" and that "the result was lower quality." Klarna is now rehiring human agents.
Salesforce offers an even sharper illustration of the same pattern, and a more revealing one, because the contradiction is on the record in the CEO’s own words. In August 2025, Marc Benioff publicly dismissed concerns about AI-driven white-collar layoffs. Weeks later, he announced the company was “rebalancing” its workforce by replacing roughly 4,000 customer support employees with AI agents, cutting the support division from 9,000 to 5,000 people. The announcement was framed as a win enabled by Agentforce, Salesforce’s own AI platform. Months later, senior executives admitted the company had overestimated AI’s readiness for real-world deployment and moved too quickly. Salesforce is now rehiring. The company disputes the word “layoffs”, calling it a redeployment, but the outcome is the same: institutional knowledge was lost, customer trust was damaged, AI could not fill the gap, and people are being brought back. The most revealing detail is not the reversal itself. It’s the sequence: Benioff said mass AI layoffs weren’t happening. Then he announced them. Then he acknowledged they had gone too far. All of that happened within a matter of months.
The Ship Is Still Sinking
Klarna is not an outlier. According to Forrester, 55% of employers now report regretting AI-driven layoffs. A Careerminds survey of 600 HR professionals, published in early 2026, found that two-thirds of companies that cut jobs due to AI are already rehiring, and more than a third spent more on restaffing than they saved from the cuts. Gartner predicts that by 2027, half of all AI-attributed layoffs will be reversed.
Over half of the companies that rehired did so within six months of the original cuts: not years later, but months.
Research published by the National Bureau of Economic Research, based on 6,000 executives across the US, UK, Germany, and Australia, found that the vast majority of those leaders see little to no measurable impact of AI on their operations. The Harvard Business Review summarized it plainly: companies are laying off workers because of AI's perceived potential, not its actual performance.
Gustav II Adolf, the King of Sweden who commissioned the Vasa, never set foot on the ship. He issued his orders from a distance, overriding engineers, expanding the gun decks, and adding cannons that the hull could not support. The stability test failed, but the ship sailed anyway. Nobody told the king what would happen, not because they didn't know, but because the chain of command between a frightened shipwright and a king fighting a war had no room for pushback.
The Task Is Not the Job
Jensen Huang, the CEO of Nvidia, the company whose chips power the AI revolution, recently said something that, in my opinion, deserves more attention than it received.

He pointed at radiology. By 2019, AI models were matching or beating human radiologists at reading specific kinds of medical scans. Every forecast said radiologists were finished. “The field will be automated, the experts made redundant,” they said. It was, by any technical measure, one of the clearest cases for displacement.
What has happened instead is that the number of radiologists has grown. There is now a global shortage.
Huang's explanation is precise: "The purpose of your job and the tasks and the tools you use to do your job are related. Not the same." Reading a scan is a task. Diagnosing disease and deciding what to do about it is a purpose. AI handled the task. The purpose didn't shrink. It compounded. Faster reads meant more patients seen. More patients meant more diseases caught. More diseases caught meant more demand for the people who decide what happens next.
But then Huang said something else, something about what the fear narrative did to the field independently of the technology: people heard radiologists were finished and walked away. Medicine bled talent it could not afford to lose. Not because the work vanished, but because forecasts said it would.
"The alarmist warning went too far," Huang said, "and it scared people from doing this profession that is so important to society. It did harm."
Two Ways to Sink a Ship
This is where the Vasa story becomes more uncomfortable than most people realize. Because there are not one but two ways to repeat the mistake.
The first is the King Gustav failure: uninformed authority making decisions that experts know are wrong, driven by competitive pressure, ego, and the desire to impress. This is Klarna firing 700 people based on a chatbot’s promise. This is Salesforce replacing 4,000 support employees weeks after its CEO dismissed concerns about AI-driven layoffs. This is boards of directors, many of whom, according to Time Magazine's research, barely use AI themselves, pressuring CEOs to cut 20% of headcount on the assumption that the technology will cover the gap.
The second is subtler: the failure of induced panic. The story of the sinking ship scares people away from shipbuilding entirely. Just as prospective radiologists changed career paths, junior developers today are being scared out of entering a field that AI will almost certainly make more valuable, not less. Entire pipelines of institutional knowledge are at risk of disruption, not because the technology demanded it, but because the narrative around it did.
What connects both failures is the same root cause: decisions made on narrative rather than evidence. Gustav acted on competitive ego. Modern boards act on investor pressure and AI hype. Prospective radiologists and junior developers act on fear. None of them looked at the actual data.
Why We Keep Doing This
There is a library's worth of material on the Vasa syndrome; business schools have been teaching it for decades, and PMI has frameworks for it. Entire books exist on the subject of stakeholder misalignment and authority without expertise. And yet here we are, watching the same movie with a different cast and costumes.
I don’t think the problem is a lack of knowledge: the case studies exist, the warnings exist, even the data now exists in real time. Klarna’s reversal is documented, Salesforce’s public reckoning is on the record, Forrester’s regret statistics are published, Gartner’s predictions are in the public domain.
I believe the problem is structural. The people making decisions are not the ones who will have to deal directly with the consequences. The board pressuring the CEO to cut headcount will not be the one rebuilding the team eighteen months later at a higher cost. The CEO announcing AI transformation on an earnings call will not be the customer service manager trying to explain to a frustrated customer why the bot gave them the wrong answer three times.
Gustav was at war; he didn't see the ship sink, and he wasn't on it. The shipwrights who knew it would fail watched it go down from the dock. This is the oldest structural problem in organizational decision-making: the consequences of a bad call land hardest on the people who had the least say in making it.
The Question We Keep Avoiding
Huang's Nvidia is growing its engineering headcount. The company building infrastructure for the AI revolution is hiring more of the people who write software, not fewer. "I wanted my software engineers to solve problems," Huang said. "I didn't care how many lines of code they wrote." That's the task versus purpose distinction in a single sentence. And it comes from someone who understands the technology from the inside, not from a distance, not from a board presentation, not from a competitor's announcement.
The Vasa Museum in Stockholm is one of the most visited attractions in Europe. The ship is extraordinary – it is almost perfectly preserved, a genuine wonder of craftsmanship. But I keep coming back to one thought every time I see photographs of it.
The engineers knew that the stability test had failed, and everyone involved understood that the ship would not survive open water. And yet not one person in that entire chain of command found a way to say: “This ship should not sail today!”
That is the question 400 years of documentation has never fully answered. Not “How do we avoid building a bad product?” We have numerous frameworks for that.
The real question is: “What would it actually take for someone to say no to the king?”


