
LEAN TECH VOICES: The 93-7 AI mistake
COLUMN – In this new column, three lean and technology experts respond to the same pressing question shaping today’s tech/AI debate. This month, they discuss the mistake companies make every time a new technology emerges.
Columnists: Fabrice Bernhard, Sandrine Olivencia, and Eivind Reke
THE QUESTION
In a recent article, Deloitte’s Bill Briggs wrote that companies spend 93% of their AI investment on technology and just 7% on people. As a result, trust in AI agents is declining and “shadow AI” is filling the gap. From a lean perspective, what does this 93–7 split say about how organizations think and operate—and why do we keep repeating this pattern every time a new technology emerges?
THE ANSWERS

This piece of research reveals how predictably organizations respond to new technologies, and it should give us pause. Time and again, we rush toward the tool and forget the human system it lands in. AI, it turns out, is following exactly the same path as previous technological revolutions.
We have been here before. At the beginning of the industrial revolution, machines were deployed at scale with little thought for how humans would interact with them. Early automated looms, for instance, worked imperfectly and required people—often children—to constantly supervise them, intervene when threads broke, and perform dangerous corrective actions while machines kept running. The machine came first; people had to adapt to it. Productivity increased, but at a profound human cost.
Today’s AI deployments look uncomfortably similar. Organizations are rolling out AI systems that work “well enough” most of the time, then assigning humans the task of supervising them, correcting errors, and absorbing the consequences when things go wrong. Instead of AI working for people, people are being asked to work for AI—proofreading, validating, and fixing outputs that are often boring to check and risky to trust. This is not augmentation; it is displacement of responsibility without displacement of accountability.
In the early 1920s, Sakichi Toyoda came up with a radical invention: an automatic loom that stopped on its own whenever a thread broke. This made operators’ lives easier and freed them to do other tasks while the loom was running; when a thread broke, they could come and fix it at their own pace, without defects piling up in the meantime.
This is the origin story behind Jidoka, the automation philosophy that shaped Toyota. Building on it, Lean Thinking offers a radically different perspective on how to deploy AI, one in which machines augment humans—not the other way around. Automation should not eliminate people, but increase their autonomy, capability, and ability to improve the system. Jidoka embeds human judgment into automated systems rather than pushing humans downstream as simple quality inspectors.
Seen through this lens, the 93–7 split between investment in technology and investment in training is a mindset problem. It signals that most organizations ask themselves “What can we replace with AI?” rather than “How do we redesign work so people become more effective with AI?” The result is predictable: imperfect AI systems, alienating human roles, accountability gaps, and a growing mistrust that drives people toward unsafe workarounds.
A lean approach treats AI as a manufacturing engineering problem. It focuses on shifting quality left, building systems that detect issues early, and automating quality checks—so humans can focus on design, problem-solving, and improvement. Unsurprisingly, such an approach delivers far greater productivity gains. Engineers who leverage AI this way not only get better results upfront; they can also continuously improve the system by analyzing every failure. This is a sure way to outcompete those who try to replace humans with AI and end up burdening people with more corrective work to compensate for AI’s shortcomings.
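To make this concrete, here is a minimal sketch, in Python, of what a jidoka-style quality gate around an AI step might look like. Everything in it is illustrative rather than a reference implementation: generate_draft stands in for whatever AI call you use, and the checks are placeholders for whatever “good” means in your context. What matters is the structure: the system stops and surfaces doubtful output instead of passing it downstream, and every failure is logged as raw material for improvement.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("jidoka")


@dataclass
class QualityGate:
    """Jidoka-style gate: run automated checks on AI output and
    stop the line (flag for a human) instead of passing defects on."""
    failures: list = field(default_factory=list)  # raw material for kaizen

    def check(self, prompt: str, output: str) -> bool:
        # Placeholder rules; real checks would encode your definition of "good".
        checks = {
            "non_empty": bool(output.strip()),
            "not_too_long": len(output) < 2000,
            "no_placeholder": "TODO" not in output,
        }
        failed = [name for name, ok in checks.items() if not ok]
        if failed:
            # Stop-the-line: record the failure with its context so the team
            # can later improve either the AI step or the checks themselves.
            self.failures.append({"prompt": prompt, "output": output, "failed": failed})
            log.warning("Output held for human review; failed checks: %s", failed)
            return False
        return True


def generate_draft(prompt: str) -> str:
    """Stand-in for any AI call (an API, a local model, etc.)."""
    return f"Draft answer to: {prompt}"


if __name__ == "__main__":
    gate = QualityGate()
    prompt = "Summarize this incident report"
    output = generate_draft(prompt)
    if gate.check(prompt, output):
        print(output)  # flows downstream only if the checks pass
    # gate.failures is the improvement backlog: each entry is a concrete
    # case to study, the equivalent of a stopped loom.
```

In a real deployment, the interesting work is in the checks themselves: making them cheap, automatic, and close to the point where the output is produced, so that failures are caught early rather than discovered by a tired human proofreader.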

Can you imagine an orchestra conductor who believes that good music is 93% about the quality of the instruments and only 7% about the musicians? They invest in sophisticated violins and shiny brass, while keeping rehearsal time to a minimum. When the music sounds off and the orchestra loses coherence, they do not question their approach. They simply look for even better instruments.
Sound unrealistic? Well, according to the Deloitte study, this is exactly how many organizations approach technology today.
The consequences of this imbalance show up quickly in everyday work. When technology is deployed without sufficient investment in understanding how it should be used, maintained, and improved, people adapt it on the fly to get their jobs done. They create workarounds and parallel practices that make sense locally but accumulate technical debt over time. This is not a problem of discipline or resistance to change; it is a design failure. The technology has been introduced without being properly connected to how work actually happens, to what customers expect, and to the real trade-offs people face every day.
A persistent misconception sits behind this behavior: investing in technology feels reassuring because it is tangible, measurable, and fast, while building knowledge and human capability feels slow, uncertain, and much harder to justify with familiar indicators, such as production volumes, utilization rates, and short-term ROI. Each new technology wave reinforces this bias. AI simply makes it more visible because it comes wrapped in a particularly seductive promise: intelligence in a box. Under pressure to “move now,” leaders fund what looks like a shortcut and postpone the harder work of turning technology into a reliable part of how decisions are actually made.
Looking at this 93–7 split from a lean perspective reveals a mass-production mindset applied to knowledge work. The focus shifts to “more output, faster,” which translates into more content, more dashboards, more automation, and more activity, even when the underlying knowledge is thin and intent is unclear. Lean leadership is built for the opposite challenge. It places clarity of intent, learning, and care at the center, because those are the conditions under which both humans and machines can contribute to better decisions. A lean environment forces different questions from the start: What are we trying to achieve? What do we need to learn to get there? What do customers truly value? What does “good” look like, and how will we know when outputs are drifting? Technology becomes an enabler for answering these questions. Without judgment, however, tools remain just tools, much like instruments without musicians.
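One of these questions, “how will we know when outputs are drifting?”, has a very lean answer: treat AI output quality like any other process and watch it with simple control limits. The sketch below assumes you already score each output in some way (automated checks, human spot-checks, user feedback); the function name, window size, and thresholds are illustrative placeholders, not a prescription.

```python
from statistics import mean, stdev


def drifting(scores, baseline, window=20, sigmas=3.0):
    """Simple control-chart check: flag drift when the recent average
    quality score falls below the baseline's lower control limit."""
    mu, sd = mean(baseline), stdev(baseline)
    lower = mu - sigmas * sd          # lower control limit
    recent = scores[-window:]          # most recent observations
    return mean(recent) < lower


# Illustrative usage: baseline scores from a period the team trusted,
# followed by a run of outputs that has quietly degraded.
baseline = [0.92, 0.95, 0.91, 0.94, 0.93, 0.96, 0.92, 0.94]
scores = baseline + [0.81, 0.79, 0.83, 0.80, 0.78] * 4
print(drifting(scores, baseline))  # True: stop and investigate
```

The technique is nothing more than a control chart applied to AI output: the tool does not decide what “good” means, it only tells the team when reality has moved away from it, which is exactly where human judgment comes back in.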
This is where the conductor analogy becomes practical. An orchestra improves when musicians rehearse, listen to one another, correct together, and converge on a shared interpretation of the score. They know their instruments intimately and they know how to adapt in the moment when something does not behave exactly as expected. In the same way, organizations can only function and scale by creating the conditions for sound decision-making. Technology can support this work, but it cannot replace it. When technology is treated as the tool of an artisan, rather than as a substitute for thinking, it becomes possible to develop artisanship at scale: people who master their craft, understand their tools, and know how to make the right call when situations are ambiguous. AI can dramatically augment the work of these people, but it cannot replace their core capability, which is judgment anchored in reality.
The real advantage in the AI age will not come from producing faster. It will come from building and curating high-quality knowledge across teams, so that AI operates on clear intent, reliable context, and explicit responsibility.

The mental model of mass production, which—whether we like it or not—permeates Western management thinking, has always been geared toward automating as much work as possible to achieve efficiency of scale, putting employees to work only where we cannot get a machine to do it. This way of thinking, which largely treats people as replaceable parts and a cost to cut, was further reinforced by the shareholder primacy doctrine Milton Friedman introduced in the 1970s. Under that doctrine, the 93/7 divide makes perfect sense, because the endgame is capital working for capital. For the mental model of Lean Management, however, it is pure insanity. Lean believes in serving the greater good with whatever product or service an organization delivers and sees technology as a tool for people to use to create more customer value. From a lean perspective, the right split should be 50/50: whatever we invest in new technology should also be invested in people.
Toyota has never been afraid of new technology, and lean thinkers and practitioners should not be either. Whether you are a CEO, plant manager, lean coordinator, or coach, the trick is to use new technologies in a smart way, leveraging the organization’s know-how to do just that. Contrary to its public persona, Toyota is much more an early adopter of new technology, both in production and in product development, than a late follower. This becomes evident when we consider that it installed its first IBM mainframe computer in the 1960s to control the production line, developed its internal CAD/CAE system in the early 1980s (and has been kaizening it ever since), and ran its first experiments in what we would now call the “Digital Twin” space in the late 1990s and early 2000s. In terms of production technology, Toyota has a long history of modifications: it buys manufacturing equipment, adapts it to its own needs, sometimes improving upon it, and eventually develops and builds its own.
The other important thing Toyota does is invest equally in people development, because in the end it is people who will operate the technology to make value creation easier in all functions, not just plant operations. This is clear if we look at its meticulous onboarding and training process for newly hired engineers (described in detail in Designing the Future), which lasts two whole years. If you are a serious lean thinker and practitioner, 50/50 should be how you approach any new technology: equal investment in people and tech, with the aim of creating better and more sustainable products and services for your customers.
Read more


FEATURE – The gemba tells us more than we think. The authors discuss what we need to look at during our walks to understand the impact of non-manufacturing functions on the overall process.


FEATURE – Lean is a people-centric system for learning that acts as an alternative to traditional management and financial capitalism. It represents the best strategy a company can adopt to meet the needs of the future.



INTERVIEW – At the recent UK Lean Summit, we met the Head of Student Services of an English high school. We asked her about the interesting work the school is doing to improve the delivery of education to students with special needs using lean thinking.


CASE STUDY – The Covid-19 pandemic has accelerated the application of Virtual Healthcare across the world. In South Australia, this has been implemented in urgent care.


INTERVIEW – AI will shrink companies and workflows, challenging human relevance. In a world of accelerating technological disruption, Lean Thinking and adaptability are more important than ever.


FEATURE – This year, PL will try to understand what the future of work looks like in a world with AI. To kick us off and make us think, we publish an article that is the result of a one-hour conversation between a human and a machine.


OPINION – DeepSeek’s AI innovations can be compared to the disruption Toyota brought to automotive, showcasing efficiency, problem-solving, and value-driven adaptability over resource-intensive methods.


CASE STUDY – A Scaling Kaizen initiative at Veolia Water Information Systems engaged 45 teams in Lean IT practices, improving delivery, incident reduction, and fostering talent.