
LEAN TECH VOICES: AI investments that don’t bear fruit
COLUMN – In this column, three lean and technology experts respond to the same pressing question shaping today’s tech/AI debate. This month, why you are not seeing any return on your AI investment.
Columnists: Marie-Pia Ignace, Erasto Meneses, and Theodor Panayotov
THE QUESTION
Recent research cited in Harvard Business Review shows that only about 1 in 50 AI investments deliver transformational value, and most struggle to show measurable return. Lean teaches us to always start by understanding the purpose of what we are doing; in other words, “what problem we are trying to solve.” Is lean value-driven investment planning a missing discipline in today’s AI strategies? What are these AI strategies lacking and what can lean tell us about better aligning tech investment with customer value?
THE ANSWERS

Looking at this piece of research, it is tempting to conclude that what is missing from many AI strategies is a discipline familiar to us lean thinkers: value-driven investment planning. Lean Thinking has long insisted that any improvement effort should begin with a clear understanding of purpose—what problem we are trying to solve for the customer.
In this sense, the answer seems straightforward. Yes, lean value-driven investment planning is often missing from today’s AI initiatives. But if we stopped there, we would miss a deeper, perhaps more uncomfortable question: why are we seeing this pattern again?
We can observe the same dynamic play out when organizations invest in technologies far less novel than AI. Traditional digitalization projects—from ERP implementations and workflow automation to large-scale process digitization—have often struggled with the same problem. In many cases, the value to the customer is only loosely defined, operational impact remains uncertain, and measurable returns are difficult to demonstrate after the fact. In other words, the lack of value-driven investment planning did not start with AI. AI simply makes the consequences more visible.
From a lean perspective, this is surprising, because the discipline of defining value lies at the very heart of our way of thinking. Since The Machine That Changed the World, Lean has emphasized the importance of developing a deep understanding of customer value and of organizing the work around it.
The idea that any investment should be grounded in a clear understanding of value is not new to us, but there is a paradox worth acknowledging. While Lean Thinking has developed rich practices for understanding work, improving processes, and developing people, historically, as a movement, we have engaged much less with the design and evolution of technological systems. There are important examples—Toyota’s integration of production systems and engineering practices, Amazon, Intuit, Qonto—but compared with the scale of digital transformation in modern organizations, Lean’s interaction with technology remains relatively limited.
In practice, this has often led to a structural separation of the two inside companies. Ops and lean teams tend to focus on processes and management systems, while tech teams typically deal with platforms, data, and software. Collaboration certainly occurs, but the two communities have rarely built a shared discipline for thinking about value together. When was the last time both teams went to the gemba together? For many organizations, the answer is probably: not recently. Yet that would be the very first step.
The rise of AI exposes the limits of this separation. AI increasingly influences decision making, knowledge creation, and the daily organization of work. Aligning these systems with customer value, therefore, cannot be done by technologists alone.
Seen in this light, the challenge raised by the HBR statistic may not simply be that AI strategies lack lean discipline. It may also reflect the distance that has long existed between lean practitioners and technology experts.
This creates an opportunity. If AI forces organizations to build bridges between two communities, it may open the door to a more robust approach to technological investment—one grounded both in a deep understanding of work and in technical expertise. In many ways, it’s a classic change challenge. It requires new practices, shared tools, and sustained collaboration between disciplines that have evolved separately. Lean itself, however, was developed precisely as a system for leading and sustaining change. Reconnecting lean practitioners with technologists may be less a departure from Lean Thinking than a natural extension of it.

This feels like déjà vu. Years ago, during my time at Toyota, I learned a lesson that has remained with me ever since: technology rarely fails; leadership clarity does. Since then, we have witnessed successive waves of technological advances hitting organizations: ERP/CRM, Six Sigma, Digital Transformation, Agile, and so on. Each arrived with a sense of urgency and was presented as the ultimate competitive advantage. Yet a recurring pattern emerged: the conversation almost always started with the tool rather than the problem.
Recently, I joined a boardroom discussion about Generative AI. The energy in the room was palpable, and the presentations impressive. At one point, an executive asked, “Where can we deploy this quickly?” Another asked, “How do we scale this across the enterprise?”
I asked a different question: “What is the most difficult decision our customers are struggling with today?”
Silence.
It’s not that the people in the room didn’t care. It’s that they weren’t used to going to see for themselves (the lean practice of genchi genbutsu). They hadn’t mapped a value stream, nor had they defined the performance gap they needed to close.
When tools come first, organizations typically create “motion” rather than “movement”: pilots proliferate, dashboards multiply, and automation expands, but the customer experience doesn’t improve.
This outcome is increasingly common as more organizations scramble to embrace AI. We could refer to this approach to the new technology as “slop.” This is the same term used to describe the cheap, high-volume AI-generated content inundating our social media platforms, but in this context, we can use it as an acronym for Shallow, Lazy, Off-Purpose. The meaning doesn’t change.
At first, a “slop approach” to AI looks like the smart thing to do. It writes summaries, generates forecasts and produces recommendations, but it is not actually grounded in reality. It is disconnected from operational friction, real-world trade-offs, and the actual constraints customers face. This is not a flaw inherent to AI; it is the byproduct of applying powerful models to poorly defined problems.
In Lean, our advantage was never just operational efficiency; it was discipline. Before proposing a solution, we ask ourselves, “What problem are we trying to solve?” and “Where is the evidence?” We go to the gemba, we observe, we map the flow, we understand variation, and we experiment in small cycles. Technology gets integrated later, only when it serves a specific purpose.
Today, many AI strategies mimic the logic of mass production: large upfront investments, centralized programs, and “scale first, learn later” mandates. Lean turns this mindset on its head and tells us that a robust AI strategy must:
- Start with a clear value gap.
- Understand the flow of work.
- Identify the waste (muda).
- Run disciplined cycles of PDCA-based experiments.
- Prioritize internal capability over external dependency.
I have seen AI create extraordinary value, but only when it is embedded within real value streams. It can have a true impact when it removes genuine bottlenecks, augments overloaded decision-makers, and shortens strategic learning cycles.
Without such grounding, AI doesn't scale intelligence; it scales slop—and it does so very efficiently.
Ironically, AI is the most powerful cognitive tool we have ever possessed. But power without purpose only amplifies confusion. Lean doesn’t slow innovation down; it sharpens it. It forces us to think before we build, to test before we scale, and to align before we invest.
If so few tech/AI initiatives truly contribute to the transformation of an organization, then the problem isn't the technology. It’s that we don’t have the discipline to define value before we deploy it. After all, transformation is never about tools; it’s about how we think.

As a lean thinker, I am not surprised to hear that most AI investments fail to deliver real transformation. As we know all too well, few organizations lay the necessary groundwork.
I run an AI company building knowledge agents for critical infrastructure. Customers come to us with ambitious goals: autonomous decision-making, predictive analytics, copilots. Then, before getting to work, we look at the data (the situation they are dealing with—their current state, so to speak): legacy formats, lack of governance, security barriers, duplication, inconsistency, and so on. The reality at the gemba couldn’t be farther from the vision described in the boardroom.
Lean teaches us to go and see, but most AI strategies skip this step altogether. Executives expect silver-bullet solutions and budget for new tech tools before assessing the foundational gaps of the organization. Real “go and see” in tech means tracing how data is created, stored, and used across the organization before writing a single line of code. AI can't deliver value if the data is unreliable. The expression “garbage in, garbage out” remains true, and if you can’t trust your data, no model will help you.
I think about this as a three-layer pyramid. At the base is data—structured, clean, accessible. Without this, nothing works. The middle layer is tacit knowledge, the experiential know-how that lives in people’s heads and is hard to codify. It explains why a senior engineer adjusts a process a certain way, or how an experienced buyer reads a supplier relationship. The top layer is judgment—the ability to weigh trade-offs, navigate ambiguity, and make decisions in context. Each layer depends on the one below it.
A recent HBR 2-by-2 matrix framework can help us here: one axis is explicit data versus tacit knowledge; the other, low versus high error cost. Tasks that rely on explicit data and carry a low error cost are ready to automate. But as tacit knowledge or error cost grows, AI’s role shifts to supporting humans rather than replacing them. Lean is about matching countermeasures to problems, not applying one tool everywhere.
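For readers who think in code, the matrix boils down to a simple decision rule. This is a minimal illustrative sketch, not a published implementation; the role labels and function name are my own assumptions for the example.

```python
def ai_role(knowledge: str, error_cost: str) -> str:
    """Suggest AI's role for a task, following the 2-by-2 logic.

    knowledge: 'explicit' (codified data) or 'tacit' (experiential know-how)
    error_cost: 'low' or 'high'
    """
    if knowledge == "explicit" and error_cost == "low":
        return "automate"                     # AI can own the task end to end
    if knowledge == "explicit" and error_cost == "high":
        return "automate with human review"   # AI executes, a person checks
    if knowledge == "tacit" and error_cost == "low":
        return "assist"                       # AI drafts, a person decides
    return "augment judgment"                 # AI informs; humans stay in charge


# Example: routine invoice matching is explicit data, low error cost
print(ai_role("explicit", "low"))  # automate
```

The point of the sketch is simply that the appropriate countermeasure changes quadrant by quadrant; only one of the four cells is full automation.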
It’s important to be aware of another trend: AI democratizes coding. Prototyping and testing custom tools is quicker than ever before and, in pure PDCA style, a small team can now build a prototype in no time, test it with real users, and iterate immediately. Value comes from experimenting rather than from large purchases. Organizations win by building, learning, and adapting.
We’re still early in this curve, but one thing is already clear to me. The companies that are seeing real AI returns are those applying lean discipline. Instead of expecting an off-the-shelf solution to solve all of their problems, they are fixing their data, mapping valuable tacit knowledge, and building capability to experiment.