
Most Companies Don’t Solve Problems. They Shop for Tools.

Most companies don’t solve problems. They shop for tools. This is one of the things that keeps annoying me in our industry: companies say they want to solve a problem, but what they actually do is jump almost immediately to a tool.

All right, let me show you a simple figure.

The basic point I want to make here is that problem solving should be problem-driven. And you might think, well, of course it should be. That sounds completely obvious. But let me put that into perspective, because although it sounds obvious when stated like this, in practice organizations often do not work that way. They buy automation equipment, dashboards, robots, AI, or whatever happens to be fashionable at the time, and only afterwards do they try to explain what problem that thing was supposed to solve in the first place. And because that logic is backwards, many of these projects either disappoint or solve only a very small part of the actual issue.

Let me first walk you through the figure: I think every serious problem-solving process should begin with a very clear problem statement. And that raises an immediate question: what exactly do I mean by a problem-solving process? In our context, I mean any situation in which we want to improve performance by changing something in the system. That could involve process changes, technology, automation, equipment, software, or some combination of these. But before we talk about any of that, we should be very clear about one thing: what is the actual problem we want to solve?

And this really is much less common than it should be. Very often, organizations are surprisingly vague about the problem they are trying to solve. They feel that something is wrong, or they may be dissatisfied with some outcome (“picking is too slow”), but they do not formulate the problem clearly enough. Or they formulate multiple conflicting objectives (e.g., “better efficiency and shorter order lead time”). And that is already a problem in itself. So the process should begin with a proper problem statement. (I will explain why this isn’t easy further down in the text).

Now, once the problem statement is clear, it should define what kind of decision we need to make in order to address that problem. A problem statement is not enough by itself; it should lead to a decision problem. The problem statement should narrow the field and tell us: what kind of decision do we need to make in order to address the problem?

Now, in order to make a good decision, we need information. Good decisions are informed decisions. And the decision we want to make should determine very precisely what information we actually need. Again, this sounds so obvious that it is almost embarrassing to write it down. We should ask, “What information do we need in order to make this specific decision?”

Now, information is usually not just sitting there. If you are lucky, you have a really good dashboard. I’ll go out on a limb and say that almost nobody has a really good dashboard. A good dashboard should provide the relevant information that informs your decisions. Because almost no one has that information available, we typically have to create it. And we create information from data.

At this point, it is important to distinguish clearly between data and information, because many people use those two words interchangeably, even though they are not the same thing at all.

Data are raw and unstructured. Information is data interpreted in a relevant context. A simple example I often use is this: Suppose you see a person’s name. A name by itself is just a piece of data. Now, if you find that name in your phone contacts, that gives you one kind of information. It tells you that this is a person you can probably call. But if you find the same name written on a gravestone, then the same data create a very different meaning. In that context, you know quite well that you are not going to call that person anymore.

So the same data can mean very different things depending on context. That is why data and information are not the same. Information is what we get when data are connected to a context in a meaningful way.

And in organizations, that often means we have to combine different kinds of data in order to produce useful information. A daily shipment quantity alone may not tell us much. A machine event log alone may not tell us much. A slump in throughput alone may not tell us much. But if we combine those pieces properly and interpret them in relation to the decision we need to make, then they may generate exactly the information required for good decision-making.
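To make that concrete, here is a minimal sketch in Python of what “combining data into information” can look like. Everything in it is hypothetical: the pick counts and downtime figures are invented, and in a real warehouse the raw data would come from your WMS and machine logs.

```python
# Hypothetical example: combining two raw data sources into information
# that is relevant to one specific decision ("why did throughput slump
# on some days?"). All figures are invented for illustration.

daily_picks = {            # raw data: order lines picked per day
    "2024-05-01": 1180,
    "2024-05-02": 1210,
    "2024-05-03": 640,     # the visible slump
    "2024-05-04": 1195,
}

downtime_minutes = {       # raw data: conveyor downtime per day
    "2024-05-01": 5,
    "2024-05-02": 0,
    "2024-05-03": 210,
    "2024-05-04": 10,
}

# Information = data interpreted in context: does the slump coincide
# with significant downtime, or does it need a different explanation?
baseline = sum(daily_picks.values()) / len(daily_picks)
findings = []
for day in sorted(daily_picks):
    if daily_picks[day] < 0.8 * baseline:       # a slump day
        if downtime_minutes[day] > 60:
            findings.append((day, "likely explained by downtime"))
        else:
            findings.append((day, "no matching downtime -- dig deeper"))

for day, verdict in findings:
    print(day, verdict)
```

Neither table answers the question on its own; only the combination, read against the decision at hand, turns the raw numbers into information.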

So that is the general logic of structured problem solving shown in the figure: We begin with a problem statement. That problem statement defines the decision. That decision requires information and that information, in turn, requires data.

And in the reverse direction, data support information, information supports the decision, and the decision is intended to address the problem statement.

That is the general logic. Now let us expand the picture slightly. I also added this small box: processes and tools. Here’s why: Any decision or intervention we make will usually lead to one of two broad forms of action. Either we change the process, or we implement or modify some kind of tool. And when I say tool, I mean technology, automation, equipment, or some comparable technical means. So, broadly speaking, our intervention typically falls into one of these two categories: either process change, or tool / technology implementation. And those interventions should, ideally, address the problem.

Now, that is the structured logic. And what happens in reality?

I do not know how many of you have seen this in practice before, but I see it all the time in projects. And I do mean all the time. Almost no organization seems to follow this disciplined sequence. They may have a vague problem statement — something like “our picking process is too slow,” or “our warehouse is too small,” or “labor is too expensive.” But instead of understanding the problem deeply and identifying its root cause, they jump almost immediately to what they think is the solution. And that solution is very often a tool. (Most of the time not just any tool, but AutoStore. Hats off to their marketing.) In other words, they jump directly from a vague problem statement to a technology discussion (or even a vendor selection, as with AutoStore). Needless to say, that is not a good habit.

Because what gets skipped in that short-circuit is almost the entire reasoning process. The decision is not clarified properly. The information requirements are not defined properly. The data needs are not derived properly. The real alternatives are not examined properly. And very often, the process option is not taken seriously at all, even though in many cases a process change or process improvement would solve the problem more cheaply and more effectively than a tool.

Also, in practice, organizations are often pushed toward tools very early. And that is not surprising, because this is also how most markets communicate, and that’s how “tool makers” (warehouse automation companies) make money. If you go to a trade show, you mostly see tools. You see technology, automation, equipment. You also “see” software, but since that’s much less tangible and more difficult to showcase, it receives much less attention. Hardware dominates the news, the newsletters, vendor websites, sales calls, brochures, and marketing presentations. Much of what companies communicate externally revolves around tools. Perhaps my filter bubble constrains my perception a bit, but I do believe this to be the case.

So, customers are constantly exposed to a flood of communication about tools. And as a result, the disciplined process shown in this figure is often bypassed. Instead of going through the full chain — problem statement, decision, information, data, and then carefully choosing between process and tool interventions — many organizations take a shortcut. They start with a vague problem and jump almost immediately to the tool (“We need AutoStore”).

And that is dangerous, because this kind of short-circuit often leads to addressing symptoms rather than problems. That is the central reason why I am showing this figure. The point is that tools should come at the end of a structured reasoning process, not at the beginning of it. We should not start with the technology we want to buy. We should start with the problem we actually need to solve. And that’s not so easy, by the way.

One of the main reasons why structured problem solving is difficult in practice is that what we observe first is often not the actual problem. What we usually notice first is a symptom.

That distinction is very important. A symptom is something visible, measurable, and often painful, and it is therefore what gets attention. It is what shows up in complaints, KPI deviations, delays, congestion, stockouts, high cost, poor service level, or operator frustration. But the symptom is not necessarily the actual problem. Very often, it is only the visible effect of something deeper in the system. Once we confuse the symptom with the problem, we are already on the wrong path, because then we are likely to choose an intervention that addresses what is visible rather than what is causal.

That is exactly why companies so often jump too quickly to tools. They see a visible symptom and they want a visible fix. If picking is too slow, they think about automation equipment. If service is poor, they think about more automation. If labor cost is high, they think about reducing labor through … well… automation. But unless the deeper cause has been understood, that intervention may only suppress the symptom while leaving the real problem untouched.

A simple example would be low picking speed. Low picking speed is a visible symptom and it is measurable. In fact, that’s often the only thing that’s being measured because it is very easy to measure and also easy to complain about (it must be the workers’ fault, they are clearly lazy). But what is the actual problem? Is it really the worker’s speed? Long walking distances? Or is it poor slotting? Bad replenishment logic? Too many emergency orders? Poor SKU segmentation? Weak software support? If we jump straight from the symptom “picking is too slow” to the tool “we need automation,” then we may be solving the wrong problem.
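As a sketch of how to dig behind that symptom rather than jump to a tool, here is a hypothetical decomposition of the pick cycle into time components. All numbers are invented; in a real project they would come from a time study or from scan timestamps.

```python
# Hypothetical decomposition of one average pick cycle into activities.
# The seconds per activity are invented for illustration only.

cycle_seconds = {
    "walking":   48.0,
    "searching": 14.0,
    "grasping":   6.0,
    "scanning":   5.0,
    "admin":      7.0,
}

total = sum(cycle_seconds.values())
shares = {activity: t / total for activity, t in cycle_seconds.items()}

# Sort by share, largest first, to see where the cycle time really goes.
for activity, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{activity:<10} {share:6.1%}")
```

If walking dominates the cycle, as in this invented example, the evidence points toward slotting and routing, not toward worker speed — and not automatically toward automation hardware either.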

The same is true for poor service level. Poor service level is a symptom. But the underlying causes may be very different. It may be demand variability, poor job prioritization, poor inventory policy. Or it may be exception handling or data quality. So, the symptom tells us that something is wrong, but it does not yet tell us what the problem actually is.

Or take a statement like “the warehouse is full.” That sounds like a problem, but it is usually still only a symptom. Why is it full? Because demand developed unexpectedly? Because purchasing policies are poor? Because SKU proliferation has increased? Because old inventory is not being cleared? Because inventory transparency is poor? Again, if we interpret the visible condition as the actual problem, we are likely to move too quickly toward the wrong intervention.

Please note that we typically have to go through multiple levels in the problem hierarchy. An underlying issue almost always is a symptom of an even deeper problem. So, we shouldn’t stop at the first convenient explanation but keep digging deeper.

And that is the core message here. Symptoms need to be understood as the starting point of inquiry because they tell us where pain exists, that there is a problem worth investigating. But they are not the same thing as the underlying problem. This is where many organizations make a mistake. They take the symptom as if it were already the root cause. And that short-circuits the whole problem-solving process we just discussed. Instead of asking what decision is really needed, what information is required, and what data must be gathered, they jump directly from visible symptom to “AutoStore”.

This is also why automation is so often oversold. Automation is usually very good at addressing visible symptoms, it can reduce labor at the point of execution, it can increase picking speed. It can impose order on visible operational chaos (that’s a great side effect of automation, because it requires order). But if the real problems lie deeper — for example in poor (purchasing / inventory / picking) policies, poor data, poor prioritization, poor replenishment logic, or poor communication — then automation may only cover up those issues rather than solve them.

So the important discipline here is this: when you see a symptom, do not ask immediately, “What tool would fix this?” Ask instead, “What might be causing this symptom?” In other words, treat the symptom as the beginning of the investigation, not as the final definition of the problem.

The Toyota people figured this out a long time ago. Much of what I am discussing here is not new at all. If you look at Lean thinking and at the broader Toyota tradition, there is a very strong emphasis on distinguishing between visible symptoms and underlying causes. Concepts such as the 5 Whys, Ishikawa diagrams, and generally root-cause analysis exist precisely because the first thing you observe is often not the actual problem, but only its visible effect. I think this is one of the most valuable contributions of Lean. It trains people not to stop at the symptom. If a machine is down, if quality is poor, if lead times are too long, if service is unstable, Lean tells you to ask why. Then ask why again. And again.
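Here is a minimal sketch of the 5 Whys idea in code form, using an invented cause chain for the picking example. In practice the chain comes from going to the floor and asking, not from a dictionary, so treat this purely as an illustration of the mechanic.

```python
# A hypothetical 5 Whys chain for "picking is too slow".
# Each entry maps an observed issue to its (assumed) immediate cause.
why_chain = {
    "picking is too slow": "pickers walk long distances",
    "pickers walk long distances": "fast movers are slotted far from dispatch",
    "fast movers are slotted far from dispatch": "slotting is never reviewed",
    "slotting is never reviewed": "no one owns slotting as a process",
}

def five_whys(symptom, chain, max_depth=5):
    """Follow the cause chain until it ends or max_depth is reached."""
    current = symptom
    for depth in range(max_depth):
        cause = chain.get(current)
        if cause is None:
            break
        print(f"Why {depth + 1}: {current} -> because {cause}")
        current = cause
    return current  # the deepest cause reached

root = five_whys("picking is too slow", why_chain)
```

Note where this chain ends: not at a tool, but at a missing process ownership — exactly the kind of root cause a tool purchase would paper over.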

What I find remarkable is that Lean has become so commonplace in modern industry, and yet this very basic and extremely important way of thinking still does not receive the attention it deserves. The vocabulary is everywhere: people talk about Lean, continuous improvement, waste reduction, Kaizen, root-cause analysis, and all the rest. But the discipline of actually digging for root causes is applied far less often than the vocabulary suggests.

Also, the obsession with inventory reduction in Lean must be seen in the context of root-cause analysis. Some Lean people call inventory “the root of all evil,” although I think it is more appropriate to treat it as a visible symptom of underlying problems, since inventory serves as a buffer against variability. If you reduce inventory without addressing those underlying problems, you will run into trouble: removing the buffer makes visible the very problems it has been hiding.

Another image that is sometimes used is that of the iceberg: the symptoms represent only the tip of the iceberg. What is visible above the surface is usually only a small part of the total issue. The larger and more important part lies underneath — in the underlying processes, structures, constraints, assumptions, and interactions that produce the visible outcome in the first place.

And let me close this post by shamelessly inserting a plug here: many warehouse performance issues can be identified through a straightforward (and very affordable) one-day warehouse audit. I know that some companies turn this sort of thing into a multi-year consulting project, which then somehow leads to another multi-year automation tender, but I have yet to come across a site where we could not identify most of the problems within a day and provide very practical guidance for improvement. Talk to me if you are interested in learning more about our warehouse audits.