If AI is the solution, what is the problem?

I sometimes feel I’m the only person not making a song and dance about Artificial Intelligence. I even feel guilty that I haven’t shared my thoughts with my loyal readers. Let me fix that now.

Part of my lack of comment is because, despite the claims, I don’t see AI changing the fundamentals. Work still needs to be done, work still needs to flow from one place to another, and decisions and actions are still needed. Throughout my entire career I have seen a steady advancement of technology: as technology becomes more powerful, the problems it can address increase. But we still need to know what problem, or opportunity, we are addressing.

Still asking: “What is the problem?”

Rather than running around saying “We must add AI, just add AI” and so on, the question we should be asking is: “What do we need to fix/improve/change?” Only then think about the technologies. True, to some degree we won’t see an opportunity until we know what the technology can do, but our starting point needs to be the problem/opportunity.

Down the years I have regularly heard “business” types say “Engineers need to stop putting technology first and consider the business need.” Right now it feels like positions are reversed.

AI is three technologies

Behind “AI” there are at least three very promising technologies: GPUs, neural networks and (large) language models (LLMs). These build on one another to create what is today called AI, but each makes possible solutions and systems which address problems that could not be addressed before.

Ever since computers were first invented (and possibly even before that) people have talked of “electronic brains”. People looked at the early electronic calculators and thought them intelligent. (I suspect many products now labelled “AI” don’t use these three technologies at all, but older technologies which are still pretty impressive.)

Perhaps part of today’s AI boom is simply because these three have arrived at almost the same time.

Individually these technologies are impressive: GPUs power graphics, cryptocurrencies and neural networks; neural networks in turn enable image recognition, non-deterministic problem solving and language models.

Notice I say Language Models, not Large Language Models. While the synthetic text generation of LLMs is stunningly impressive, I increasingly suspect the real future is Small Language Models. These use the same technology but address a specific problem.

What can AI do?

Looking at “AI” as several complementary technologies makes it easier to ask: “What problem needs solving?” – and then: “Does a GPU change this?”, “Is a neural network applicable?” and “Can an LM help?”

Right now the world is running lots of experiments to understand how these technologies can help. It is difficult to disentangle what is real and what is hype, but I keep coming back to a few points: processes, Jevons’ paradox and, of course, agile.

Processes

Even if “AI” can revolutionise the way we work there is still the need to move from where we are today to that new world. That requires time and effort.

As always, technology is only part of the solution: we need new processes and new ways of working to get the real benefits. That means more experimentation and human learning.

In particular, even with a brilliant new AI we probably still need to look at the workflow: how does work get to that system, and where does it go afterwards? Improving the whole needs work.

More work

Jevons’ paradox is already visible: making work more efficient means we do more of it.

In a recent blog post (and an Agile Cambridge talk I didn’t see) Duncan Brown recounts how AI allowed his team to create more prototypes more quickly. However, this made more work because each one required usability testing.

All those models require data, and many corporations have very poor data management. Lots of work is required there too.

One forecast: AI code writing will lead to more code, and more code needs more testing. Much of that code will be end-user (shadow IT) systems, which is good. These are great examples of innovation and they make people’s work better. But simultaneously they create problems. They should be tested, but most won’t be; consequently, individuals will become more attached to their roles.

These systems will raise cyber security questions and make more work for regular IT departments – departments which already struggle to support or extinguish end-user IT.

And Agile?

Finally, there is an irony that the current Agile winter is largely the result of discretionary corporate spending going into AI, not agile. Yet if these tools are going to pay back we need more learning, and most likely new ways of working which allow workers using AI tools to work effectively. In other words: to get the most from AI we need MORE AGILE thinking, not less.

Regular readers will remember that I have long argued that Agile is the process change that maximises the benefit of digital tools. AI is the next iteration of those tools.

1 thought on “If AI is the solution, what is the problem?”

  1. This is spot on. “AI” (whatever it is) is a solution looking for a problem. And since it is not targeted to solve a problem, it is a very poor solution for anything. Aside from very niche areas, companies can get far more bang per buck by focussing on the peopleware – the interactions and collaborations – than by investing in a text extrusion tool whose only job is to create the most likely string of text from another string of text (the ‘prompt’).

    As Jerry Weinberg said, “It’s always a people problem”, and technology is always a poor substitute for fixing it. Something being borne out by recent studies.
