
Is AI repeating the historic mistakes of BPR?

Back in my coding days I worked on a death-march project: six days a week, and on the seventh I carried a pager. The aim was to redesign the way British Railways operated. Everyone in the UK knows how that story ended.

While politically driven it was also a case of Business Process Reengineering – BPR. It was aggressive, IT-led, and became synonymous with expensive failures.

It started with some very clever (i.e. well paid) people saying “Look at the way this company operates, with new technology you could do it so much more efficiently.” The mantra was “Don’t Automate, Obliterate.” This was more than just restructuring; it was about creating “Leaner and Meaner” companies.

It went beyond individual processes. It meant rethinking the way the whole company operated. My railway programme was not just about selling off the industry, it sought to reimagine how trains operated: companies would run trains on the same routes just a few minutes apart and compete on price.

Expensive failures

Many, probably most, BPR efforts were expensive failures. It might be easy to flowchart a process but doing so often missed vital elements. Employees’ tacit knowledge, which made things work, was overlooked. Programming a business process the way you programme a computer ignored the knowledge, experience, needs and variability of people. And BPR programmes used unproven technology, scaled and stretched to a degree not attempted before.

BPR programmes laid off vast numbers of workers before they were finished. Many of these were hired back later when the BPR effort failed, as at British Railways.

The overworked and undervalued staff who weren’t laid off had to pick up the pieces. The new systems frequently didn’t work and complaining about them was not well received. (It was against this background that the British Post Office and Fujitsu started the Horizon system, which would see staff imprisoned and driven to suicide.)

Is AI repeating the mistakes of BPR?

Which makes me ask, are companies repeating the mistakes of BPR in their rush to AI?

Like BPR, AI is being driven by technologists. Rather than starting with the business need it starts with the technology. How it will deliver is less clear; there is much hand waving. The technology is cutting edge and by definition high risk.

Rather than being shown how AI can make their lives better, staff are being forced to use AI whether it makes sense or not. Complaints are not welcomed, and there are frequent examples of AI creating problems – like at Amazon.

The attitude to workers, and the aggressive language, is very reminiscent of BPR. Some companies claim to be laying people off because of AI and almost everyone seems to be worried about the prospect of AI redundancies. That is not conducive to successful change.

Tacit knowledge is being ignored again

LLMs only work with explicit knowledge: that which has been written down. If it hasn’t been written into words then LLMs don’t know it. Nor do they hold any kind of philosophy or design of how things should be done. AI might write something good today but what provision is it making for changes tomorrow? Humans are still needed to guide intent.

Before anyone says “LLMs have read millions of books so they have fewer blind spots”, let me point out that there is very little written down about how YOUR company actually works. Even if you have a service manual or a standard operating procedure you may well find that people use considerable ingenuity in making the standard process work, or in finding ways to get work done despite it.

Most AI is a solution in search of a problem.

Most people do not spend their days writing documents, nor do most people spend most of their time reading. That an LLM can write a document is another example of a technological dog walking on its hind legs. Clever, but what use is it?

Get away from these “party tricks” and you find AI systems – like IT systems before them – need to work with the people, processes and systems that are already there. In time AI might replace these as well but today you have the people you have, the processes you have and the legacy systems you have. Changing more increases risk.

Thus Anthropic, OpenAI and friends are not going to replace SAP, Sage, Microsoft, Salesforce, or the other corporate applications any time soon. The remaining people would need retraining, other systems would need to be integrated, formal contracts and terms and conditions might need changing. Before any of that, sales need to be made, which means one vendor’s salespeople need to displace another’s. The risk of introducing a new ERP system is enough to make any CEO reach for the whiskey.

Learning from BPR failure

BPR never really went away; it was moderated and became BPM – business process management – and BPI – business process improvement. We in the IT profession learned to work in small steps, integrate feedback, let business and users drive, and to manage the change with employees rather than bludgeoning them.

AI will probably take the same route. Right now the vendors have an incentive to hype it but in time – and perhaps with some high profile failures – things will moderate. Companies will remember that AI is a technology, and technology needs to be applied to a need. In time processes and companies will change but it won’t happen overnight.


Should AI coding change the Product Manager role?

Ever worked for a company that made ridiculous decisions but was still a nice place to work? For me it was Dodge Group: a beautiful office with great people, where I could eat my lunch by the side of the Thames in Kingston, watching boats and dangling my feet over the water.

I was employed as a coder but it was actually my first experience of Product Management. I went out and met customers. I prioritised the incoming asks. I got to devise solutions and think about the future of the product.

Those days are gone for me and the company, and, if we believe the AI hype, for Coders and Product Managers too.

Product Managers who code

I always advise against having hybrid Coder/Product Managers but the idea keeps resurfacing. Especially in current discussions of LLMs and AI. You can see the attraction: no need for a Product Manager to explain the ask to a Coder, no time wasted talking, no miscommunication and one vision of how it should work. Like me at Dodge.

Then for the company: no Coder means no wage costs. No bolshie programmer attitude. No failure to understand business needs or lack of appreciation for customers. Sweetness and light as ideas move directly from the Product Manager’s mind into code, into customers’ hands, and money comes back.

A few weeks ago I got to talk this over with Peter Hilton, another coder turned Product Manager. I said I felt the desire to have managers code reflected the low status of engineers in the UK. Managers were happier with other managers than with dirty engineers with code under their fingernails. Peter saw the opposite: he saw the engineer hero-worship of Silicon Valley trying to ordain everyone a coder.

The killer question for me is: what does the product manager do if they get a spare hour or two?

Underlying the belief that spare Product Manager hours should be spent coding (with or without an AI) is the assumption that there is not enough code and not enough features. More features means more sales. This immediately starts to smell like a feature factory.

Is lack of code the real problem?

I’ve always believed that a Product Manager with spare time should pick up the phone and speak to a customer. Or go and read market research, analyse incoming feature requests and support calls. Review strategy. Speak to stakeholders.

A Product Manager does not add value by building product. They add value by multiplying the value of the work done by builders. Making sure the highest value items are worked on. That the product-market fit is right. That customers are getting the expected benefits. And that everyone knows the strategy so the construction work is focused.

Today we are constantly told that AI makes Coders more productive. Thousands of lines of code can be written in a fraction of the time. Economically that means that the cost of code is less: AI makes code cheap. That might justify paying the Coders less but it means the multiplier is more important.

Why would Product Managers stop doing the high value work of strategy, customers and prioritisation to do the low value work of coding? It doesn’t make sense. Just because you can do something doesn’t mean you should.

At the same time some people are questioning why we need Product Managers at all. Why not just ask the LLM “What features should my app have?” But if you can ask an LLM, so can your competition. An LLM might be a shortcut to a quick product decision, but while it delivers an instant fix it has no longevity. If you find yourself playing feature poker you need to change the game.

Competitive advantage

To gain competitive advantage your Product Managers need to find new insights which are not in an LLM. This might be from doing things LLMs can’t do, like visiting customers, watching them, talking to them, understanding their intent. Or looking at data which is not available to an LLM – feature requests coming from existing customers, talking to sales people about failed sales, or analysing your own data.

In fact, because LLMs make research from public sources easier and cheaper, it becomes more important to find things that your competitors can’t find. When you have a product in the market existing customers should be a goldmine of information.

Additionally, in a world where it is cheap and easy to identify and add functionality, products are going to become crowded and less usable. Deciding what to leave out becomes more important.



The Shadow IT hanging over AI

Many years ago I got to meet one of my heroes and, better still, share dinner with him: Charles “Chuck” Moore, inventor of the Forth language. The reason Forth is called Forth is because Chuck saw it as a fourth-generation language: one that could be used by regular people to instruct their machines. To anyone not blessed with a mathematical aptitude that might seem like a joke – in Forth, if you want to add 2 and 2 you write “2 2 + .”.

But the “users” Charles had in mind were not average office workers. His typical user probably had a PhD in maths, more likely astrophysics. If Forth was an everyday language it was the everyday language of rocket scientists.

Over dinner someone asked Chuck what surprised him most about the way computers had developed (this was 30 years after he created Forth). I remember his answer like it was yesterday: “I always expected people would write more of their own software for their machines.”

Today corporate IT departments hate end-user written code; they go to great lengths to stop it ever existing. Once it exists it poses security risks, it may create costs, it may be difficult to move to new machines or break when software updates, and it diverts users from doing their real job. That said, end-user created systems can be among the most innovative systems in a company precisely because they were created to serve a real need.

What has this to do with AI?

Well, there is a lot of talk about AI making programming available to regular workers. If the claims being made for AI are true then in a few years Chuck might not be so surprised. AI coding offers a world where everyone can tell their computer what they want and it will write the code.

In many ways I love this: programming will be democratised, anyone can do it, everyone can have the joy of coding. Right now I’m doubtful this world will happen, but let’s accept it for the sake of argument. In a world where average workers can create their own computer programs and systems there are going to be a lot of problems.

Imagine this world for a moment: there will be an explosion of “home made” computer programs. Jevons Paradox writ large: why buy software when a tool can create it for you?

Shadow IT explosion

Corporations are facing an explosion of Shadow IT systems as users who can’t program use AI to create new systems.

One reason corporate IT hates such systems is that they create security headaches. Who knows what ports will be opened and what vulnerabilities created? And when a popular library needs a security patch, who knows which shadow systems need an update? And what if the update breaks the system?

Of course AI might help with all the security problems but what about testing? (Especially when a naive user might accidentally create an ethical issue.)

Even programmers dislike testing. Every programmer is convinced they are the chosen one and don’t need to test. What about people who have never coded in their lives? And after all, how can a computer get it wrong?

Some errors might be acceptable, some might be fatal. What about regulated companies? What if a user automates their own work but fails to consider regulations?

If we are to see a boom in end-user systems we also need to see a boom in testing. As testers have always told us, “you can’t trust the programmer”. So who is going to do it? Who is going to pay for it?

And what about usability and disability regulations? Particularly those included in employment law.

Anyone who has ever created a product knows how hard it is to create a product which many users love, let alone how to persuade other people to use it. Now, since everyone can magic up a similar system for themselves, why would they bother? Why should I learn to use your ugly system when I can create my own?

Which means there is going to be a proliferation of systems which do much the same thing. Yet each one will be different – different individuals, different workflows – which means a lack of consistency. What does that do to outcomes and customer experience?

And anyway, if Jill and Josh both build their own workflow systems, that is two systems that need cybersecurity, testing and maintaining, yet are slightly different and only usable by one person – Jill or Josh. Two overlapping systems, each with their own costs, are just the kind of thing corporate IT wants to eliminate, and for good reason.

AI coding still takes time

Don’t forget either that every time someone pauses their regular work for long enough to engage with an AI code writer and create a new system to automate their work, it takes time. Maybe 5 minutes, but it could be 5 days. While they will be more productive in the long(er) run, the immediate effect is to slow things down. Now multiply that by the number of people who create their own solution. In the short run we can expect to see a productivity dip while everyone goes off and automates their work.

Some percentage of those systems will never pay back the time invested, but since this is end-user IT those systems will never appear on a portfolio investment plan. It is fantastic that opportunities for improvement that were overlooked, or couldn’t make a business case, will now be realised, but there is also a downside. These systems will impose costs of maintenance, duplication and misplaced effort.

Don’t take this as my conversion to corporate IT departments – they can be unbelievably painful to work with. The fact that it can be so very hard to exploit these opportunities is a damning indictment of corporate IT processes and ways of working.

In the short run the explosion of end-user AI generated systems is going to increase corporate IT’s workload and costs. Throwing corporate IT and its checks away might cure the immediate problem but will store up more problems for later. Don’t throw the baby out with the bathwater.



6 less appreciated points about the brave new AI world

Maybe I’ve been avoiding AI – so please forgive this rush of posts.

That may well be because it seems to be everywhere and constant at the moment. The hype is overwhelming. I use the word hype deliberately: certainly AI – specifically massive neural-net systems – does make possible incredible changes, and will affect the way we work for decades to come.

I do not buy the argument that this means that everything that came before is irrelevant, or that anyone (like me) who does not lace every single statement with AI is in some way a cynic and needs to be left behind. Rather, I see these as arguments used to sideline naysayers.

I’ve been keeping my AI thoughts to myself because I feel it would be detrimental to share. I know I’m not alone here; discussing AI with a friend before Christmas he felt the need to add “Please don’t share these comments.”

This came home to me when I read this: “OpenAI in particular should beware hubris. One VC says discussion of cash burn is taboo at the firm, even though leaked figures suggest it will incinerate more than $115bn by 2030.” (OpenAI’s cash burn …, The Economist, December 30.)

So here are some thoughts on where we are with AI

#1 Hype makes it difficult

Between the bubble and the hype it is very difficult to have an informed conversation about AI. Even without the hype it would be difficult because this is an emerging technology.

#2 Fear over hope

Rationally I know that technology advances benefit humans, create new jobs and improve living standards. However, one can’t help fearing what is to come given the constant repetition of “AI will cut jobs” (and note who is saying it – see #6 below).

#3 Applications

While an LLM writing a document is impressive, few of us spend our days writing documents. This is the equivalent of early micros shipping with BASIC: cool if you could programme (or learn to); useful, to some degree, but only if you knew what you were doing. Ultimately it was the emergence of games, and then basic word processing and calculation applications, which made micros worth the investment.

That is why the Apple II was a hit and MSX was not, and why VisiCalc beat Microsoft BASIC. It is why the ARM-powered Archimedes failed (no killer apps) but ARM-powered phones are omnipresent.

To realise the potential of AI/LLMs/neural-nets those applications need building. Some are emerging, for example in healthcare, law enforcement and environmental monitoring.

#4 What problem are you solving?

Applying AI to a problem means we need to have an idea what the problem is (requirements), then we need to construct a product (development). Somewhere along the line we need to understand the details (specifications) and, as I described last time, we need to test the result (testing), get it into the hands of users (deployment) and refine the result (feedback and iteration).

Recognise that? Just because it is a shiny new technology doesn’t mean those things go away.

This is one of the reasons AI initiatives are failing. “Just use AI” may impress investors, but simply asking an LLM for a document is little more than a party trick. While we need experimentation, people are trying to force AI into every conversation and neglecting the basics.

#5 Unappreciated costs

AI is creating jobs; at the moment many of those jobs are low paid, tedious and hidden away behind sub-contractors in Africa, e.g. tagging and moderation.

Then there is the great unmentionable: Power consumption.

In an age of climate change, when we know the damage our power systems are doing to the environment, it is disgusting that these systems are given away free.

Please don’t say “they are powered by renewables.” The world hasn’t finished removing fossil fuels, so every data centre powered by renewables reduces the fossil fuel removed from the mix. Nor is it just power consumption: there are grid connections too.

Where I live in London companies are building data centres. But London has a shortage of homes. The data centres v. homes debate is only just getting going. Sometimes it can feel like the machines already have mastery: people are losing jobs and homes to machines.

#6 The rise of the right (sorry)

The AI cheerleaders – Thiel, Musk, Andreessen, Altman, etc. – are aligned with the right of American politics. It sometimes seems the AI revolution and the destruction of the post-1945 world order are the same thing. For AI to succeed, must we jettison post-1945 morals?

The arrival of the internet was associated with the creation of opportunities. People like Vint Cerf and Tim Berners-Lee were positive role models who kept their politics quiet. The American oligarchs leading the AI boom envisage a Brave New World rather than The Culture.

(Anyone else see Huxley’s “T” icon in the Tesla badge?)

Looping back, ironically, the “absolute free speech” espoused by those oligarchs is not extended to anyone expressing scepticism about the brave new world.

Ultimately, it would be easier to be positive about AI if, instead of emphasising job cuts, we talked about new opportunities. But that itself is a political decision that few talk about.



AI or not AI: you still need to test

“artificial intelligence chatbot Grok being used to create non-consensual sexualised deepfake images of women and girls” BBC website

The Grok story would have the power to shock even if it hadn’t become almost routine – both for Elon Musk and AI. It serves to demonstrate that AI systems need testing – and the test results need acting on. Machines have always done unexpected things; that’s why we test. As they do more and become more powerful they need more testing.

I learned long ago that just because something is syntactically correct, and may even compile, does not mean it delivers the desired result. And even if something does deliver a result who knows if it is the correct result?

AI systems, and AI generated code, still need testing. I don’t know how to be any clearer.

The Grok case is pretty extreme. In many ways the system does what it was designed to do, but a good tester would have noticed, and reported, that it went beyond expectations and delivered ethically dubious results.

Our previous generation of technology could mess up just as badly: look at the Post Office Horizon system, which put people in gaol and led to suicides. And humans covered it up.

Hopefully, once we understand AI and what it does we can avoid these things. But just this morning I discovered the AI Incident Database.

Ethics

Some of these things – like autonomous cars hitting pedestrians – are just good old-fashioned failures. They are worse because we are asking the machines to do more and there are many more variables which aren’t tested for. Other things, like Grok undressing people, are simply things humans know are wrong – humans know it so obviously that we don’t expect it to be coded, we don’t expect to need to test for it. There is probably no law against a computer undressing people but it is ethically wrong.

Testing computer systems for ethics isn’t something testers have had to spend much time on before. Complicating matters, ethics are difficult to define and vary across people, countries and cultures. I’m pretty sure that what is ethically acceptable to Elon Musk isn’t acceptable to me. But then, gun ownership is ethically acceptable in the USA but not here in the UK. Whose ethics are we testing for?

But even at a more basic level how can you be sure your AI generated code is producing what you expect?

Imagine you have your AI generate code for an invoicing system. Did you ask it to include VAT? And if you did, does it apply it correctly? To the correct products? Does it work correctly across national boundaries? VAT rates and exemptions differ across countries.

Even if you give your AI your national VAT rule book, can you be sure it produced the right results?

You still need to test it.
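
To make that concrete, here is a minimal sketch of the kind of checks a human still has to think to ask for. Everything in it – the function, the rates, the categories – is a hypothetical stand-in for whatever your AI actually generated:

```python
# A minimal sketch - calculate_vat, the rates and the categories are all
# hypothetical stand-ins, not a real VAT rule book.

STANDARD_RATE = 0.20                          # UK standard VAT rate
ZERO_RATED = {"books", "children's clothes"}  # illustrative zero-rated goods

def calculate_vat(net_amount: float, category: str) -> float:
    """The kind of function an AI might generate for an invoicing system."""
    rate = 0.0 if category in ZERO_RATED else STANDARD_RATE
    return round(net_amount * rate, 2)

# The questions from above, as tests: the right rate? the right products?
# and the boundary cases the spec may never have mentioned?
assert calculate_vat(100.00, "electronics") == 20.00  # standard-rated item
assert calculate_vat(100.00, "books") == 0.00         # zero-rated item
assert calculate_vat(0.00, "electronics") == 0.00     # empty invoice line
```

None of those asserts write themselves; each one encodes a decision somebody has to make.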

Which means: there is testing work to be done. And since the system does more there is more to test.

Sure you can have an AI write tests but are you confident in those tests?

Safe AI in regulated domains

My old friend Paul Massey published a video before Christmas, Safe AI Coding in Regulated Domains.

Paul fed a specification into an AI and generated some code. To test it he fed the spec into an AI and asked it to generate tests. Not all the tests passed: the AI generated code contained bugs. Fortunately the AI generated tests found them and Paul fixed them.

Paul then applied mutation testing to the code: >= became <=, == became != and so on. He ran the tests again: only 30% of the tests which should have failed did fail. Think about that: 70% of the tests passed when they should have failed.
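
To see how a test can pass when it should fail, consider a tiny illustrative example – this is not Paul’s code; the names and the rule are invented:

```python
# Illustrative mutation testing sketch: flip an operator in the code under
# test and see whether the test suite notices.

def eligible_for_discount(total: float) -> bool:
    """Original rule: orders of 100 or more get a discount."""
    return total >= 100

def eligible_for_discount_mutant(total: float) -> bool:
    """Mutant: a mutation tool has flipped >= to <=."""
    return total <= 100

def weak_suite(fn) -> None:
    # A weak, AI-generated-style suite: it only probes the one value
    # where the original and the mutant happen to agree.
    assert fn(100) is True

weak_suite(eligible_for_discount)         # passes, as expected
weak_suite(eligible_for_discount_mutant)  # also passes: the mutant survives
# A stronger test would kill the mutant:
# assert eligible_for_discount_mutant(150) is True  # fails - mutant caught
```

A surviving mutant means the tests never really pinned down the behaviour they claim to check.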

This leaves us with two facts:

  • AI can generate code with bugs
  • AI generated tests are not sufficient

Paul also pointed out that the specifications contained gaps. This fits with the older work of Capers Jones, who discusses defects in specifications. I can’t remember if it was Jones or Tom Gilb (another old friend) who claims that 30% of defects are defects in the specification.

Now, good specifications take time to write – even with AI assistance. If you are happy for the AI to make all your decisions then fine, but if you have ideas on how you want the system to be you need humans in the loop. Anyone who has written a specification will tell you how often stakeholders don’t agree on what is wanted.

Do you test your spec?

Where do your tests come from?

AI may help but is not enough.

Again, AI may help with the writing but it will need humans in the loop.

In fact, even if AI helps write the spec, helps write the code and helps with the tests, things are going to get harder. There will be more systems created, more code created, more tests needed.

Jevons paradox is at work: when things get more efficient we use more of them. The question is not so much “can AI write all the code?” but “how are we going to test everything?”

Enter ethical testing

When spec, code and test took time and many people, there were more opportunities for someone to raise the question of ethics. Having reduced the time and people in all those earlier steps, there is now a new step that needs to be included: ethical testing.

The process of programming was never just about cutting code, nor was the writing of the code the limiting factor – typing is not the bottleneck. In the creation of a system – specification, coding, testing – lots of decisions were being made. Those decisions still need making. Ignoring them simply lets an AI decide, for better or worse.

Do you know all the decisions the AI silently made? Do all your stakeholders agree with those decisions? Are those decisions legal and ethical?


Winners and losers when AIs program

I feel guilty: the rest of the world has gone AI mad and I’ve said nothing about it. I’ve been hiding. Part of me feels sad and threatened – is AI going to wipe out the world I knew?

So here is my take. Since I come from a programming background, and since this is where a lot of the AI opportunities are supposed to be, I’m going to talk about programming. To those of you from elsewhere, let me ask: can you apply my logic to your world?

We’ve been here before: once upon a time code generators were going to replace programmers, another time it was “programming in pictures”, another time it was 4GLs. Is this time different?

The term “AI” has been applied over the last 20 years to many systems which are little more than rules engines. These may not require programming but they do require configuration – configuration which can be complicated, more than selecting Preferences/Edit/… and clicking. Instructing a computer how to work, whatever the metaphor, is called programming. Anyone who says “With this tool I can replace the programmers” has just become a programmer themselves.
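
A tiny invented sketch makes the point – the engine and the rule format here are hypothetical, but most rules engines look something like this under the hood:

```python
# An invented "no-code" rules engine: the rule table, not the Python,
# carries the business logic - which is why writing it is programming.

OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

rules = [
    {"if": ("amount", ">", 10_000), "then": "manual_review"},
    {"if": ("country", "==", "GB"), "then": "auto_approve"},
]

def decide(record: dict) -> str:
    # First matching rule wins, so rule ORDER is logic too - and getting
    # the order wrong is a bug, just as in any program.
    for rule in rules:
        field, op, value = rule["if"]
        if OPS[op](record[field], value):
            return rule["then"]
    return "reject"

print(decide({"amount": 50_000, "country": "GB"}))  # -> manual_review
```

Whoever fills in that table is making the same kinds of decisions a programmer makes, whatever their job title says.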

Many of those code generators and programming-by-clicking systems replaced one set of problems with another.

A thought experiment

So, a thought experiment: let’s suppose AI can write code as well as a human. Your programmers are replaced. What happens then?

First: do you trust what the AI writes? Or do you still need testers?

There have always been companies out there who forego testers and testing, and undoubtedly many will. But in general you will want to test what the AI creates. Just because an AI says 2+2=5 does not make it right. There are already documented cases of AI exhibiting biases in things like identifying criminals.

In fact you probably need more testers, for two reasons: programmers used to do some testing, and while AI will not make silly syntax errors it will still make logic errors. Additionally, if AIs write more code faster than before there is simply more work in need of testing.

Second: how do you actually know what you want? Many programmers and testers spend most of their time understanding what customers want. Think: when you use travel planning software you may reject the first suggestion because it uses buses not trains, the second because there is too much walking, the third because you prefer connecting at one station over another.

If the programmers are gone then testers might take on that work as part of testing (trial and error cycles). Or you might turn to Business Analysts and Product Managers. These are the specialists who understand what is wanted.

BAs and Product Managers have another role to play: post evaluation. Now that it is cheaper to produce solutions there will be more solutions, and someone needs to see whether they actually solve the problem you set out to solve.

In fact, there is more work to do in choosing the problems to solve in the first place. After all, building and deploying a new system is only part of the problem. What about training people to use it? What about changing the processes around it?

In fact, if we are introducing more technology and solutions faster, then we are going to need more change managers, analysts and consultants to advise on workflow improvements. One day your entire company may be machines working seamlessly together, but until then you need to accommodate the people. Which means someone, be they BA or consultant, needs to look again at the workflow.

And if we know anything from the agile and digital movement of the last 20 years it is that changing our approach to work takes time. The technology is the easy bit. It takes years, decades even, for processes to change.

While there are still humans in the system there will still be interfaces which need designing. Interface design – UXD or experience design – is not an entirely logical process. You need to look at how people respond. With more systems you have more interfaces and more need of interface designers.

And because adding features to your product is now so cheap, you suddenly have an explosion of extra features which make the interface more complicated and may even detract from your product’s value – remember how the iPod won out over other, more feature-rich, competitors? So now you need your analysts and designers to limit the features you add and to ensure those you do add are usable.

So far we have removed programmers but increased the number of Testers, BAs, Product Managers and Designers.

In one form or another all these people will be telling the AI what to do, and as I said, this is called programming. So many of those new hires will be doing some form of programming. The programming paradigm has changed, perhaps it’s more high level, but it is still there.

If AI follows the pattern of past technology change (and why shouldn’t it?) then:

The full benefits of technology are not realised until the rest of the system, particularly processes, change to take advantage of the technology. This can take decades.

Programming isn’t going away

New technology is often billed as replacing previous technology and/or workers. It might do that in time but it also expands the market. Electricity did not eliminate candles: more candles are produced today than ever before, but we don’t use them for lighting (so much).

I don’t see AI programming bots replacing programmers in many detailed roles, perhaps ever. The ins and outs of something like Modbus, and at the other extreme enterprise architecture, will make that hard. But there are domains where AI will dominate.

Finally, as we adopt new technology and processes we give rise to new innovations, we find new markets we can address and new ways of addressing existing problems. That generates work and new roles.

So I am sad to think the joys of a youth spent writing ‘Writeln("Hello world.")’ are coming to an end, and that my children will probably never experience the joy of feeling a machine perform their wishes (LDA #0, JSR OSWord, anyone?) – those days are already gone.

Rationally I know AI is not something to fear (at least in the jobs context) but emotions are not always rational.
