allan kelly

The Shadow IT hanging over AI

Many years ago I got to meet one of my heroes and, better still, share dinner with him: Charles “Chuck” Moore, inventor of the Forth language. The reason Forth is called Forth is because Chuck saw it as a fourth-generation language: one that could be used by regular people to instruct their machines. To anyone not blessed with mathematical aptitude that might seem like a joke – in Forth, if you want to add 2 and 2 you write “2 2 + .”.

But the “users” Charles had in mind were not average office workers. His typical user probably had a PhD in maths, more likely astrophysics. If Forth was an everyday language it was the everyday language of rocket scientists.

Over dinner someone asked Chuck what had surprised him most about the way computers had developed (this was 30 years after he created Forth). I remember his answer like it was yesterday: “I always expected people would write more of their own software for their machines.”

Today corporate IT departments hate end-user-written code; they go to great lengths to stop it ever existing. Once it does exist it poses security risks, it may create costs, it may be difficult to move to new machines or break when software is updated, and it diverts users from doing their real job. That said, end-user-created systems can be among the most innovative systems in a company precisely because they were created to serve a real need.

What has this to do with AI?

Well, there is a lot of talk about AI making programming available to regular workers. If the claims being made for AI are true then in a few years Chuck might not be so surprised. AI coding offers a world where everyone can tell their computer what they want and it will write the code.

In many ways I love this: programming will be democratised, anyone can do it, everyone can have the joy of coding. However, right now I’m doubtful this world will happen, but let’s accept for a moment that it does. In a world where average workers can create their own computer programs and systems there are going to be a lot of problems.

Imagine this world for a moment: there will be an explosion of “home-made computer programs”. Jevons Paradox writ large: why buy software when a tool can create it for you?

Shadow IT explosion

Corporations are facing an explosion of Shadow IT systems as users who can’t program use AI to create new systems.

One reason corporate IT hates such systems is that they create security headaches. Who knows what ports will be opened and what vulnerabilities created? And when a popular library needs a security patch, who knows which shadow systems need an update? And what if the update breaks the system?

Of course AI might help with all the security problems but what about testing? (Especially when a naive user might accidentally create an ethical issue.)

Even programmers dislike testing. Every programmer is convinced they are the chosen one and don’t need to test. What about people who have never coded in their lives? And after all, how can a computer get it wrong?

Some errors might be acceptable, some might be fatal. What about regulated companies? What if a user automates their own work but fails to consider regulations?

If we are to see a boom in end-user systems we also need to see a boom in testing. As testers have always told us, “you can’t trust the programmer” – so who is going to do it? Who is going to pay for it?

And what about usability and disability regulations? Particularly those included in employment law.

Anyone who has ever created a product knows how hard it is to create something many users love, let alone persuade other people to use it. Now, since everyone can magic up a similar system for themselves, why would they bother? Why should I learn to use your ugly system when I can create my own?

Which means there is going to be a proliferation of systems which do much the same thing. Yet each one will be different – different individuals, different workflows – which means a lack of consistency. What does that do for the outcome and the customer experience?

And anyway, if Jill and Josh both build their own workflow systems, that is two systems that need cybersecurity, testing and maintaining, yet are slightly different and each only usable by one person – Jill or Josh. Two overlapping systems, each with its own costs, is just the kind of thing corporate IT wants to eliminate, for good reason.

AI coding still takes time

Don’t forget, too, that every time someone pauses their regular work long enough to engage with an AI code writer and create a new system to automate that work, it takes time. Maybe 5 minutes, but it could be 5 days. While they will be more productive in the long(er) run, the immediate effect is to slow things down. Now multiply that by the number of people who create their own solution. In the short run we can expect to see a productivity dip while everyone goes off and automates their work.

Some percentage of those systems will never pay back the time invested, but since this is end-user IT those systems will never appear on a portfolio investment plan. It is fantastic that opportunities for improvement that were overlooked, or couldn’t make a business case, will now be realised, but there is also a downside. These systems will impose costs: maintenance, duplication and misplaced effort.

Don’t take this as my conversion to corporate IT departments – they can be unbelievably painful to work with. The fact that it can be so very hard to exploit these opportunities is a damning indictment of corporate IT processes and ways of working.

In the short run the explosion of end-user AI-generated systems is going to increase their workload and costs. Throwing corporate IT and its checks away might cure the immediate problem but will store up more problems for later. Don’t throw the baby out with the bathwater.




6 less appreciated points about the brave new AI world

Maybe I’ve been avoiding AI – so please forgive this rush of posts.

That may well be because it seems to be everywhere, all the time, at the moment. The hype is overwhelming. I use the word hype deliberately: certainly AI – specifically massive neural-net systems – does make incredible changes possible, and will affect the way we work for decades to come.

I do not buy the argument that this means everything that came before is irrelevant, or that anyone (like me) who does not lace every single statement with AI is in some way a cynic and needs to be left behind. Rather, I see these as arguments used to sideline naysayers.

I’ve been keeping my AI thoughts to myself because I feel it would be detrimental to share them. I know I’m not alone here; discussing AI with a friend before Christmas, he felt the need to add “Please don’t share these comments.”

This came home to me when I read this: “OpenAI in particular should beware hubris. One vc says discussion of cash burn is taboo at the firm, even though leaked figures suggest it will incinerate more than $115bn by 2030.” (OpenAI’s cash burn …, The Economist, December 30.)

So here are some thoughts on where we are with AI

#1 Hype makes it difficult

Between the bubble and the hype it is very difficult to have an informed conversation about AI. Even without the hype it would be difficult because this is an emerging technology.

#2 Fear over hope

Rationally I know that technological advances benefit humans, create new jobs and improve living standards. However, one can’t help fearing what is to come because of the constant repetition of “AI will cut jobs” (and consider who is saying it – see #6 below).

#3 Applications

While an LLM writing a document is impressive, few of us spend our days writing documents. This is the equivalent of early micros shipping with BASIC. That was cool if you could program (or learn to); it was useful to some degree, but only if you knew what you were doing. Ultimately it was the emergence of games, and then basic word processing and calculation applications, which made micros worth the investment.

That is why the Apple II was a hit and MSX was not, and why VisiCalc beat Microsoft BASIC. It is why the ARM-powered Archimedes failed (no killer apps) while ARM-powered phones are omnipresent.

To realise the potential of AI/LLM/neural-nets those applications need building. Some are emerging, for example in healthcare, law enforcement and the environment.

#4 What problem are you solving?

Applying AI to a problem means we need an idea of what the problem is (requirements); then we need to construct a product (development); somewhere along the line we need to understand the details (specifications), as I described last time; we need to test the result (testing), get it into the hands of users (deployment) and refine the result (feedback and iteration).

Recognise that? Just because it is a shiny new technology doesn’t mean those things go away.

This is one of the reasons AI initiatives are failing. “Just use AI” may impress investors, but simply asking an LLM for a document is little more than a party trick. While we need experimentation, people are trying to force AI into every conversation and neglecting the basics.

#5 Unappreciated costs

AI is creating jobs; at the moment many of those jobs are low-paid, tedious and hidden away behind sub-contractors in Africa – tagging and moderation, for example.

Then there is the great unmentionable: Power consumption.

In an age of climate change, when we know the damage our power systems are doing to the environment, it is disgusting that these systems are given away free.

Please don’t say “they are powered by renewables.” The world hasn’t finished removing fossil fuels, so every data centre powered by renewables reduces the amount of fossil fuel removed from the mix. Nor is it just power consumption: there are grid connections too.

Where I live in London, companies are building data centres. But London has a shortage of homes. The data centres v. homes debate is only just getting going. Sometimes it can feel like the machines already have mastery: people are losing jobs and homes to machines.

#6 The rise of the right (sorry)

The AI cheerleaders – Thiel, Musk, Andreessen, Altman, etc. – are aligned with the right of American politics. It sometimes seems the AI revolution and the destruction of the post-1945 world order are the same thing. For AI to succeed, must we jettison post-1945 morals?

The arrival of the internet was associated with the creation of opportunities. People like Vint Cerf and Tim Berners-Lee were positive role models who kept their politics quiet. The American oligarchs leading the AI boom envisage a Brave New World rather than The Culture.

(Anyone else see Huxley’s “T” icon in the Tesla badge?)

Looping back, ironically, the “absolute free speech” espoused by those oligarchs is not extended to anyone expressing scepticism about the brave new world.

Ultimately, it would be easier to be positive about AI if, instead of emphasising job cuts, we talked about new opportunities. But that itself is a political decision that few talk about.




AI or not AI: you still need to test

“artificial intelligence chatbot Grok being used to create non-consensual sexualised deepfake images of women and girls” BBC website

The Grok story would have the power to shock even if it hadn’t become almost routine – both for Elon Musk and for AI. It serves to demonstrate that AI systems need testing – and the test results need acting on. Machines have always done unexpected things; that’s why we test. As they do more and get more powerful they need more testing.

I learned long ago that just because something is syntactically correct, and may even compile, does not mean it delivers the desired result. And even if something does deliver a result who knows if it is the correct result?

AI systems, and AI-generated code, still need testing. I don’t know how to be any clearer.

The Grok case is pretty extreme. In many ways the system does what it was designed to do, but a good tester would have noticed, and reported, that it went beyond expectations and delivered ethically dubious results.

Our previous generation of technology could mess up just as badly: look at the Post Office Horizon system, which put people in gaol and led to suicides. And humans covered it up.

Hopefully, once we understand AI and what it does we can avoid these things. But just this morning I discovered the AI Incident Database.

Ethics

Some of these things – like autonomous cars hitting pedestrians – are just good old-fashioned failures. They are worse because we are asking the machines to do more and there are many more variables which aren’t tested for. Other things, like Grok undressing people, are simply things humans know are wrong – so obviously wrong that we don’t expect it to be coded, and we don’t expect to need to test for it. There is probably no law against a computer undressing people but it is ethically wrong.

Testing computer systems for ethics isn’t something testers have had to spend much time on before. Complicating matters, ethics are difficult to define and vary across people, countries and cultures. I’m pretty sure that what is ethically acceptable to Elon Musk isn’t acceptable to me. But then, gun ownership is ethically acceptable in the USA but not here in the UK. Whose ethics are we testing for?

But even at a more basic level how can you be sure your AI generated code is producing what you expect?

Imagine you have your AI generate code for an invoicing system. Did you ask it to include VAT? And if you did, does it apply it correctly? To the correct products? Does it work correctly across national boundaries? VAT rates and exemptions differ across countries.

Even if you give your AI your national VAT rule book, can you be sure it produced the right results?

You still need to test it.
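To make that concrete, here is a minimal sketch of the kind of human-written checks you’d still need. The function, product categories and the naive “AI output” are hypothetical illustrations, not a real rule book or any real AI’s code:

```python
# A minimal sketch of hand-written checks for AI-generated invoicing code.
# calculate_vat(), the categories and the flat-rate "AI output" are invented
# for illustration only.

def calculate_vat(net_amount: float, category: str, country: str) -> float:
    # Imagine the AI produced this naive version: a flat 20% on everything.
    return round(net_amount * 0.20, 2)

def test_standard_rated_goods_uk():
    # UK standard rate of 20% - the naive code happens to get this right
    assert calculate_vat(100.00, "electronics", "UK") == 20.00

def test_zero_rated_goods_uk():
    # Printed books are zero-rated in the UK - this test fails and exposes the gap
    assert calculate_vat(100.00, "books", "UK") == 0.00

def test_other_country():
    # Rates differ across borders - another expectation a human has to supply
    assert calculate_vat(100.00, "electronics", "DE") != 20.00

if __name__ == "__main__":
    for test in (test_standard_rated_goods_uk, test_zero_rated_goods_uk, test_other_country):
        try:
            test()
            print(f"PASS {test.__name__}")
        except AssertionError:
            print(f"FAIL {test.__name__}")
```

The point is not the code but the expected values: somebody who understands the VAT rules has to decide what “right” looks like.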

Which means: there is testing work to be done. And since the system does more there is more to test.

Sure you can have an AI write tests but are you confident in those tests?

Safe AI in regulated domains

My old friend Paul Massey published a video before Christmas, Safe AI Coding in Regulated Domains.

Paul fed a specification into an AI and generated some code. To test it he fed the spec into an AI and asked it to generate tests. Not all the tests passed: the AI-generated code contained bugs. Fortunately the AI-generated tests found them and Paul fixed them.

Paul then applied mutation testing to the code: >= became <=, == became != and so on. He ran the tests again: only 30% of the tests which should have failed did fail. Think about that: 70% of the tests passed when they should have failed.
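For anyone who hasn’t met mutation testing, here is a minimal illustration of what a mutation is and how a weak test lets the mutant survive. The function and tests are invented for this example, not taken from Paul’s video:

```python
# A minimal mutation-testing illustration (invented example).

def can_withdraw(balance: float, amount: float) -> bool:
    """Original code: allow withdrawal when funds are sufficient."""
    return balance >= amount

def can_withdraw_mutant(balance: float, amount: float) -> bool:
    """Mutant: >= has been flipped to <=, as a mutation tool would do."""
    return balance <= amount

def weak_test(fn) -> bool:
    # Only checks the boundary case where balance equals amount,
    # so both the original and the mutant pass: the mutant "survives".
    return fn(100.0, 100.0) is True

def stronger_test(fn) -> bool:
    # Also checks an unequal case, which kills the mutant.
    return fn(100.0, 100.0) is True and fn(100.0, 50.0) is True

if __name__ == "__main__":
    print("weak test, original:", weak_test(can_withdraw))             # True
    print("weak test, mutant:  ", weak_test(can_withdraw_mutant))      # True - mutant survives
    print("strong test, mutant:", stronger_test(can_withdraw_mutant))  # False - mutant killed
```

A surviving mutant means the test suite would not have noticed that particular bug, which is exactly what Paul’s 70% figure is telling us.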

This leaves us with two facts:

  • AI can generate code with bugs
  • AI-generated tests are not sufficient

Paul also pointed out that the specification contained gaps. This fits with older work from Capers Jones where he discusses defects in specifications. I can’t remember if it was Jones or Tom Gilb (another old friend) who claims that 30% of defects are defects in the specification.

Now, good specifications take time to write – even with AI assistance. If you are happy for the AI to make all your decisions then OK, but if you have ideas on how you want the system to be you need humans in the loop. Anyone who has written a specification will tell you how often stakeholders don’t agree on what is wanted.

Do you test your spec?

Where do your tests come from?

AI may help but is not enough.

Again, AI may help with the writing but it will need humans in the loop.

In fact, even if AI helps write the spec, helps write the code and helps with the tests, things are going to get harder. There will be more systems created, more code created, more tests needed.

Jevons paradox is at work: when things get more efficient we use more of them. The question is not so much “Can AI write all the code?” as “How are we going to test everything?”

Enter ethical testing

When spec, code and test took time and many people, there were more opportunities for someone to raise the question of ethics. Having reduced the time and people in all those earlier steps, there is now a new step that needs to be included: ethical testing.

The process of programming was never just about cutting code nor was the writing of the code the limiting factor – typing is not the bottleneck. In the creation of a system – specification, coding, testing – lots of decisions were being made. Those decisions still need making. Ignoring them simply lets an AI decide, for better or worse.

Do you know all the decisions the AI silently made? Do all your stakeholders agree with those decisions? Are those decisions legal and ethical?



Rational irrationality and anarchy in the workplace

This is not the post I intended to open 2026 with. In recent weeks several threads have come together to challenge my thinking and I feel compelled to share them with my readers. So, a slightly long and philosophical start to the new year!

Amethodical development

A while back I read Amethodical systems development (Truex, Baskerville and Travis, 2000); it has stayed with me and has been a major influence on my reasoning about software. The authors argue that every development contains unique aspects and is not replicable. However, by focusing on “a method” engineering has elevated the processes used to a privileged position and neglected what actually happens. To some degree the emergence of methods based on experience (e.g. XP) in the years after the paper addressed some of this concern, but not all of it. It also means that Scrum, XP, PRINCE2, SSADM, SAFe – or any other brand-name method – overlooks many important factors. (Hence I always describe Xanpan as a model for what you can create yourself.)

Rational irrationality

For a few years now I’ve been meaning to write a blog post about Rational Irrationality. This is a phenomenon I’ve seen again and again inside corporate environments. In a nutshell, it is the interplay of rational processes combining to produce irrational systems. Rational processes and systems are put in place with the best intentions, but once you have a few independently rational processes in place, the interaction between them becomes irrational.

Perhaps the simplest example I remember was a Senior BA who refused to let his analysts look at user requests until the Technical Architects had proposed a design. He reasoned that his BAs were stretched and many user requests never went anywhere, so until the Architects had given a request their backing he wasn’t going to allocate any BA time. Meanwhile, the Lead Technical Architect, quite rationally, didn’t want his people designing systems which hadn’t been scoped – how could they design something if they didn’t know what it was? Both were acting rationally but the result was irrational.

The two times when rational irrationality seems to peak are around project inception and kick-off, and when moving to live production environments – but it is everywhere. Perhaps the problem is not with irrational processes and corporations but with me – and maybe you. The problem could be less these systems and more our engineering brains, which expect there to be a rational, systematic, logical way through.

Just because my brain can see these systems interlocking, connecting, blocking and deadlocking doesn’t mean others can. Perhaps it is because I’m an engineer, or perhaps because I’m dyslexic and visualise, but I can see these things like machines and gears in my head, the same way I used to imagine code working.

Garbage Can Model

I revisited these ideas a few months ago when I discovered the Garbage Can Model – I must thank Mark Smalley and his book AI and the Being Between Us.

The garbage can model goes a long way to explaining rational irrationality and how it comes about: despite what an organization says, and despite artefacts like org charts, the organization is an anarchy. Attempts to control it as a rational thing don’t work.

Now, when I think back to my experience, organized anarchy is probably a better mental model than a rational entity in many places. The interplay of all those rational processes creates irrationality and disconnects people. The more people try to join up work the harder it gets – too many connections. Declare independence and you are seen as disruptive and “not a team player”; the corporate antibodies come out.

The garbage can holds a collection of problems. These may get resolved, delayed or subsumed into something else. These problems are complicated by fluid engagement from stakeholders (they only sometimes join meetings), unclear technology and problematic preferences.

There are solutions too, although not necessarily solutions to the problems in the garbage can. These solutions are products, perhaps backed by vendors but not necessarily so. These solutions are looking for problems they can be applied to. Once in a while decision opportunities arise – in my experience typically when money needs allocating or a deadline hits. Still, delaying a decision means problems remain.

What ideals are lying around?

The economist Milton Friedman once said: “Only a crisis—actual or perceived—produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes politically inevitable.”

Friedman is arguing for the creation of products (policy alternatives) which then wait until a decision point. Friedman is telling us how to work in the garbage can, whether economics, systems development, or geo-politics.

From an amethodical view, engineers are trying to create rational solutions and processes while others are biding their time until their product or solution can have its day.

Fake it

Of course, when we look back and try to explain it – or when someone says “Can you do the same for me?” – we rationalise it. We don’t admit it was a garbage can or lacked method, that some stakeholders never turned up or decisions were delayed until they became irrelevant. We explain it as if it was meant to be, or as Dave Parnas put it “A rational design process: How and why to fake it.”

After all, who would admit their process was amethodical and wasn’t the result of applying career-enhancing frameworks? Or that their decision making was little more than a garbage can that only produced decisions when crisis hit?

Now, when someone believes their organisation is rational – maybe they think it follows SAFe, or maybe they believe the hierarchy works – they treat it as such. But their mental model does not reflect the reality. Consequently the system doesn’t respond as they expect; even if it isn’t anarchy, it looks like it.

So what good is this?

I would like to think that I will stop looking for engineering solutions. That I will accept more of the rational irrationality in corporations. That I will practice patience.

I think it is more likely that, simply by knowing my engineering brain is wrong to expect a well-oiled machine, I will be more tolerant. At the very least, I will tell myself to tolerate more. Still, I will endeavour to make my bit of the world a little better, and a little more rational. Perhaps knowing the world is irrational will help me be rational.




Why do we have runaway WIP?

One question that comes up regularly is: “Why do managers take on too much Work In Progress?” – or “Why is it so hard to reduce WIP?”

Once you understand how more work-in-progress means less work done, it becomes a plague you struggle to overcome. And for some reason it is always other people who allow too much WIP. Over this last weekend I had some insights…

You see, last week I realised I had too much WIP myself. As I wandered across Hampstead Heath on Sunday I found myself wondering “Why have I allowed myself to take on too much WIP?” I came up with some answers which I think might apply more generally. They may even go a little way to answering that “Why do we have too much WIP?” question.

Always optimistic

As individuals and organisations we are repeatedly optimistic. We confuse “what we want to do” with “what we can do” or “what we have capacity to do.” Indeed, one can argue that if we were not optimistic, if we did not try to do too much and do things beyond our ability, we would never learn and grow. Perhaps taking on too much and then discarding some is the natural state of affairs. In which case, we need to acknowledge that we sometimes need to cull work.

Can’t decide

Ultimately it’s a question of not being able to prioritise and decide. But first one has to realise that there is a problem and a decision needs to be made. In my own case I “should” be able to just make that decision. In a work environment there may be multiple people who have influence over what is done and not done. It may well be unclear who can decide to say No and do less.

Yes is easier than No

Saying Yes is easy; saying No is hard, to oneself and to other people. And once you have said Yes there is an element of commitment. Even saying to oneself “I’m not going to do this” can be hard; telling others is harder, even before they object.

Postponing a decision makes it go away, for a while at least. But it still takes up cognitive space, part of our brain knows it will still need to be made. Unfortunately postponing a decision can also mean something fails: it delivers too late, or it deprives something else of resources and that fails.

In the IT/digital world people are accustomed to failure. Sometimes it feels like the expectation is for failure; at least one organisation I’ve worked with didn’t know how to manage success. Their processes and procedures were set up in the expectation that work would go wrong. When a project was proceeding well, obstacles appeared.

It can be more acceptable to be seen to fail while trying two things than to succeed with one and fail another by not trying. Consequently, even knowing that saying Yes to two different pieces of work will increase WIP, slow delivery and increase the risk to both, you still say Yes. Failure will probably happen but it is more acceptable than saying No in the first place.

Just too interesting

Personally, I’m just interested in too many things. Half of my WIP exists because I like switching between things. I want to do too much. And I need to switch between things: my brain gets bored with one venture so needs to switch to something else. I know I should “do one, do it to completion and move on” but that requires discipline.

This applies directly to companies. They want it all, everything looks good, and each piece of work has its own supporters who will be upset if it doesn’t happen, so they lobby for it. Again, it is more acceptable to fail at many things than to focus, succeed at one and postpone the others.

While I’m waiting

If you understand WIP you probably understand queuing theory. While we know that we should work with queuing theory and reduce WIP, there are some things which still entail queues. For example, you need to speak to someone, perhaps for market research. It takes time to book them and it takes time before you speak with them. What do you do in the meantime? Surely you could do something value-adding while you wait?

It’s bad enough when I do this to myself; in organisations the opportunities are unlimited – especially when you need to interface with people and teams outside your immediate area.
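As an aside on the queuing theory: Little’s Law (lead time = WIP / throughput) is a rough way to see why piling up WIP makes everything wait longer. The numbers in this sketch are invented purely for illustration:

```python
# A rough illustration of why more WIP delays work, using Little's Law
# (average lead time = WIP / throughput). The numbers are invented.

def average_lead_time(wip_items: int, throughput_per_week: float) -> float:
    """Little's Law: average time an item spends in the system."""
    return wip_items / throughput_per_week

if __name__ == "__main__":
    throughput = 2.0  # items finished per week, assumed fixed
    for wip in (2, 6, 12):
        weeks = average_lead_time(wip, throughput)
        print(f"WIP={wip:>2}: each item takes about {weeks:.0f} weeks to finish")
    # And in practice throughput falls as WIP rises (context switching),
    # so the real delay is worse than this simple division suggests.
```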

Making many bets

Perhaps a variation on “can’t decide”. One of the reasons I have several projects on the go is that I don’t know which one is the “right” one to pursue; I don’t know which one(s) will pay off. Therefore I invest a little in each one. I make many small bets.

If I could decide, if I could throw all my efforts into one, it would have a better chance of success. But I don’t know. Nor is it clear how to decide which one to back. Unless I work on it I won’t know.

I’m sure that this sometimes plays out explicitly in companies. I’m sure some will decide to make small bets on three or four projects and see what happens. However, I think more often than not this is done without such explicit logic. It happens by accident.

Now what?

So now I have a better understanding of how excessive WIP comes to be. The question I have now is: how do I change this?


Agile: not Dead, but evolving

Sorry. I’ve deliberately avoided the click-bait “Agile is Dead” topic, until now.

For the last few years I’ve delivered a lecture on Agile to Oxford University students and this year the tutor specifically asked me to say something about the state of agile. When I looked over last year’s slides I saw I was already talking about this. I’ll write more about it soon; if you can’t wait, check out “Xanpan 2021” from Frug’Agile en Arménie.

So, is Agile Dead?

Clearly not. (Albeit agile mania probably is.)

Agile is all around us. Teams work in sprints, hold daily “stand up” meetings, tools like Jira continue to sell, requirements documents are full of user stories, business journals regularly talk about “agile” and “agility” without any reference to software.

That doesn’t mean the result is perfect. The “agile” which prevails today falls short of what I and others in the community dreamed of. As I’ve said before, Agile won the war but lost the peace.

Right now, agile isn’t getting attention because AI is. AI is soaking up all the discretionary time and budget, so agile is squeezed out. Ironically, to get the most from AI you need the learning processes embedded in Agile to find better ways. Right now we don’t know the best way to use AI. We are in a vast experimental phase and we need more of the learning and feedback found in agile.

Back to the question, is Agile Dead?

The common agile that prevails is a watered-down, corporately acceptable version that is still a lot better than what went before.

But then, most people don’t remember before agile. The Big Up Front practices which gave us massive requirements and functional specifications; the defined processes and ISO-9000 process audits; the guilt of “not doing it properly”; and the inability of those doing the work to influence how it was done.

While many of those problems have resurfaced under other names in the agile world, things are still a lot better. If we had stayed with that approach there would be no automatic updates to the apps on your phone, no digital business, nor much of the other technology that surrounds us. Maybe Apple and Google would be OK, but legacy banks, airlines, telcos and governments would be even worse than they are now, and a million start-ups would never have started.

In truth, many of the “waterfall” processes were never followed. I worked on exactly one project that did it by the book, Railtrack Aplan. Officially it was a success: it went live. But what went live was a shadow of what was supposed to be delivered.

Everywhere else did something that (kind of) worked and then felt guilty for not doing it “properly”. When I was at Reuters they tried to force us to work by the book; they destroyed much of their own capability in the process.

What has agile ever given us?

Agile showed there was another way and added democracy by opening the debate on “how we work”. The Internet helped agile spread and opened up that debate in a way that had never been possible before.

If nothing else Agile gave us a better reference model, a better way of describing our work.

Actually, it gave us several reference models: Scrum, XP, DSDM, etc. Always, and everywhere, people adapt: when processes work they use them; when defined processes don’t work they work around them. For a while agile licensed that working around; experiments were everywhere.

Agile was not so much new in itself as a new combination of ideas which were lying around.

The engineering practices in XP descend from the 1970s quality movement based on the work of Phil Crosby and W. Edwards Deming.

The self-organizing teams in Scrum drew on sociotechnical systems thinking, first recognised in the 1940s and 1950s by Eric Trist, later applied at P&G and Topeka, and the genesis of Senge’s organizational learning.

The inspect and adapt philosophy in Tom Gilb’s Evo and then Scrum comes from Stafford Beer and management cybernetics.

Lean thinking draws on many of these ideas directly but lean also begat its own software process in Kanban.

As for the Frankenstein’s monster that is SAFe… you can decide for yourself whether SAFe is agile but it is definitely not lightweight. Because of its size alone it is hard to adapt SAFe and involve the workers.

The return of Agile?

Can we expect Agile to return to its previous prominence? Will the day come when everyone wants to hire a Scrum master? No.

That has passed. Organisations have ticked the Agile box – if only because they have moved on to AI. The days of big agile transformations are largely over because companies have declared success.

Put it another way: Management fads don’t return.

Imperfect agile is here, hopefully enough of it has been adopted that companies will continue to improve.

More importantly, the ideas underlying agile – quality, sociotechnical systems, cybernetics, learning – are still valid and will continue to have influence. Some companies will embrace them and get a lot from them, some will continue to reject them, and most will dip in and out. These ideas will return, albeit in a different package and with a different name.

But none of that means agile is dead. Agile mania might be over but agile is continuing to evolve out of sight. Agile wasn’t the first coming of these ideas and it won’t be the last. Next post I’ll talk more about how I see it evolving.




Thanks to Fritz Geller-Grimm for the parrot picture under CC license


Listen to my Top-5 intriguing tips for using OKRs

My Top-5 tips for using OKRs – the podcast!

Rael Bricker has just published his latest Top Five podcast in which I discuss my Top-5 tips for working with Objectives and Key Results.

You can listen on Rael’s website. Or get it from Apple, Spotify, Amazon and many more all via Rael’s page.

To tempt you (or perhaps to give the game away?) the top tips I talk about are:

1) OKRs are a feedback mechanism

2) Involve as many people as possible in setting OKRs

3) OKRs are not a to-do list, they describe a desired outcome

4) Decide where Business as Usual fits in: BAU is boring but it can still badly disrupt delivery

5) Ambitious or predictable? The OKR hype machine makes much of ambition but sometimes you will want to be boring and predictable

This is probably a good time to remind everyone that I’m always available to advise, mentor or train on OKRs. Right now I’m offering my subscribers the chance to get 30 minutes of my time for free: just book me and pick my brains.


6 ways my OKRs are different

Before I published Succeeding with OKRs in Agile I worried that my message about OKRs was different. Many people see OKRs as a blunt tool of management to enforce evil plans. I see OKRs as an enabling constraint, a liberating structure and a mechanism for bringing about a better way of working.

The way I see OKRs may be different to some, but I am far from alone: many people who work with OKRs tend towards my view. Like so many tools, OKRs can be used for good or evil – agile too can be used for good or evil. In my book OKRs are less of an end point and more of a starting point: implementing OKRs should drive other changes in an organisation.

Here are six ways I see OKRs as an enabler – or perhaps six ways OKRs are misinterpreted.

1. OKRs harness the power of problem solving teams

OKRs do not tell teams what to do. An OKR describes a problem the team is tasked with solving. It might not be a problem as such; it might be a challenge or an opportunity. Ultimately it is a desired outcome.

Deciding, designing and delivering that solution is the job of the team. This is akin to the military idea of mission command: the team have a mission to achieve the OKR using the resources at their disposal.

2. OKRs define the acceptable outcome

OKRs define the desired outcome – hence I wish they were called OACs, Outcomes and Acceptance Criteria. Think of it as Test First Management: the objective is the desired outcome, and the key results are the measurements used to judge success.

It should be obvious from this that key results are not a to-do list. Nor are key results a work breakdown which, when executed, will deliver the desired outcome. When the problem is set the solution is probably unknown; even if it is known there are few details. The team are problem solvers, not instruction takers; their job is to solve the problem.

The NASA moon landings are possibly the greatest example of a problem-solving team deciding the solution and delivering it. When John F. Kennedy set the “man on the moon” objective nobody knew how it was to be achieved. It took several years before the lunar orbit rendezvous method was agreed.

3. Aspirations optional

The moon landings are a great example of setting an ambitious, inspiring, goal and then looking for people to make the impossible happen. It is not for nothing that the likes of Google talk about “moonshot” projects.

That is great, I love the idea of such goals and of people surprising themselves. But… not every organisation is ready for this approach yet. Before people rise to moonshot performance they need to be secure, they need psychological safety and they need to feel that failure will not hurt them or their career.

Most organisations are far from that. Most want predictability, even certainty. There are those who will say “Put psychological safety in place before OKRs.” I say no: put OKRs in place, accept routine, predictable results, and work towards building both psychological safety and ambition.

4. Bottom up over top-down builds improvement

Many see OKRs as something that are set by senior people and gifted to workers for them to deliver. I don’t.

I want those doing the work – that problem solving team – to have a voice in setting the OKRs. In my mind the leadership describe the ultimate destination, the ultimate purpose and mission of the company or programme, and they ask the teams for help. The teams reply with OKRs.

The team has a voice and this brings their knowledge into play. The team will know more about what the technology can do, and they probably know more about customer needs and competitor products than the executives. So the team reply with their interpretation of how all these things fit together.

Thus starts a feedback loop where both the team and the executives contribute. This builds a strategy debugger and makes for alignment between teams and bigger goals.

5. Business as usual is welcome

There are those who say OKRs are about aspirations and projects. I’m happy to go with that if the problem solving team have no other responsibilities.

But if the team are expected to do other, “business as usual” or “keeping the lights on” work then that needs to be reflected in their OKRs – I write an OKR Zero to catch it. This is necessary to make others – execs and teams – aware of the work; that allows for the strategy-debugging and alignment discussion, and it also opens the door to saying “No, we’ll stop doing that” or “We’ll give that to someone else.”

6. OKRs are everything, OKRs are the management method

Finally, if you are going to adopt OKRs then you don’t want other objectives, side projects, business as usual and competing demands getting in the way. It is no use establishing a problem-solving team, focusing them on an OKR and then saying “By the way, don’t forget your personal goals agreed with HR”.

OKRs summarise the aims of a team. That is why they are discussed with others, why they include work which might get in the way and why they are used to debug the company strategy and operation.

Thoughts? Let me know, or book a call for a chat.


What I’ve been getting wrong about PDCA

I’ve been teaching planning lately and once again it seems to me that the PDCA cycle – aka the Shewhart or Deming cycle – is pretty much the core of all planning. Or rather, it is the basis for all multi-pass planning – when iteration is allowed. (One-pass planning, big-up-front design “BUFD”, is fine for trivial situations but always has problems in complex ones.)

So, again I’m reminded of why I don’t like PDCA. Two reasons.

Adjust over Act

When the fourth step is labelled “Act” it fails to speak to me. “Act?” I ask, “Didn’t we just DO?” Easily fixed: label step 4 “Adjust” – many people do. Now it says, “Plan a bit, do a bit, check the results, now adjust the plan or the way you are working.” That makes more sense to me.

4 unequal steps

Secondly, the typical presentation – like my diagram above – makes it look like the four steps are equal, and that is not the case. Just in terms of the time they take, the fourth is almost always the shortest. Which of the other steps dominates is going to depend on both the planning culture where you are and the amount of work that needs doing.

Many places will put a lot of time and effort into planning. While this can be entirely justified if you are building something that lives depend on, planning suffers from diminishing returns. It is often far better to plan a little, run around the circle and plan some more. Planning is learning, but so is doing: you can learn more from a few minutes of doing than from hours of planning.

Other places will skip planning altogether and launch into doing. While over-planning is problematic, jumping straight in is also a problem. Either way, in terms of the PDCA cycle, planning is not an equal element.

Now, when planning is skipped or rushed, the doing phase is going to expand. In fact, on a really big endeavour which needs a lot of planning the doing phase can also be very large. Of course, it’s entirely possible that your planning is so excellent that you see a quick way to deliver. But again, doing is not an equal quarter.

Test-fix-test-fix-test doom loop

The same goes for the check (or test) phase. It can be long or short. If your planning was good, and your doing was good quality, then you can hope that the check phase is really small. It does happen. But too often there is little quality in the planning, so you actually end up with a short circuit as the checks fail and more doing is needed to fix things. (This is the test-fix-test cycle that can destroy any schedule.)

I wouldn’t expect Plan, Do and Check to be equal sizes; depending on your organisation, culture and the nature of the thing you are doing, I would expect one to dominate.

But I don’t see that in Adjust. Adjust is the forgotten child. Indeed, in many projects, especially at the end, everything just goes hell for leather do-check-do-check-…. Adjust, and even planning, goes out the window.

Using PDCA successfully

Even in the best places Adjust is always going to be the shortest of the four. The irony is that it is probably the most important. It is the step where reflection and improvement happen.

The truth is I’ve always struggled to apply the PDCA cycle formally. But when I look back at almost every single engagement, the actual work can be mapped to PDCA. It is fundamental, whether building a product, running sprints, setting and executing OKRs or doing almost any other non-trivial work. It’s just that the steps are not equally time-consuming or equally respected.

And the secret to making it work well? Simple: go around fast. A little planning, a little doing, a quick check, small adjustments and go again. Learn in the planning, learn in the doing, learn all the way round and put that learning into action.
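If it helps to see the shape of that advice, here is a toy sketch. Nothing in it is a real process; it only shows that short cycles give you many chances to check and adjust, while one big pass gives you just one:

```python
# A toy illustration of "go around fast": with the same total effort,
# short PDCA cycles give many chances to adjust, one big cycle gives one.
# The effort numbers are arbitrary and only illustrate the loop's shape.

def run_pdca(total_effort: int, cycle_length: int) -> int:
    """Return how many times we got to check and adjust."""
    adjustments = 0
    spent = 0
    while spent < total_effort:
        spent += cycle_length   # plan a bit, do a bit, check the result...
        adjustments += 1        # ...and take the chance to adjust
    return adjustments

if __name__ == "__main__":
    effort = 40  # arbitrary units of effort
    print("one big pass: ", run_pdca(effort, 40), "chance to adjust")
    print("short cycles: ", run_pdca(effort, 5), "chances to adjust")
```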
