allan kelly

Agility over agile, more than word play

Is it just me or is the world moving away from agile and towards agility?

That may sound like a silly question. I’m prepared to admit it comes as much from what I want to happen as from what is happening. Perhaps it is confirmation bias, but it feels like there is a change in the air. It’s a change I’m all for – I was talking about the need for agility over agile more than 10 years ago (Objective Agility).

It might seem like a small semantic change to go from “agile” to “agility” but it is a change from “doing agile” to “having agility.” It is a move from the “means” to the “end,” from “the way of doing things” to “the outcome.” Rather than emphasising “the way people work,” the emphasis is on “what is the end result?”

That is a good thing. By definition, agile methods and agile frameworks (Scrum, SAFe, Kanban, even Xanpan) describe how to work. There is an assumption that if one works that way one will achieve agility. In reality there are many different routes to agility. Some look like Scrum, some like Kanban, others don’t. Some people find their own way to agility.

I always introduce agile by asking: “What do you want agile to do for you?” The agile toolkit can be used for many ends.

With agility the answers are pre-defined: agility is both the ability to move fast and the ability to change direction and manoeuvre with haste. To do that, information is needed (learning), and that information needs to be acted on (decision making) – feedback loops again. Maximising agility means pushing that learning and decision making down to the lowest level: giving the people who do the work authority and trusting them. This is where digital tools come in, and it is why digital transformation demands agility.

Agile Beyond Software

There are two forces driving this change. First off is the expansion of agile beyond software development and into many other fields. As I’ve said before: digital tools spread the agile virus.

As other fields – marketing, law, research, and more – adopt methods which were originally for software engineering, some tools need changing. Sure, some tools work just the same – think daily stand-up meetings. Others need rethinking: test-first thinking needs a little work when testing is not the norm. And some don’t work at all: unit testing.

A couple of years ago I saw Scrum forced on people not in software. These people did not always work in teams, they time-sliced between different activities, and they had to handle a lot of unplanned, urgent work. The emphasis was on “doing agile” rather than “being agile”. Despite some valiant work it was a mess.

As we apply agile thinking away from software we need to emphasise the outcome rather than the method.

Business Agility demands more

Second, talking about agility, and in particular business agility, puts the emphasis on the whole – the wider context. That is to say: you can have the most agile team ever but the wider organization can stunt agility. The wider organization also needs to hear customers and adjust its efforts: budgets, portfolio, governance and other teams also need to work in an agile way so the whole enterprise can have agility.

Summary

Yes: agility over agile might be semantics but it is an opportunity to change the emphasis:

  1. Prioritise outcomes over methods
  2. Seeking agility outside of technologists means embracing more variation in how teams work
  3. It is not enough for teams to be agile, the wider enterprise needs to challenge how it works

Finally, agility is not binary. One might work agile or might not work agile, but agility is measured on a scale. How much agility does your company have? – it might be zero, it might be 10, it can always go higher.


10 rules of thumb for User Stories

There is a universe somewhere in which my blog is always on topic, I have a theme and I always produce posts which accord with that theme. However, in this universe my mind flits around and, for better or worse, this blog carries what is on my mind at the time. Sometimes that is grand strategy, sometimes hands-on detail, sometimes the nature of digital work and sometimes frustration.

This week I’m dusting off my slides for next month’s User Stories workshop so I thought I’d share my 10 rules of thumb for User Stories:

1 – Stories are usually written about a hands-on user, someone who actually puts their hands on the keyboard

2 – If your story begins “As a user” then save the ink and delete the words – “as a user” adds nothing. “As a customer” isn’t a lot better. In both cases think harder and come up with a better “user”. Be as specific as you can.

3 – Stories which have systems in the user role should be rethought (“As the Order Entry System I want the E-Mail System to send e-mail so that I can notify the humans”). Step back and think “Who benefits from System-E and System-F working together?” then rework the story with that person in mind, seeing the combined system (e.g. “As a Branch Manager I want e-mail notifications sent when an order is entered.”)

4 – Stories should be big enough to deliver business value but small enough to be completed in the near future. I’m prepared to accept a maximum of 2 weeks but others would challenge that and say “2 days is the max.” (Small and valuable actually constitute my 2 Golden Rules, where I also discuss Epics and Tasks.)

5 – There is no universal right size for a story. Teams differ widely in what the best size is (most efficient, most understandable, quickest to deliver, biggest bang for your buck…).

6 – In general, the greater the gap (physical distance, cultural norms, history of working in the domain, employment status, education level, etc.) between the story writer (e.g. BA) and the receiver (e.g. Tester or Coder), the more detail will be expected by one side or the other.

7 – If the User Story format “As a … I want to … So that …” isn’t readable then write something that is. Who, What and Why are really useful to know and the standard format usually works well, but if it doesn’t, write something that does. There are no prizes for “the perfect story.”

8 – Beware stories about team members: stories which begin “As a Tester I …” or “As a Product Owner …” are red flags and should be questioned. Unless you are actually building something for people like yourselves (e.g. a programming team writing an IDE) then stories should be about end users and customers. Very, very occasionally it can make sense to write something for the team themselves (e.g. “As a Tester I want a log of all database actions so I can validate changes”) but before you accept such stories question them.

9 – Stories should be testable: if you can’t see how the story can be tested think again. If you are asked to complete a story which you can’t test then simply mark it as done. If it can’t be tested then nobody can prove you haven’t done it. However, in most cases someone will soon report it as not working and you will know how to test it.

10 – Remember the old adage “Stories are a placeholder for a conversation”? Well, if you have the conversation all sins are forgiven. No matter what the flaws are, if you have the conversation you can address them.
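
Putting several of these rules together, here is a small worked example – the user, wording and acceptance criteria are invented for illustration (the Branch Manager is borrowed from rule 3), not a template to copy:

        As a Branch Manager
        I want a daily summary of orders entered at my branch
        So that I can spot missed orders before customers call

        Acceptance criteria: the summary arrives by 9am, covers the previous
        24 hours, and shows an explicit zero when no orders were entered.

A specific, hands-on user (rules 1 and 2), small enough to deliver within days (rule 4), readable (rule 7) and testable (rule 9).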

Needless to say, there is more about these topics in the Little Book of Requirements and User Stories.

Now, just because I go around saying radical things like “Nuke the backlog” does not mean I want to ditch the wonderful who-what-why structure of user stories. You might throw away a lot of user stories, but when thinking about the work to be done in the post-nuclear supersprint then by all means: write user stories.


OKRs like it’s 2024 not 1974

It’s not 1970 any more; even 1974 was 50 years ago. I used to make this point regularly when discussing the project model and #NoProjects. Now though I want to make the same point about OKRs.

Fashion in the 1970s

The way some people talk about OKRs you might think it was 1974: OKRs are a tool of command and control, they are given to workers by managers, workers have little or no say in what the objectives are, the key results are little more than a to-do list and there is an unwritten assumption that if key results 1, 2 and 3 are done then the objective will be miraculously achieved. In fact, those objectives are themselves often just “things someone thinks should be done” and shouldn’t be questioned.

I’d like to say this view is confined to older OKR books. However, while it is not so common in more recent books, many individuals still carry these assumptions.

A number of things have changed since OKRs were first created. The digital revolution might be the most obvious but actually digital is only indirectly implicated: digital lies behind two other forces here.

First, we’ve had the agile revolution: not only does agile advocate self-organising teams but workers, especially professional knowledge workers, have come to expect autonomy and authority over the work they do. This is not confined to agile; it is also true of the Millennial and Generation-Z workers who have recently entered the workforce.

Digital change is at work here: digital tools underpin agile, and Millennials and Gen-Z have grown up with digital tools. Digital tools magnify the power of workers while making it essential the workers have the authority to use the tool effectively and make decisions.

Having managers give OKRs to workers, without letting the workers have a voice in setting the OKRs, runs completely against both agile and generational approaches.

Second, in a world where climate change and war threaten our very existence, in a world where supposedly safe banks like Silicon Valley Bank and Lehman Brothers have failed, and where companies like Thames Water have become a byword for greed over society, many are demanding more meaning and purpose in their work – especially those Millennials.

Simply “doing stuff” at work is not enough. People want to make a difference. Which is why outcomes matter more than ever. Not every OKR is going to result in reduced CO2 emissions but having outcomes which make the world a better place gives meaning to work. Having outcomes which build towards a clear meaningful purpose has always been important to people but now it is more important than ever.

Add to that the increased volatility, uncertainty and complexity of our world, and the ambiguous nature of many events, and it is no longer good enough to tell people what to do. Work needs to have meaning, both so people can commit to it and so they can decide what the right thing to do is.

In 2024 the world is digital and the world is VUCA; workers demand respect, meaning, and to be treated like partners not gofers.

OKRs are a powerful management tool but they need to be applied like it is 2024 not 1974.


This blog is back

Depending on how you follow my writing you might have noticed this blog burst back into life a couple of weeks ago. And those who are particularly observant might be thinking “Where did he go?” and “Why are these random posts appearing dated months ago?”

In fact, I was off experimenting with Medium for the last six months. To cut a long story short, the experiment didn’t show any great benefits and now looks like a “mistake.” Well, it was a mistake I had to make to understand Medium; if there was a mistake, it was not returning here a couple of months ago.

So right now I’m slowly migrating the Medium posts to this blog, hence the back posts you might be seeing – including several about OKRs. (Actually, most of those posts are already included in Succeeding with OKRs in Agile Extra and I should add the others soon.)

I was intending to discontinue Medium completely but Russ Lewis has shown me an easy way to have my blog posts appear on Medium. So, for my next experiment I’ll try that.

Rest assured, things will get back to “normal” here on the blog. Thank you for following.


David v. Goliath, User Stories v. Use Cases

As I was saying, you forget about something, and then suddenly it’s everywhere. So it was the other day when I saw someone on LinkedIn asking:

        “Which is best, User Stories or Use Cases?”

Unlike story points, use cases are an alternative to User Stories, so this question at least makes sense. But, a bit like when your child asks “Who is the best, Superman or Winston Churchill?”, you have to say “What do you want them to do? Feature in a comic book or lead a country?”

In many ways Use Cases are better: they are better thought out, and as part of UML they have their own notation – they are almost a requirements analysis technique in their own right – and they are great for well-rounded requirements gathering.

User Stories on the other hand were invented in the heat of work and the originators never thought they would last. It was an interesting idea which just caught on. Somehow they became a standard part of agile, Scrum and even SAFe.

On that basis there is no competition, David v. Goliath, User Stories are the plucky David but Goliath holds all the cards. However…

Use Cases have an Achilles heel: all those diagrams, the notation, the specific language and the thinking behind them mean they require effort to understand. Typical Use Case training courses last two or three days. Contrast that with User Stories: my online primer next month will give you 50% of what you need to know in a couple of hours. When I’ve run User Story training in the past one day is enough.

For professionals – business analysts, product managers, etc. – that isn’t really a big deal. But it is a big deal when talking to customers, users and all those people who want the thing you are building. Techniques like Use Cases create a barrier between the experts with their specialist notation and the end-users. That is a big problem.

Like so much else it is a question of context. If you are building the control software for a nuclear power station – something which must be exact, will have very few users, changes slowly over time, and where you can’t “move fast and break things” – then go with Use Cases. But if you are building the customer self-service portal for an electricity company, go with User Stories.


Book now for the User Stories Primer – places limited.

Book early for biggest discount and use the code Blog20 for an extra 20% off


Thanks to Kishorekumar 62 for the Use Case image, CCL license via WikiMedia Commons.


User Stories are not Story Points

Funny how some things fade from view and then return. That’s how it’s been with User Stories for me in the last few weeks – hence my User Story Primer online workshop next month.

A few days ago I was talking to someone about the User Story workshop and he said “Oh, I don’t use Story Points any more.” Which had me saying “Erh, what?”

In my mind User Stories and Story Points are two completely different things used for completely different purposes. It is not even the difference between apples and oranges; it’s the difference between apples and pasta.

User Stories are a lightweight tool for capturing what are generally called requirements. Alternatives to User Stories are Use Cases, Persona Stories and things like the IEEE 1233 standard. (More about Use Cases in the next post.)

Story Points are a unit of measurement for quantifying how much work is required to do something – that something might be a User Story but it could just as easily be a Use Case or a verbal description of a need.

So it would seem that, for better or worse, User Stories and Story Points have become entangled.

The important thing about Story Points is that they are a unit of measurement. What you call that unit is up to you. I’ve heard those units called Nebulous Units of Time, Effort Points or just Points, Druples, and even Tea Bags. Sometimes they measure the effort for a story, sometimes the effort for a task or even an epic; sometimes the effort includes testing and sometimes not. Every team has its own unit, its own currency. Each team measures something slightly different. You can’t compare different teams’ units, you can only compare the results and see how much the unit buys you with that team.
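
To see why comparing points across teams is meaningless, here is a tiny sketch in Python – the teams and numbers are invented for illustration. Two teams put very different point values on the same story, yet their velocities mean it takes each the same time:

    # Illustrative only: two hypothetical teams sizing the same story.
    # Team A measures in big units, Team B in small ones.
    team_a = {"story_estimate": 8, "velocity_per_sprint": 40}
    team_b = {"story_estimate": 3, "velocity_per_sprint": 15}

    for name, team in (("A", team_a), ("B", team_b)):
        sprints = team["story_estimate"] / team["velocity_per_sprint"]
        print(f"Team {name}: {sprints:.1f} sprints for the same story")

Both print 0.2 sprints: “8 points” against “3 points” tells you nothing, only what each team’s own unit buys you.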

To my mind User Stories and Story Points are independent of one another and can be used separately. But it is also true that both have become closely associated with Scrum. Neither is officially part of Scrum but it is common to find Scrum teams capturing requirements as User Stories and estimating the effort with Story Points. It is also true that Mike Cohn has written books on both and both are contained in SAFe.

Which brings me to my next post, User Stories v. Use Cases.


(Images from WikiMedia on CCL license, Apple from Aron Ambrosiani and Pasta from Popo le Chien).


Big and small, resolving contradiction

Have I been confusing you? Have I been contradictory? Remember my blog from two weeks back – Fixing agile failure: collaboration over micro-management – where I talked about the evils of micro-management and working towards “the bigger thing”? Then, last week, I republished my classic Diseconomies of Scale where I argue for working in the small. Small or big?

Actually, my contradiction goes back further than that. It is lurking in Continuous Digital, where I discuss “higher purpose” and then argue for diseconomies of scale a few chapters later. There is a logic here, let me explain.

When it comes to work, work flow, and especially software development, there is merit in working in the small and optimising processes to do lots of small things: small stories, small tasks, small updates, small releases, and so on. Not only can this be very efficient – because of diseconomies – but it is also a good way to debug a process. In the first instance it is easier to see problems, and then it is easier to fix them.

However, if you are on the receiving end of this it can be very dispiriting. It becomes what people call “micro management” and that is what I was railing against two weeks ago. To counter this it is important to include everyone doing the work in deciding what the work is, give everyone a voice and together work to make things better.

Yet the opposite is also true: for every micro-manager out there taking far too much interest in the work there is another manager who is not interested enough in the work to consider priorities, give feedback or help remove obstacles. For these people all those small pieces of work seem like trivia, and they wonder why anyone thinks they are worth their time.

When working in the small it’s too easy to get lost in the small – think of all those backlogs stuffed with hundreds of small stories which nobody seems to be interested in. What is needed is something bigger: a goal, an objective, a mission, a BHAG, an MTP… what I like to call a Higher Purpose.

Put the three ideas together now: work in the small, higher purpose and teams.

There is a higher purpose, some kind of goal your team is working towards, perhaps there is more than one goal, they may be nested inside one another. The team move towards that goal in very small steps by operating a machine which is very effective at doing small things: do something, test, confirm, advance and repeat. These two opposites are reconciled by the team in the middle: it is the team which shares the goal, decides what to do next and moves towards it. The team has authority to pursue the goal in the best way they can.

In this model there is even space for managers: helping set the largest goals, working as the unblocker on the team, giving feedback inside the team and outside, working to improve the machine’s efficiency, and so on. Distributing authority and pushing it down to the lowest level doesn’t remove managers, although, like so much else, it does make problems with management more visible.

Working in the small is only possible if there is some larger, overarching, goal to be worked towards. So although it can seem these ideas are contradictory the two ideas are ultimately one.


Software has diseconomies of scale – not economies of scale

“Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”

John Maynard Keynes

Most of you are not only familiar with the idea of economies of scale but you expect economies of scale even if you don’t know any economics. Much of our market economy operates on the assumption that when you buy/spend more you get more per unit of spending.

At some stage in our education — even if you never studied economics or operational research — you have assimilated the idea that if Henry Ford builds 1,000,000 identical, black cars and sells 1 million cars, then each car will cost less than if Henry Ford manufactures one car, sells one car, builds another very similar car, sells that car and thus continues. The net result is that Henry Ford produces cars more cheaply and sells more cars more cheaply, so buyers benefit.

(Indeed the idea and history of mass production and economies of scale are intertwined. Today I’m not discussing mass production, I’m talking Economies of Scale.)

Software is cheaper in small cartons, and less risky too

You expect that if you go to your local supermarket to buy milk then buying one large carton of milk — say 4 pints in one go — will be cheaper than buying 4 cartons of milk each holding one pint of milk.

Back in October 2015 I put this theory to the test in my local Sainsbury’s; here is the proof:

Milk is cheaper in larger cartons:
  • 1 pint of milk costs 49p (marginal cost of one more pint 49p)
  • 2 pints of milk cost 85p, or 42.5p per pint (marginal cost of one more pint 36p)
  • 4 pints of milk cost £1, or 25p per pint (marginal cost of one more pint 7.5p) (January 2024: the same quantity of milk in the same store now sells for £1.50)

(The UK is a proudly bi-measurement country. Countries like Canada and Switzerland teach their people to speak two languages. In the UK we teach our people to use two systems of measurement!)

So ingrained is this idea that when supermarkets don’t charge less for buying more, complaints are made (see The Guardian).

Buying milk from Sainsbury’s isn’t just about the milk: Sainsbury’s needs the store, the store needs staffing, it needs products to sell, and they need to get me into the store. All that costs the same whether I buy one pint or four. That’s why the marginal costs fall.

Economies of scale are often cited as the reason for corporate mergers: to extract concessions from suppliers, to manufacture more items for lower overall costs. Purchasing departments expect economies of scale.

But…. and this is a big BUT…. get ready….

Software development does not have economies of scale.

In all sorts of ways software development has diseconomies of scale.

If software was sold by the pint then a four pint carton of software would not just cost four times the price of a one pint carton it would cost far far more.

The diseconomies are all around us:

Small teams frequently outperform large teams, five people working as a tight team will be far more productive per person than a team of 50, or even 15. (The Quattro Pro development team in the early 1990s is probably the best documented example of this.)

The more lines of code a piece of software has, the more difficult it is to add an enhancement or fix a bug. Putting a fix into a system with 1 million lines of code can easily be more than 10 times harder than fixing a system with 100,000 lines.

Projects which set out to be BIG have far higher costs and lower productivity (per unit of deliverable) than small systems. (Capers Jones’ 2008 book contains some tables of productivity per function point which illustrate this. It is worth noting that the biggest systems are usually military and they have an atrocious productivity rate — an F-35 or A400M anyone?)

Waiting longer — and probably writing more code — before you ask for feedback or user validation causes more problems than asking for it sooner when the product is smaller.

The examples could go on.

But the other thing is: working in the large increases risk.

Suppose 100ml of milk is off. If the 100ml is in one small carton then you have lost 1 pint of milk. If the 100ml is in a 4 pint carton you have lost 4 pints.

Suppose your developers write one bug a year which will slip through test and crash users’ machines. Suppose you know this, so in an effort to catch the bug you do more testing. In order to keep costs low on testing you need to test more software, so you do a bigger release with more changes — economies of scale thinking. That actually makes the testing harder but…

Suppose you do one release a year. That release blue screens the machine. The user now sees every release you do crashes their machine. 100% of your releases screw up.

If instead you release weekly, one release a year still crashes the machine but the user sees 51 releases a year which don’t. Less than 2% of your releases screw up.
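
For anyone who likes to see the arithmetic spelled out, here is a minimal sketch in Python – purely illustrative, with the one-bad-release-a-year assumption taken from the example above:

    # Illustrative only: one bad release a year, seen at different cadences.
    for releases_per_year in (1, 4, 12, 52):
        bad_releases = 1  # assumption: one bug a year slips through testing
        failure_rate = bad_releases / releases_per_year
        print(f"{releases_per_year:>2} releases/year -> "
              f"{failure_rate:.1%} of releases seen to fail")

Running it shows the same single bad release looks like a 100% failure rate at one release a year, 25% quarterly, 8.3% monthly and only 1.9% at weekly releases.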

Yes I’m talking about batch size. Software development works best in small batch sizes. (Don Reinertsen has some figures on batch size in The Principles of Product Development Flow which also support the diseconomies of scale argument.)

Ok, there are a few places where software development does exhibit economies of scale but on most occasions diseconomies of scale are the norm.

This happens because each time you add to the work, the marginal cost per unit increases:

Add a fourth team member to a team of three and the communication paths increase from 3 to 6.

Add one feature to a release and you have one feature to test, add two features and you have 3 tests to run: two features to test plus the interaction between the two.
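
Those two examples share the same combinatorics: pairs grow as n*(n-1)/2. Here is a short, purely illustrative Python sketch – the numbers, not the code, are the point:

    # Communication paths and pairwise feature interactions both grow
    # as n*(n-1)/2, so each addition costs more than the one before.
    def pairs(n: int) -> int:
        """Number of distinct pairs among n items."""
        return n * (n - 1) // 2

    for team in (3, 4, 5, 10):
        print(f"team of {team:>2}: {pairs(team)} communication paths")

    for features in (1, 2, 3, 5):
        tests = features + pairs(features)  # feature tests + interaction tests
        print(f"{features} features: {tests} tests to run")

A team of 10 has 45 communication paths; 5 features need 15 tests. The growth is quadratic, not linear – the diseconomy in a nutshell.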

In part this is because human minds can only hold so much complexity. As the complexity increases (more changes, more code) our cognitive load increases, we slow down, we make mistakes, we take longer.

(Economies of scope and specialisation are also closely related to economies of scale and again, on the whole, software development has diseconomies of scope: it pays to be more specific.)

However be careful: once the software is developed then economies of scale are rampant. The world switches. Software which has been built probably exhibits more economies of scale than any other product known to man. (In economic terms the marginal cost of producing the first instance is extremely high but the marginal cost of producing an identical copy (production) is so close to zero as to be zero: Ctrl-C, Ctrl-V.)

What does this all mean?

Firstly you need to rewire your brain, almost everyone in the advanced world has been brought up with economies of scale since school. You need to start thinking diseconomies of scale.

Second, whenever faced with a problem where you feel the urge to go bigger, run in the opposite direction: go smaller.

Third, take each and every opportunity to go small.

Fourth, get good at working in the small: optimise your processes, tools and approaches to do lots of small things rather than a few big things.

Fifth, and this is the killer: know that most people don’t get this at all. In fact it’s worse…

In any existing organization, particularly a large corporation, the majority of people who make decisions are out-and-out economies of scale believers. They expect that going big is cheaper than going small and they force this view on others — especially software technology people. (Hence large companies trying to be Agile remind me of middle-aged men buying sports cars.)

Many of these people got to where they are today because of economies of scale, many of these companies exist because of economies of scale; if they are good at economies of scale they are good at doing what they do.

But in the world of software development this mindset is a recipe for failure and under performance. The conflict between economies of scale thinking and diseconomies of scale working will create tension and conflict.


Originally posted in October 2015 and can also be found in Continuous Digital.

To be the first to know of updates and special offers subscribe — and get Continuous Digital for free.


Why I don’t like pre-work (planning, designing, budgeting)

You might have noticed in my writing that I have a tendency to rubbish the “Before you do Z you must do Y” type argument. Pre-work. Work you should do before you do the actual work. Planning, designing, budgeting, that sort of thing.

Why am I such a naysayer?

Partly this comes from a feeling that given any challenge it is always possible to say “You should have done something before now” – “You missed a step” – “If you had done what you were supposed to do you wouldn’t have this problem.” Most problems would be solved already, or would never have occurred, if someone had done the necessary pre-work.

There is always something you should have done sooner but, without a time machine, that isn’t very useful advice. Follow this line of reasoning and before you know it there is a great big process of steps to be done. Most people don’t have the discipline, or training, to follow such processes and mistakes get made. The bigger the process you have the more likely it is to go wrong.

However, quite often, the thing you should have done can still be done. Maybe you didn’t take time to ask customers what they actually wanted before you started building but you could still go and ask. Sure it might mean you have to undo something (worst case: throw it away) but at least you stop making the wrong thing bigger. Doing things out of order may well make for more work, and more cost, but it is still better than not doing it at all.

Some of my dislike simply comes from my preference. Like so many other people, I like to get on and do something: why sit around talking about something when we should be doing! I’m not alone in that. While I might be wrong to rush to action, it is also wrong to spend so long talking that you never act – “paralysis by analysis.” Add to that, when someone is motivated to do something it’s good to get on and do it, build on the motivation. Saying “Hold on, before you …” may mean the moment is missed, the enthusiasm and motivation lost.

So, although there is a risk in charging in there is also a risk in not acting.

Of all the things that you might do to make work easier once you start “properly”, some will be essential and some will not. Some pre-work just seems like a good idea. One way to determine what is essential is to get on with the work and do the essential things when you get to them. Just-in-time.

For example, before you begin a piece of work, it is a really good idea to talk about the acceptance criteria – “what does success look like?” If you pick up a piece of work and find that there are no acceptance criteria you could say “Sorry, I can’t do this, someone needs to set criteria and then I’ll do it” or you could go and find the right person and have the conversation there and then. When some essential pre-work is missing it becomes job number 1 to do when you do do the work.

Finally, another reason I dislike pre-work is the way it interacts with money.

There are those who consider pre-work unnecessary and will not allocate money to do it (“Software design costs time and money, just code.”) If, instead of seeing pre-work as distinct from the work, you treat it as all part of the same thing, then rather than allocate a few hours for design weeks before you code you simply do the design in the first few hours of the work. By making the pre-work a just-in-time activity you remove the possibility that the work is cancelled or that it changes.

My other gripe with money is the way, particularly in a project setting, pre-work is accounted for differently. You see this in project organizations where nobody is allowed to do anything practical until the budget (and a budget code) is created for the work. But the work that happens before then seems to be done for free: there is an unlimited budget for planning work which might be done.

Again, rather than see the pre-work – planning, budgeting, designing, etc. – as something distinct that happens before the work itself just make it part of the work, and preferably do it first.

Ultimately, I’m not out to bad-mouth pre-work; I can see that it is valuable and I can see that done in advance it can add more value. It’s just that you can’t guarantee it is done; if we build a system that doesn’t depend on pre-work being done first, then the system is more robust.

