
OKRs like it's 2024, not 1974

It's not 1970 any more; even 1974 was 50 years ago. I used to make this point regularly when discussing the project model and #NoProjects. Now, though, I want to make the same point about OKRs.

Fashion in the 1970s

The way some people talk about OKRs, you might think it was 1974: OKRs are a tool of command and control, they are given to workers by managers, workers have little or no say in what the objectives are, the key results are little more than a to-do list, and there is an unwritten assumption that if key results 1, 2 and 3 are done then the objective will be miraculously achieved. In fact, those objectives are themselves often just “things someone thinks should be done” and shouldn’t be questioned.

I’d like to say this view is confined to older OKR books. While it is less common in more recent books, many individuals still carry these assumptions.

A number of things have changed since OKRs were first created. The digital revolution might be the most obvious, but digital is only indirectly implicated: it lies behind two other forces here.

First, we’ve had the agile revolution: not only does agile advocate self-organising teams, but workers, especially professional knowledge workers, have come to expect autonomy and authority over the work they do. This is not confined to agile; it is also true of the Millennial and Generation Z workers who have recently entered the workforce.

Digital change is at work here: digital tools underpin agile, and Millennials and Gen-Z have grown up with digital tools. Digital tools magnify the power of workers while making it essential that workers have the authority to use those tools effectively and make decisions.

Having managers give OKRs to workers, without letting the workers have a voice in setting the OKRs, runs completely against both agile and generational approaches.

Second, in a world where climate change and war threaten our very existence, where supposedly safe banks like Silicon Valley Bank and Lehman Brothers have failed, and where companies like Thames Water have become a byword for greed over society, many are demanding more meaning and purpose in their work—especially those Millennials.

Simply “doing stuff” at work is not enough. People want to make a difference. Which is why outcomes matter more than ever. Not every OKR is going to result in reduced CO2 emissions but having outcomes which make the world a better place gives meaning to work. Having outcomes which build towards a clear meaningful purpose has always been important to people but now it is more important than ever.

Add to that the increased volatility, uncertainty and complexity of our world, and the ambiguous nature of many events, and it is no longer good enough to tell people what to do. Work needs to have meaning, both so people can commit to it and so they can decide what the right thing to do is.

In 2024 the world is digital and the world is VUCA; workers demand respect, meaning, and to be treated like partners, not gofers.

OKRs are a powerful management tool but they need to be applied like it is 2024 not 1974.


User Stories are not Story Points

Funny how some things fade from view and then return. That's how it's been with User Stories for me in the last few weeks – hence my User Story Primer online workshop next month.

A few days ago I was talking to someone about the User Story workshop and he said “Oh, I don’t use Story Points any more.” Which had me saying “Er, what?”

In my mind User Stories and Story Points are two completely different things used for completely different purposes. It is not even the difference between apples and oranges; it's the difference between apples and pasta.

User Stories are a lightweight tool for capturing what are generally called requirements. Alternatives to User Stories are Use Cases, Persona Stories and things like the IEEE 1233 standard. (More about Use Cases in the next post.)

Story Points are a unit of measurement for quantifying how much work is required to do something – that something might be a User Story but it could just as easily be a Use Case or a verbal description of a need.

So it would seem that, for better or worse, User Stories and Story Points have become entangled.

The important thing about Story Points is that they are a unit of measurement. What you call that unit is up to you. I’ve heard those units called Nebulous Units of Time, Effort Points or just Points, Druples, and even Tea Bags. Sometimes they measure the effort for a story, sometimes the effort for a task or even an epic; sometimes the effort includes testing and sometimes not. Every team has its own unit, its own currency. Each team measures something slightly different. You can’t compare different teams’ units; you can only compare the results and see how much the unit buys you with each team.

To my mind User Stories and Story Points are independent of one another and can be used separately. But it is also true that both have become closely associated with Scrum. Neither is officially part of Scrum, but it is common to find Scrum teams capturing requirements as User Stories and estimating the effort with Story Points. It is also true that Mike Cohn has written books on both, and both are contained in SAFe.

Which brings me to my next post, User Stories v. Use Cases.


(Images from WikiMedia on CCL license, Apple from Aron Ambrosiani and Pasta from Popo le Chien).


Big and small, resolving contradiction

Have I been confusing you? Have I been contradictory? Remember my blog from two weeks back – Fixing agile failure: collaboration over micro-management – where I talked about the evils of micro-management and working towards “the bigger thing.” Then, last week, I republished my classic Diseconomies of Scale where I argue for working in the small. Small or big?

Actually, my contradiction goes back further than that. It is lurking in Continuous Digital, where I discuss “higher purpose” and also argue for diseconomies of scale a few chapters later. There is a logic here; let me explain.

When it comes to work, workflow, and especially software development there is merit in working in the small and optimising processes to do lots of small things: small stories, small tasks, small updates, small releases, and so on. Not only can this be very efficient – because of diseconomies – but it is also a good way to debug a process. In the first instance it is easier to see problems, and then it is easier to fix them.

However, if you are on the receiving end of this it can be very dispiriting. It becomes what people call “micro management” and that is what I was railing against two weeks ago. To counter this it is important to include everyone doing the work in deciding what the work is, give everyone a voice and together work to make things better.

Yet the opposite is also true: for every micro-manager out there taking far too much interest in work there is another manager who is not interested enough in the work to consider priorities, give feedback or help remove obstacles. For these people all those small pieces of work seem like trivia, and they wonder why anyone thinks they are worth their time.

When working in the small it's too easy to get lost in the small – think of all those backlogs stuffed with hundreds of small stories which nobody seems to be interested in. What is needed is something bigger: a goal, an objective, a mission, a BHAG, an MTP… what I like to call a Higher Purpose.

Put the three ideas together now: work in the small, higher purpose and teams.

There is a higher purpose, some kind of goal your team is working towards; perhaps there is more than one goal, and they may be nested inside one another. The team moves towards that goal in very small steps by operating a machine which is very effective at doing small things: do something, test, confirm, advance and repeat. These two opposites are reconciled by the team in the middle: it is the team which shares the goal, decides what to do next and moves towards it. The team has authority to pursue the goal in the best way they can.

In this model there is even space for managers: helping set the largest goals, working as the unblocker on the team, giving feedback in the team and outside, working to improve the machine’s efficiency, etc. Distributing authority and pushing it down to the lowest level doesn’t remove managers; like so much else, it makes problems with management more visible.

Working in the small is only possible if there is some larger, overarching, goal to be worked towards. So although it can seem these ideas are contradictory the two ideas are ultimately one.


Software has diseconomies of scale – not economies of scale

“Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”

John Maynard Keynes

Most of you are not only familiar with the idea of economies of scale but you expect economies of scale even if you don’t know any economics. Much of our market economy operates on the assumption that when you buy/spend more you get more per unit of spending.

At some stage in our education — even if you never studied economics or operational research — you have assimilated the idea that if Henry Ford builds and sells 1,000,000 identical black cars, then each car will cost less than if Henry Ford manufactures one car, sells one car, builds another very similar car, sells that car and so continues. The net result is that Henry Ford produces cars more cheaply and sells more cars more cheaply, so buyers benefit.

(Indeed the idea and history of mass production and economies of scale are intertwined. Today I’m not discussing mass production, I’m talking Economies of Scale.)

Software Milk
Software is cheaper in small cartons, and less risky too

You expect that if you go to your local supermarket to buy milk then buying one large carton of milk — say 4 pints in one go — will be cheaper than buying 4 cartons of milk each holding one pint of milk.

Back in October 2015 I put this theory to a test in my local Sainsbury’s, here is the proof:

Collage of milk prices
Milk is cheaper in larger cartons
  • 1 pint of milk costs 49p (marginal cost of one more pint 49p)
  • 2 pints of milk cost 85p, or 42.5p per pint (marginal cost of one more pint 36p)
  • 4 pints of milk cost £1, or 25p per pint (marginal cost of one more pint 7.5p) (January 2024: the same quantity of milk in the same store now sells for £1.50)
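For anyone who wants to check the arithmetic, here is a tiny sketch (my own illustration, using the 2015 prices quoted above) that computes the per-pint and marginal prices:

    # Price per pint and marginal price per extra pint, from the 2015 figures above.
    prices = {1: 49, 2: 85, 4: 100}   # pints -> pence
    prev_qty, prev_price = 0, 0
    for qty, price in sorted(prices.items()):
        per_pint = price / qty
        marginal = (price - prev_price) / (qty - prev_qty)
        print(f"{qty} pint(s): {per_pint:.1f}p per pint, marginal {marginal:.1f}p per extra pint")
        prev_qty, prev_price = qty, price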

(The UK is a proudly bi-measurement country. Countries like Canada and Switzerland teach their people to speak two languages. In the UK we teach our people to use two systems of measurement!)

So ingrained is this idea that when supermarkets don’t charge less for buying more, complaints are made (see The Guardian).

Buying milk from Sainsbury’s isn’t just about the milk: Sainsbury’s needs the store, the store needs staffing, it needs products to sell, and they need to get me into the store. All that costs the same whether I buy one pint or four. That's why the marginal costs fall.

Economies of scale are often cited as the reason for corporate mergers: to extract concessions from suppliers, to manufacture more items for lower overall costs. Purchasing departments expect economies of scale.

But…. and this is a big BUT…. get ready….

Software development does not have economies of scale.

In all sorts of ways software development has diseconomies of scale.

If software were sold by the pint then a four-pint carton of software would not just cost four times the price of a one-pint carton; it would cost far, far more.

The diseconomies are all around us:

Small teams frequently outperform large teams: five people working as a tight team will be far more productive per person than a team of 50, or even 15. (The Quattro Pro development team in the early 1990s is probably the best-documented example of this.)

The more lines of code a piece of software has, the more difficult it is to add an enhancement or fix a bug. Putting a fix into a system with 1 million lines of code can easily be more than 10 times harder than fixing a system with 100,000 lines.

Projects which set out to be BIG have far higher costs and lower productivity (per unit of deliverable) than small systems. (Capers Jones’ 2008 book contains some tables of productivity per function point which illustrate this. It is worth noting that the biggest systems are usually military and they have an atrocious productivity rate — an F35 or A400 anyone?)

Waiting longer — and probably writing more code — before you ask for feedback or user validation causes more problems than asking for it sooner when the product is smaller.

The examples could go on.

But the other thing is: working in the large increases risk.

Suppose 100ml of milk is off. If the 100ml is in one small carton then you have lost 1 pint of milk. If the 100ml is in a 4 pint carton you have lost 4 pints.

Suppose your developers write one bug a year which will slip through test and crash users’ machines. Suppose you know this, so in an effort to catch the bug you do more testing. To keep testing costs down you test more software in one go, so you do a bigger release with more changes — economies of scale thinking. That actually makes the testing harder but…

Suppose you do one release a year. That release blue-screens the machine. The user now sees that every release you do crashes their machine. 100% of your releases screw up.

If instead you release weekly, one release a year still crashes the machine but the user sees 51 releases a year which don’t. Less than 2% of your releases screw up.
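To make that arithmetic explicit, here is a minimal sketch (my own illustration, assuming one bad release per year as above):

    # Fraction of releases that crash the user's machine, for different cadences,
    # assuming the team ships exactly one bad release per year.
    bad_releases_per_year = 1
    for releases_per_year in (1, 12, 52):
        failure_rate = bad_releases_per_year / releases_per_year
        print(f"{releases_per_year} releases/year: {failure_rate:.1%} of releases fail")

One release a year means 100% of releases fail in the user's eyes; weekly releases mean under 2%.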

Yes I’m talking about batch size. Software development works best in small batch sizes. (Don Reinertsen has some figures on batch size in The Principles of Product Development Flow which also support the diseconomies of scale argument.)

Ok, there are a few places where software development does exhibit economies of scale but on most occasions diseconomies of scale are the norm.

This happens because each time you add more to the work, the marginal cost per unit increases:

Add a fourth team member to a team of three and the communication paths increase from 3 to 6.

Add one feature to a release and you have one feature to test, add two features and you have 3 tests to run: two features to test plus the interaction between the two.
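These two examples are the same underlying arithmetic: pairwise combinations grow much faster than the number of items. A minimal sketch (my own illustration):

    # Pairwise growth: n items have n*(n-1)/2 pairs.
    def communication_paths(people: int) -> int:
        return people * (people - 1) // 2          # one path per pair of people

    def tests_needed(features: int) -> int:
        # each feature on its own, plus each pairwise interaction
        return features + features * (features - 1) // 2

    print(communication_paths(3), communication_paths(4))   # 3 and 6 paths
    print(tests_needed(1), tests_needed(2))                  # 1 and 3 tests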

In part this is because human minds can only hold so much complexity. As the complexity increases (more changes, more code) our cognitive load increases, we slow down, we make mistakes, we take longer.

(Economies of scope and specialisation are also closely related to economies of scale and again, on the whole, software development has diseconomies of scope.)

However, be careful: once the software is developed, economies of scale are rampant. The world switches. Software which has been built probably exhibits more economies of scale than any other product known to man. (In economic terms the marginal cost of producing the first instance is extremely high, but the marginal cost of producing an identical copy (production) is so close to zero as to be zero: Ctrl-C, Ctrl-V.)

What does this all mean?

First, you need to rewire your brain: almost everyone in the advanced world has been brought up with economies of scale since school. You need to start thinking diseconomies of scale.

Second, whenever you face a problem and feel the urge to go bigger, run in the opposite direction: go smaller.

Third, take each and every opportunity to go small.

Fourth, get good at working in the small: optimise your processes, tools and approaches to do lots of small things rather than a few big things.

Fifth, and this is the killer: know that most people don’t get this at all. In fact it’s worse…

In any existing organization, particularly a large corporation, the majority of people who make decisions are out-and-out economies-of-scale believers. They expect that going big is cheaper than going small and they force this view on others — especially software technology people. (Hence large companies trying to be agile remind me of middle-aged men buying sports cars.)

Many of these people got to where they are today because of economies of scale, and many of these companies exist because of economies of scale; if they are good at economies of scale they are good at doing what they do.

But in the world of software development this mindset is a recipe for failure and under-performance. The conflict between economies-of-scale thinking and diseconomies-of-scale working will create tension and conflict.


Originally posted in October 2015 and can also be found in Continuous Digital.

To be the first to know of updates and special offers subscribe — and get Continuous Digital for free.


Fixing agile failure: collaboration over micro-management

I’ve said it before, and I’m sure I’ll say it again: “the agile toolset can be used for good or evil”. Tools such as visual work tracking, work breakdown cards and stand-ups are great for helping teams take more control over their own work (self-organization). But in the hands of someone who doesn’t respect the team, or has micro-management tendencies, those same tools can be weaponised against the team.

Put it this way: what evil pointy-headed boss wouldn’t want the whole team standing up at 9am explaining why they should still be employed?

In fact, I’m starting to suspect that the toolset is being used more often as a team disabler than a team enabler. Why do I suspect this?

Reason 1: the increasing number of voices I hear criticising agile working. Look more closely and you find people don’t like being asked to do micro-tasks, or being asked to detail their work at a really fine-grained level, then having it pinned up on a visual board where their work, or lack of it, is public.

Reason 2: someone I know well is pulling their hair out because at their office, far away from software development, one of the managers writes new task cards and inserts them on to the tracking board for others to do, hourly. Those on the receiving end know nothing about these cards until they appear with their name on them.

I think this is another case of “we shape our tools and then our tools shape us.” Many of the electronic work management tools originally built for agile are being marketed and deployed more widely now. The managers buying these tools don’t appreciate the philosophy behind agile and see them simply as work assignment and tracking mechanisms. Not only do such people not understand how agile intended these tools to be used, they either don’t know the word agile at all or have only a superficial understanding of it.

When work happens like this I’m not surprised that workers are upset and demoralised. It isn’t meant to be this way. If I were told this was the way we should work, and then told it was called “agile”, I would hate agile too.

So what's missing? How do we fix this?

First, simply looking at small tasks is wrong: there needs to be a sense of the bigger thing. Understand the overall objective and you might come up with a different set of tasks.

Traditionally in agile we want lots of small work items because a) a detailed breakdown shows we understand what needs to be done, b) creating the breakdown with others harnesses many people’s thinking while building shared understanding, and c) we can see work flowing through the system and, when it gets stuck, collectively help.

So having lots of small work items is a good thing, except when the bigger thing they are building towards is missing, and…

Second, it is essential that team members are involved in creating the work items. Having one superior brain create all the small work items for others to do (and then assign them out) might be efficient in terms of creating the work items, but it undermines collaboration, demotivates workers and, worst of all, misses the opportunity to bring many minds to bear on the problem and solution.

The third thing which cuts through both of these is simple collaboration. When workers are given small work items, and not given a say in what those items are, then collaboration is undermined. When all workers are involved in designing the work, and understanding the bigger goals, then everyone is enrolled and collaboration is powerful.

Fixing this is relatively easy but it means making time to do it: get everyone together, talk about the goals for the next period (day, week, sprint, whatever) and collectively decide what needs doing and share these work items. Call it a planning meeting.

The problem is that such a meeting takes time, and it might also require you to physically get people together. The payback is that your workers will be more motivated, they will understand the work better and be ready to work, and they will be primed to collaborate and ready to help unblock one another. It is another case of taking time upfront to make later work better.


Nuke your backlog?

When I deliver “Honey, I shrunk the backlog” and when I tell people “Nuke the backlog” there are a few questions and talking points which come up again and again. So, if you read my last post and have been asking yourself how you live with a backlog you want to nuke… read on…

Lead with goals

“My boss won’t stand for me deleting the backlog.” I empathise. I know it happens.

At the same time I wonder: does your boss really care? I’m sure some of them do; I see many Product Owners who are really “Backlog administrators.” Their boss is certainly leaning over their shoulder checking on what is being done. If this is your case then your boss is the real Product Owner; sorry to say this, but you are really a gofer.

In this case you want to educate your boss; you want to start having discussions about a better way. This is going to be a long and hard path. There is no sure-fire advice I can give you here, except to suggest you give me a call.

Assuming your boss gives you some leeway, go and start a conversation about your bigger goals. Beyond “delivering backlog items”, what are your goals? More specifically, what are you driving at for the next 10 weeks?

Start to have a conversation about goals bigger than backlog items. Build a routine, a super cycle, around your sprints with the boss and team to discuss bigger goals. Then, during the cycle, drive with the goals. If there is a suitable backlog item that contributes towards the goal(s) then do it. If not, write a new item and do it immediately.

Either way, whether you are doing this to work around a boss or as a mechanism to transition to a backlog-less world, the same idea applies: create a super cycle, set goals every 10 weeks (approximately) and then drive through the goals rather than the backlog.

Drive with the goals and put the backlog in the back seat; it is secondary. The aim is to avoid having to nuke the backlog by letting it fade into irrelevance.

Write a “use by” expiry date on new backlog items

For any item you do add to the backlog make sure it has an expiry date. That is: a date after which it will be removed from the backlog. Giving every backlog item a life expectancy won’t help you today, but it does mean that in the months ahead some items will “self delete” from the backlog.

After a while you might want to revisit older items in the backlog and (if you can’t delete them immediately) assign them a “use by” date.

I am sure a few people will say “Oh, this need will live forever.” In which case you can put a long date on it, say 10 years. But that also tells you there is no urgency: you can do this item any time in the next 10 years and add value, so it can wait.

Better still, write a “best before” and a “use by” date on the item.

The “best before” date tells you the date by which an item should be done to maximise the benefit. After that date the benefit declines; it might still be worth doing but it is not as beneficial. The “use by” date tells you the date after which it has no benefit at all.

Now when you are reviewing the backlog you can see which items can be pushed back and which need doing sooner. This is a bit of a double-edged sword for the requester: if they say “If I don’t have it in 2 months it will start losing value” then it is more urgent, but equally, if it doesn’t make the cut soon then it can be removed soon.
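If it helps to see it concretely, here is a minimal sketch of how backlog items with “best before” and “use by” dates might be represented and reviewed (the names and structure are my own, purely illustrative):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class BacklogItem:
        title: str
        best_before: date   # do it by this date to get the full benefit
        use_by: date        # after this date it has no benefit: delete it

    def review(backlog: list[BacklogItem], today: date) -> list[BacklogItem]:
        # Drop anything past its use-by date and flag items losing value.
        keep = [item for item in backlog if today <= item.use_by]
        for item in keep:
            if today > item.best_before:
                print(f"'{item.title}' is past its best: benefit is declining")
        return keep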

Keep an ideas list

When you have a great idea, or when someone suggests something you could do, add it to an ideas list – not the backlog. In fact, don’t show the list to people and don’t promise anything to anyone. This is your list, for things you don’t trust to your memory.

Some people operate their backlog like this already but many people assume that if an item is in the backlog it will be done one day. Others measure the backlog and forecast dates. But ideas may never be done so they complicate this thinking.

Personally I like to think that great ideas will either get remembered or rediscovered. That said, I also write down lots of ideas. However, I frequently throw my ideas lists away. So if at 11pm I think of something I’ll write it down; three weeks later, if it is just moving from one list to another, I’ll cross it off. I rewrite my “todo” and “priorities” lists regularly.

If it helps then just keep ideas somewhere else. One team I worked with created a “Sprint 99” – a sprint so far off in the future it was never going to be done. They parked all the good ideas there so they had them for reference if they needed them, but there was no suggestion they would ever be done.

You will also want to think carefully about what you tell people when they say “I’ve got a great idea, can you add it to the backlog?” You want to be honest, but you don’t want to create a long conversation. So you might want to say something like “Great, thanks, I’ll add it to our ideas list; if it becomes a blocker please come back to me and we can talk some more.” In other words, put the ball back in their court to show the value of the idea.

I’ll admit I’m nervous about this suggestion. Part of me thinks it will inevitably come to be seen as a backlog. I think important things will get remembered, and things which aren’t important can just as easily get lost when put among 1,000 other things. Still, I know some people will take comfort in this idea, so give it a try and let me know.

This piece was first published on Medium as Nuke the backlog.


Pull, don’t push: Why you should let your teams set their own OKRs

There is a divide in the way Objectives and Key Results (OKRs) are practiced. A big divide: a divide between the way some of the original authors describe OKRs and the way successful agile teams implement them. If you haven’t spotted it yet it might explain some of your problems; if you have spotted it you might be feeling guilty.

The first school of thought believes OKRs should be set by a central figure. Be it the CEO, division leadership or central planning department, the OKRs are set and then cascaded, waterfall style, out to departments and teams.

Some go as far as to say “the key results of one level are the objectives of the lower levels.” So a team receiving an OKR from on high peels off the key results and promotes each to Objective status. Next they add some new key results to each objective and hand the newly formed OKRs to subordinate teams. The game of pass the parcel stops when OKRs reach the lowest tier and there is no one left to pass them to.
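Mechanically, the cascade works something like this minimal sketch (my own illustration; the class and function names are hypothetical, not from any OKR tool):

    from dataclasses import dataclass, field

    @dataclass
    class OKR:
        objective: str
        key_results: list[str] = field(default_factory=list)

    def cascade(parent: OKR) -> list[OKR]:
        # Each key result of the parent becomes the objective of a child OKR;
        # the receiving team is then expected to add its own key results.
        return [OKR(objective=kr) for kr in parent.key_results]

    company = OKR("Grow revenue 20%", ["Launch in France", "Cut churn to 5%"])
    team_okrs = cascade(company)   # one new OKR handed to each subordinate team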

The second school of thought, the one this author aligns with, notes that cascading OKRs in this fashion goes against the agile principle: “The best architectures, requirements, and designs emerge from self-organizing teams.” In fact, this approach might also reduce motivation and entrench the “business v. engineer” divide.

Even more worryingly, cascading OKRs down could reduce business agility and forgo the ability to use feedback as a source of competitive advantage.

Cascading OKRs

Cascading OKRs are handed down from above

We can imagine an organization as a network with nodes and connecting edges. In the cascading model information is passed from the edge nodes to the centre. The centre may also be privy to privileged information not known to the edge teams. Once the information has been collected the centre can communicate OKRs back out to the nodes.

One of the arguments given for this approach is that central planning allows co-ordination and alignment because the centre is privy to the maximum amount of information.

A company using this model is making a number of implicit assumptions and policies:

  1. Staff at the centre have the skills both to collect and to assimilate information.
  2. That information is received, decisions are made and plans are issued back in a timely fashion; the cost of delay is negligible.

However, in a more volatile environment each of these assumptions fails. Rapidly changing information may only be known at the node, simply because the time it takes to codify the information — write it down or give a presentation — may mean the information is out of date before it is communicated. In fact the nodes may not even know they know something that should be communicated. Much knowledge is tacit knowledge and is difficult to capture, codify and communicate. Consequently it is excluded from formal decision-making processes.

The loss of local knowledge represents a loss of business agility as it restricts teams’ ability to act on changing circumstances. Inevitably there will be delays both in gathering information and in issuing OKRs. As an organization scales these delays only grow, as more information must be gathered and interpreted and more decisions transmitted out. Connecting the dots becomes more difficult when there are more dots, and far more connections between them, to connect.

This approach devalues local knowledge, including knowledge of capacity and ambition. Teams which have no say in their own OKRs lack the ability to say “Too much”; their goals are set based upon what other people think — or want to think — they are capable of.

Similarly, the idea of ambition, present in much OKR thinking, moves from being “I want to strive for something difficult” to “I want you to try doing this difficult thing.” Let me suggest that people are more motivated by difficult goals they have set themselves than by difficult goals which are given to them.

Finally, the teams receiving the centrally planned OKRs are likely to experience some degree of disempowerment. Rather than being included and trusted in the decision-making process, team members are reduced to mere executors. Team members may experience goal displacement and satisficing. Hence, this is unlikely to lead either to high-performing teams or to conscientious, responsible employees.

Any failure in this model can be attributed to the planners, who failed to anticipate the response of employees, customers or competitors. Of course this means the planners need more information, but then any self-respecting planner will have factored their own lack of information into the plan.

Distributed OKRs

Distributed OKR setting

In the alternative model, distributed OKRs, teams set their own OKRs and feed these into any central node and to leaders. This allows teams to factor in local knowledge, explicit and tacit, to set OKRs in a timely fashion and to determine their own capacity and ambitions.

One example of using local knowledge is teams managing their own workload, for example balancing business-as-usual (or DevOps) work with new product development. As technology has become more pervasive, fewer teams are able to focus purely on new product development and leave others to maintain existing systems.

Now those who advocate cascading OKRs will say: “How can teams be co-ordinated and aligned if they do not have a common planning node?” But having a common planner is not the only way of achieving alignment.

In this model teams have a duty to co-ordinate with both the teams they supply and the teams which supply them. For example, a team building a digital dashboard would need to work with teams responsible for incoming data feeds and those administering the display systems. Consequently, teams do not need information from every node in the organization — as a central planning group would — but only from those nodes which they expect to interact with.

This responsibility extends further, beyond peer teams. Teams need to ensure that their OKRs align with other stakeholders in the organization, specifically senior managers. In the same way that teams will show draft OKRs to peer teams, they should show managers what they plan to work on, and they should be open to feedback. That does not mean a manager can dictate an OKR to a team, but it does mean they can ask, “You are prioritising the French market in these OKRs, but our company strategy is to prioritise Australia. Is there a reason?”

A common planner is but one means of co-ordination; there are other mechanisms. Allowing teams the freedom to set OKRs means trusting them to gather and interpret all relevant information. When teams create OKRs which do not align, it is an opportunity, not a failure.

When two teams have OKRs which contradict, or when team OKRs do not align with executive expectations, there is a conversation to be had. Did one side know something the other did not? Was a communication misinterpreted? Did communication simply fail?

Viewed like this, OKRs are a strategy debugger. Alignment is not mandated but rather emerges over time. In effect alignment is achieved through continual improvement.

These factors — local knowledge and decision making, direct interaction with a limited number of other nodes and continual improvement — are the basis for local agility.

Pull don’t push

Those of you versed in the benefits of pull systems over push systems might like to see this argument in pull-push terms. In the top-down approach each manager, or node, pushes OKRs to the nodes below them. As with push manufacturing, the receivers have little say in what comes their way; they do their bit and push to the next lucky recipient in the chain.

In the distributed model teams pull their OKRs from their stakeholders. Teams ask stakeholders what they want from the team and agree only as many OKRs as they can do in the coming cycle.

This may well mean that some stakeholders don’t get what they wanted. Teams only have so much capacity, and the more OKRs they accept the fewer they will achieve. Saying No is a strategic necessity; it is also an opportunity to explore different options.


It’s the workflow, stupid

Sausage making illustrates workflow brilliantly! – For years I used this picture of sausage makers to describe the way teams work: meat goes in, sausages come out.

If you put pork in you get pork sausages out

If you put chicken in you get chicken sausages out

If you put beef in you get… in the aftermath of the 2013 horse meat scandal I used to joke “You put horse meat in, you get beef sausages out.”

What comes out bears a strong relationship to what goes in.

If you put project A meat in you get project A sausages out

If you put project B meat in you get project B sausages out

Sure, it works best if you have a dedicated team and you only put project A requests in. When A is finished the team switches and focuses exclusively on project B. But you know what? It still kind of works if you mix as you go along.

When a team works on multiple projects in parallel it is not so productive – reduced focus costs, and switching between things costs too. It will be a damn sight harder to make forecasts about what will be done when; answering the “when will it be done” question will be tougher. But it still works, and you can still make forecasts; they will just be even less reliable.

By extension, if you put business as usual meat in you get business as usual sausages out. If you put DevOps meat in you get DevOps sausages out. If you put company admin in you get company admin sausages out. Get the picture?

While it is great advice to “focus on just the project/product” the vast majority of teams I’ve ever worked with are not in a position to do that. Turning work down is above their pay grade.

Seeing the whole

In Xanpan I called this “team centric”. The project you are doing is less relevant than the workflow you are operating. Xanpan explicitly discusses how to integrate “urgent but unplanned work” with planned “project” work; I’ve extended the thinking with OKR Zero.

When things go wrong teams become like a saturated sponge. People can’t see the correlation between what goes in and what comes out. Trust is reduced, and more reporting, even policing, is added. The workflow becomes more complicated, less predictable and more costly.

It is no use looking at the project alone. Each project is only part of the picture. Nor will looking at the BAU, DevOps or urgent-but-unplanned work help. They are only pieces.

You need to look at the whole: the workflow, the sausage machine that makes the sausages.

It is of little use looking at the pork sausage project and asking how many pork sausages will come out next week if the team is also doing BAU and making some chicken sausages on the side.

Nor is it any use talking about the pork sausage project if every time the team turns the handle they have to stop: check with accounts, check with the security team, check with customers – all of whom have their own workflow. Customers who just want the team to “get on and deliver it.” Every time a team needs to interact with another team – to get permission, get feedback or anything else – things slow down and grind to a halt. Other teams are most likely struggling with similar things, so they all block one another.

Often when this happens, because people have the best intentions and want to be productive, they start doing something else. Pork sausage production stops while they wait for feedback on sausage sales, so they start producing chicken sausages. Then, just as chicken sausages start coming out, the pork feedback comes in and everything must switch back. But now the chicken meat is unwrapped and getting warm. By the time they get back to chicken sausages it has gone off.

It’s the workflow, stupid. Let me suggest again: watch Stockless Production.

No one person

Everyone, and every team, is linked together in the workflow, so it is difficult for one alone to make a difference. Working harder and producing more often makes things worse, not better. Individually people are pretty helpless.

Such workflow streams are full of work in progress (WIP); they are overloaded. This is really “work hopefully in progress” (WHIP). It is bad when one team is overloaded, but when there is excess strategic WIP the whole organization struggles. It is difficult to know where to begin fixing things. You still have to start fixing at the team level, but until multiple teams start fixing there is not much improvement to show.

No one person can fix this. No single technology can change it. Maybe not even a single team. Everyone is connected. Only by looking at the whole can things be fixed.

Unfortunately this is where project warriors come along. They insist that everything is a project – which increases administration. One or two projects get expedited and are forced through but everything else deteriorates.

Saddest of all, there are known solutions: work to completion, reduce work-piece size, operate a pull system not a push system, work within capacity, allow for shit to happen (unplanned but urgent work) and don’t overload the team – in fact don’t load them to more than 76%.
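On the “don’t load them to more than 76%” point, the intuition comes from queueing: as utilisation approaches 100%, waiting time explodes. A rough sketch of that relationship (my own illustration, using the simple approximation where delay grows in proportion to u/(1-u)):

    # Relative queueing delay versus utilisation (simple M/M/1-style approximation).
    for utilisation in (0.50, 0.76, 0.90, 0.95, 0.99):
        relative_delay = utilisation / (1 - utilisation)
        print(f"utilisation {utilisation:.0%}: relative delay {relative_delay:.1f}x")

At 76% load the delay factor is about 3x; at 95% it is nearly 20x.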

That is workflow management. The devil is in the detail; there are no big easy solutions – if there were they would have been applied already. Workflow management cuts across projects. Managers have a role to play here, but not project managers: project management is too narrow.

It’s the workflow, stupid.

So finally, an advert: I’d love to help, call me, e-mail me, LinkedIn, WhatsApp, whatever medium you like, just ask.


Is all management bad? Or is it just bad management?

There is an interesting piece in this week’s Economist about the poor quality of management in the UK, “For Britain to grow faster it needs better managers” (paywall). It suggests bad management is a large part of the productivity gap between the UK and other developed nations. Living in the UK, and having seen inside many British companies, this rings true with me. I’ve long thought it was less a case of “In search of excellence” and more a case of “In search of mediocracy.”

Now, that said, I don’t have an objective point of view: I’ve been involved with many “project rescues” or “turnarounds” (actually I quite enjoy them, call me!) in the UK, but my clients abroad are normally more stable. I suspect this is selection bias: because I’m UK-based it is easy for UK companies to ask for help. Flying someone in from abroad is a barrier, so only the better-managed companies in Europe and the USA would do it. So while I might think UK management is bad, it is entirely possible that it is bad everywhere. Indeed, I am sure there is bad management everywhere, and there is good management too; but in the UK the ratio of bad to good is higher.

The international agile movement doesn’t do much to encourage management to improve. All the anti-manager talk (“self managing teams”, “no project managers”, etc.) creates a barrier. It has long been my view that such anti-manager talk is largely a reaction to bad management and it is entirely possible that “no management” is better than “bad management.”

Simultaneously, “good management” can add value, and people don’t push back on “good management” (even if it gets branded as bad just because it is management). Sometimes, making things better for the many means being unpopular with a few. The few will voice their complaints more loudly than the many will voice their praise, and often it is hard to attribute success to managers anyway.

What we miss

The common agile view of management as “a bad thing” misses two points:

First off: removing the managers will remove some management work but will leave a lot. Removing managers does not remove management work. The work which remains either doesn’t get done (which is worse) or is spread around those who remain, so everyone’s work gets disrupted. On the whole these people don’t want to be managers and don’t have management skills, so they are unhappy. They do have other skills – business analysis, Java, support desk, whatever – so now they are not using their most productive skills and are unhappy with that too.

Second, and more importantly: removing managers doesn’t do anything to improve the skills of those who do management work, whether that is managers still in place or people who have to step up when managers are fired. In other words, all this talk of “no managers” stops us from improving management skills one way or another.

Yes, I think workers and teams should have a voice in the work they do.

Yes, I think we should make group decisions and take into account diverse opinions.

Yes, managers sometimes need to use authority, but good managers spend more time nudging, enthusing, guiding and structuring. Occasional use of authority can help; overuse undermines.

Yes, I think people can take more responsibility. Some of what passes for management work is admin that could be dropped, information sharing which could be automated, or managers making work for other managers.

But I happen to think good management recognises all those things and respects the expert workers.

Bad management ignores all those things and subscribes to the “Action Hero model of management”: you do this, you do that, I’ll seize the bridge, if I’m not back in 10 blow it all to kingdom come, move it!

The irony is, those who subscribe most strongly to the “no management” meme will say “Let the engineers (or doctors, or designers, or whatever) run things”, but when you do that you find a management-style cadre arises, made up of experts in their own field. Being a senior engineer (or whatever the profession) often means being a type of manager: they need their original skills but they also need some management skills. If they don’t learn those new skills they become bad managers.

