Succeeding with OKRs in Agile is on sale at Amazon this week only – the week of September 9, 2024.
For this week only the e-book version of Succeeding with OKRs in Agile is $2.99 / €2.99 / £2.99 instead of $9.99 / €9.95 / £9.95 – that is 70% off.
The print version is reduced by 50% or more, depending on market, from $19.95 to $9.95 – although some of these price cuts have yet to come through on the print version.
Other markets have similar massive price reductions this week only – there are a few exceptions because Amazon sets a minimum price in some markets. And the price cuts are limited to Amazon; prices on Kobo and elsewhere remain the same because of the mechanics of the price cut.
1) My OKR Writing workshop is back in October and is booking now. This time I’m running it with the help of my friends at Avanscoperta, so all ticket bookings are through their platform.
All the details – and a discount – are on the Avanscoperta website, where you will also find a recording of an OKRs webinar I ran last month.
In short, we look at what makes a good OKR and what makes a bad one, then we get people writing OKRs and move the discussion on to how to measure key results. Simple really.
2) Meanwhile, I appeared on the Agile in Action podcast last week and discussed how to introduce OKRs, how to use OKRs and my famous call to “Nuke the backlog”.
Small, small, small – I have spent a lot of my career arguing for small: small tasks, small user stories, small teams, small releases, small funding increments, small “projects”. I argue we should get good at small and optimise our systems for doing lots of small. I can justify these arguments – see Project Myopia or Diseconomies of Scale. Small makes sense.
But…
While focusing on small is good for delivery, it creates other problems. In truth, no solution is ever without consequences, and few have no negative ones; all we can strive for is more positive consequences than negative.
It is not enough to get good at small in delivery; one needs to couple that with an understanding of what is commonly called the bigger picture, and which I prefer to think of as the broader picture.
The failure to situate small in the broader context underlies many of the problems we see in the workplace today. Take work management: ignoring the broad leads to micro-management, disempowered staff, frustrated employees and failed collaboration.
Product Owners – I include Product Managers here – are failing because they fail to see the broader picture: What problem are we trying to solve? How can we bring value to customers?
Product people are too often too focused on features. While I’ve recently seen some point the finger of blame at product owners/managers, I think they are only responding to their environment. Companies are operating feature factories and sales are made on features; people think “more” when they should think “better”. Product people need to get out and meet customers, and bring what they learn back so they can try to change the inside.
The feature, feature, feature attitude is also behind the backlog fetish, which leads to backlogs stuffed full of ideas which are never, ever, going to be implemented.
The discussion needs to be broadened. We need to get away from quick wins and features; we need to think more broadly. We need to think about the big things: goals, objectives, purpose and even meaning.
Post-pandemic it is common to hear of people seeking meaning in their work; no wonder so many people are dropping out of the workforce when the best they are offered is “more stuff to do.” In looking at broader goals we also need to recognise goals within goals: we need to uncover the hierarchy of (possibly competing) goals, call them out and work with them.
That is why I am keen to emphasise outcomes over outputs, and it is why it is tempting to think of a great big funnel containing a machine for breaking the big into small (or “rock crushing” as my old friend Shane Hastie would put it).
The challenge is to combine the need to focus on the small for delivery with the ability to think broadly. In part it is this challenge that has caused me to focus more on agility than on agile.
Another part of the solution is iteration: we spend a lot of our time in the small, focusing on delivery, but from time to time we surface and consider the wider context. That’s why I embrace the OKR cycle: it gives everyone a chance to understand both and take part in both discussions.
Underlying so much of my work over the years has been a desire to remove intermediaries: it’s why I like having a coder speak directly to a user rather than through a BA, it’s one of my objections to projects (which claim to show the big picture but actually present a restricted view), it’s lurking in my dislike of estimates, and it’s part of my dislike of backlogs.
Asking people to carry the broad picture in their minds while working in the small is asking a lot. That’s why the cycle of thinking broad, setting goals, then switching into narrow mode for delivery works so well. It’s fair, it includes everyone and it gives everyone the reason why we do what we do.
In short, we need to think broadly when deciding “what is the right thing to do”, then switch into the small to deliver. Importantly, we need to share not just that thinking but also the discussion. Everyone has a right to be heard.
It’s not 1970 any more; even 1974 was 50 years ago. I used to make this point regularly when discussing the project model and #NoProjects. Now, though, I want to make the same point about OKRs.
The way some people talk about OKRs you might think it was 1974: OKRs are a tool of command and control, they are given to workers by managers, workers have little or no say in what the objectives are, the key results are little more than a to-do list and there is an unwritten assumption that if key results 1, 2 and 3 are done then the objective will be miraculously achieved. In fact, those objectives are themselves often just “things someone thinks should be done” and shouldn’t be questioned.
I’d like to say this view is confined to older OKR books. However, while it is not so common in more recent books, many individuals still carry these assumptions.
A number of things have changed since OKRs were first created. The digital revolution might be the most obvious, but digital is only indirectly implicated: it lies behind two other forces here.
First, we’ve had the agile revolution: not only does agile advocate self-organising teams, but workers, especially professional knowledge workers, have come to expect autonomy and authority over the work they do. This is not confined to agile; it is also true of the Millennial and Generation-Z workers who have recently entered the workforce.
Digital change is at work here: digital tools underpin agile, and Millennials and Gen-Z have grown up with digital tools. Digital tools magnify the power of workers while making it essential the workers have the authority to use the tool effectively and make decisions.
Having managers give OKRs to workers, without letting the workers have a voice in setting the OKRs, runs completely against both agile and generational approaches.
Second, in a world where climate change and war threaten our very existence, where supposedly safe banks like Silicon Valley Bank and Lehman Brothers have failed, and where companies like Thames Water have become a byword for greed over society, many are demanding more meaning and purpose in their work – especially those Millennials.
Simply “doing stuff” at work is not enough. People want to make a difference. Which is why outcomes matter more than ever. Not every OKR is going to result in reduced CO2 emissions but having outcomes which make the world a better place gives meaning to work. Having outcomes which build towards a clear meaningful purpose has always been important to people but now it is more important than ever.
Add to that the increased volatility, uncertainty and complexity of our world, plus the ambiguous nature of many events, and it is no longer good enough to tell people what to do. Work needs to have meaning, both so people can commit to it and so they can decide what the right thing to do is.
In 2024 the world is digital and the world is VUCA; workers demand respect, meaning and to be treated like partners, not gofers.
OKRs are a powerful management tool but they need to be applied like it is 2024 not 1974.
One of the claims made for OKRs is that they solve the problem of alignment – strategic alignment specifically. So anyone reading my “Pull don’t push OKRs” post might be wondering how this can be. I’m sure many people will ask how alignment can be achieved without some master plan and planner.
The obvious way to ensure teams are aligned with company strategy is clearly to tell them what to do. Obviously, if we have some master planner sitting at the centre they can look at the strategy, decide what needs doing and issue commands, using OKRs, to teams.
Think of this like programming: there is a core controller and each team is a sub-routine which is called to do its bit. Alignment can be programmed through step-wise refinement, with each layer elaborating on the ask and passing instructions down.
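To make the analogy concrete, here is a toy sketch in Python. The function names and the decomposition are entirely my own invention, purely illustrative: a central planner decomposes the strategy and “calls” each team like a sub-routine, passing instructions down.

```python
# Toy illustration of the cascade model: a central controller decomposes
# strategy and calls each "team" like a sub-routine. All names are hypothetical.

def marketing_team(instruction: str) -> None:
    print(f"Marketing executes: {instruction}")

def product_team(instruction: str) -> None:
    print(f"Product executes: {instruction}")

def decompose(strategy: str):
    # Step-wise refinement: the centre elaborates the ask for each layer below.
    return [
        (marketing_team, f"Generate enterprise leads for: {strategy}"),
        (product_team, f"Build features supporting: {strategy}"),
    ]

def central_planner(strategy: str) -> None:
    # The centre decides what needs doing and issues commands downwards;
    # the teams have no say, they are simply called.
    for team, instruction in decompose(strategy):
        team(instruction)

central_planner("Grow revenue in the enterprise segment")
```

Notice that in this model the teams never get a say; they are simply invoked.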
Obvious really. Why did I even bother describing it?
Obvious and wrong
It is also obvious that this is not agile. But heck, we’re all “pragmatic”, and obviously achieving this kind of co-ordination requires some compromise – in this case, commands must be passed down.
Personally, I have an aversion to any scheme that involves telling others what to do. There are just so many things that could go wrong; far better to come up with a scheme that is failure tolerant.
Rather than spend my time explaining why this approach is wrong let us try a thought experiment: for the next few minutes accept I am right and we can’t tell others what to do. (Leave me a comment if you want me to set out the reasons why.)
Now the question is: what does work?
Emergent alignment.
This is a design problem (“how do we design work so that disparate teams work harmoniously towards a common goal?”), so the principle of emergent design should work too. We want to create a mechanism which allows that design to emerge.
Now OKRs have a role to play.
By setting out what a team intends to achieve in a short, standardised format, OKRs allow intentions to be communicated and shared. Thus OKRs can be placed next to other OKRs and compared. Sharing in a standardised format allows misalignment to become clear. Once the problem can be seen, action can be taken to realign: OKRs build a feedback loop.
OKRs are another example of an agile tool which allows problems to be seen more clearly. Someone once said “Agile is a problem detector”; for me, OKRs are a strategy debugger. Simply using OKRs does not automatically solve the problem, but it makes the problem that needs solving clearer.
The organisation sets out its goal(s) and strategy. Teams are asked to produce OKRs to advance on that goal. If the strategy incorporates all necessary information, is communicated clearly and teams are completely focused on the strategy then everything will work.
But if strategy is absent, if the strategy has overlooked some key piece of information, if the strategy is miscommunicated (or not communicated at all), or if teams have other demands, then things will not align. When this happens there is work to do: correcting problems and creating OKR–strategy alignment.
Step 1: the centre – the senior leaders – sets out the strategy and goals, which should themselves align with the purpose and history of the organization.
Step 2: teams look at these goals, look at the other demands on them and the resources they have, and ask themselves: what outcomes do we need to bring about to move towards those goals while following the strategy?
They write OKRs for the next period based on their understanding.
Step 3: members of the centre look at those OKRs and talk to the team. Everyone seeks to understand how the OKRs will advance the overall goal. If everything aligns then great, start work!
If alignment is missing then work is required: perhaps work on the OKRs, or perhaps the strategy needs clarifying or the goals adjusting.
Because this approach is feedback based it is self-correcting. The catch is that, for the feedback loop to work, people need to invest time in reading OKRs, looking for alignment – and misalignment – and correcting.
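For the programmers among you, here is a minimal sketch of that feedback loop. The data structures and the crude keyword check are my own invention, purely illustrative: teams publish OKRs in a standard format, the centre compares them against the stated goals, and anything that doesn’t line up is flagged for a conversation rather than silently overruled.

```python
# Minimal sketch of OKRs as a "strategy debugger". The data and the naive
# keyword check are hypothetical; the point is the loop: teams publish OKRs
# in a standard format, the centre reviews them against the stated goals,
# and anything that doesn't line up triggers a conversation.

from dataclasses import dataclass

@dataclass
class OKR:
    team: str
    objective: str
    key_results: list[str]

strategic_goals = ["grow the Australian market", "reduce churn"]

team_okrs = [
    OKR("Web", "Launch a French-language storefront",
        ["French checkout live", "1,000 French orders this quarter"]),
    OKR("Support", "Cut churn among existing customers",
        ["Churn below 2% per month", "NPS above 40"]),
]

def review(okrs: list[OKR], goals: list[str]) -> None:
    # Naive alignment check: does the objective share a significant keyword
    # with any strategic goal? Real reviews are conversations, not string matching.
    for okr in okrs:
        aligned = any(
            word in okr.objective.lower()
            for goal in goals
            for word in goal.lower().split()
            if len(word) > 4
        )
        status = "looks aligned" if aligned else "possible misalignment - talk to the team"
        print(f"{okr.team}: {okr.objective!r} -> {status}")

review(team_okrs, strategic_goals)
```

The value is not in the check itself but in making misalignment visible early enough to talk about it.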
In the “obvious solution” you don’t need these time consuming steps because the centre assumes it is right and anything which goes wrong is someone else’s fault.
By the way, a smaller version of the alignment problem is sometimes called “co-ordination.” This is where two or more teams need to align or co-ordinate their work to create an outcome, e.g. Team A needs Team B to do something for them. The same principles apply as before, only here it is the team members who need to compare the OKRs.
So there you have it. OKRs are a strategy debugger and they create alignment by building a feedback loop to promote emergent alignment.
There is a divide in the way Objectives and Key Results (OKRs) are practiced. A big divide: a divide between the way some of the original authors describe OKRs and the way successful agile teams implement them. If you haven’t spotted it yet it might explain some of your problems; if you have spotted it you might be feeling guilty.
The first school of thought believes OKRs should be set by a central figure. Be it the CEO, division leadership or central planning department, the OKRs are set and then cascaded, waterfall style, out to departments and teams.
Some go as far as to say “the key results of one level are the objectives of the lower levels.” So a team receiving an OKR from on high peels off the key results and promotes each one to objective status. Next they add some new key results to each objective and hand the newly formed OKRs to a subordinate team. The game of pass-the-parcel stops when OKRs reach the lowest tier and there is no one left to pass them to.
The second school of thought, the one this author aligns with, notes that cascading OKRs in this fashion goes against the agile principle: “The best architectures, requirements, and designs emerge from self-organizing teams.” In fact, this approach might also reduce motivation and entrench the “business v. engineer” divide.
Even more worryingly, cascading OKRs down could reduce business agility and eschew the ability to use feedback as a source of competitive advantage.
Cascading OKRs
We can imagine an organization as a network of nodes and connecting edges. In the cascading model information is passed from the edge nodes to the centre. The centre may also be privy to privileged information not known to the edge teams. Once the information has been collected the centre can communicate OKRs back out to the nodes.
One of the arguments given for this approach is that central planning allows co-ordination and alignment because the centre is privy to the maximum amount of information.
A company using this model is making a number of implicit assumptions and policies:
Staff at the centre have the skills both to collect and to assimilate information.
That information is received, decisions made and plans issued back in a timely fashion. Cost of delay is negligible.
However, in a more volatile environment each of these assumptions breaks down. Rapidly changing information may only be known to the node, simply because the time it takes to codify the information — write it down or give a presentation — may mean it is out of date before it is communicated. In fact the nodes may not even know they know something that should be communicated. Much knowledge is tacit knowledge and is difficult to capture, codify and communicate; consequently it is excluded from formal decision-making processes.
The loss of local knowledge represents a loss of business agility as it restricts teams’ ability to act on changing circumstances. Inevitably there will be delays both in gathering information and in issuing OKRs. As an organization scales these delays will only grow as more information must be gathered and interpreted, and more decisions transmitted out. Connecting the dots becomes more difficult when there are more dots, and exponentially more connections, to connect.
This approach devalues local knowledge, including capacity and ambition. Teams which have no say in their own OKRs lack the ability to say “Too much”; their goals are set based upon what other people think — or want to think — they are capable of.
Similarly, the idea of ambition, present in much OKR thinking, moves from being “I want to strive for something difficult” to “I want you to try doing this difficult thing.” Let me suggest that people are more motivated by difficult goals they have set themselves than by difficult goals which are given to them.
Finally, the teams receiving the centrally planned OKRs are likely to experience some degree of disempowerment. Rather than being included and trusted in the decision-making process, team members are reduced to mere executors. Team members may experience goal displacement and satisficing. Hence this is unlikely to lead either to high-performing teams or to conscientious, responsible employees.
Any failure in this model can be attributed to the planners, who failed to anticipate the response of employees, customers or competitors. Of course this means that the planners need more information, but then any self-respecting planner will have factored their own lack of information into the plan.
Distributed OKRs
In the alternative model, distributed OKRs, teams set their own OKRs and feed them back to any central node and to leaders. This allows teams to factor in local knowledge, explicit and tacit, set OKRs in a timely fashion and determine their own capacity and ambitions.
One example of using local knowledge is how teams manage their own workload, for example balancing business-as-usual (or DevOps) work with new product development. As technology has become more common, fewer teams are able to focus purely on new product development and leave others to maintain existing systems.
Now those who advocate cascading OKRs will say: “How can teams be co-ordinated and aligned if they do not have a common planning node?” But having a common planner is not the only way of achieving alignment.
In this model teams have a duty to co-ordinate with both the teams they supply and the teams which supply them. For example, a team building a digital dashboard would need to work with teams responsible for incoming data feeds and those administering the display systems. Consequently, teams do not need information from every node in the organization — as a central planning group would — but rather only from those nodes which they expect to interact with.
This responsibility extends further, beyond peer teams. Teams need to ensure that their OKRs align with other stakeholders in the organization, specifically senior managers. In the same way that teams will show draft OKRs to peer teams, they should show managers what they plan to work on, and they should be open to feedback. That does not mean a manager can dictate an OKR to a team, but it does mean they can ask, “You are prioritising the French market in these OKRs, but our company strategy is to prioritise Australia. Is there a reason?”
A common planner is but one means of co-ordination; there are other mechanisms. Allowing teams the freedom to set OKRs means trusting them to gather and interpret all relevant information. When teams create OKRs which do not align it is an opportunity, not a failure.
When two teams have OKRs which contradict, or when team OKRs do not align with executive expectations, there is a conversation to be had. Did one side know something the other did not? Was a communication misinterpreted? Maybe communication simply failed?
Viewed like this, OKRs are a strategy debugger. Alignment is not mandated but rather emerges over time. In effect, alignment is achieved through continual improvement.
These factors — local knowledge and decision making, direct interaction with a limited number of other nodes and continual improvement — are the basis for local agility.
Pull don’t push
Those of you versed in the benefits of pull systems over push systems might like to see this argument in pull–push terms. In the top-down approach each manager, or node, pushes OKRs to the nodes below them. As with push manufacturing, the receivers have little say in what comes their way; they do their bit and push to the next lucky recipient in the chain.
In the distributed model teams pull their OKRs from their stakeholders. Teams ask stakeholders what they want from the team and agree only as many OKRs as they can do in the coming cycle.
This may well mean that some stakeholders don’t get what they wanted. Teams only have so much capacity, and the more OKRs they accept the fewer they will achieve. Saying No is a strategic necessity; it is also an opportunity to explore different options.
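If it helps, here is a tiny sketch of the pull model. The requests, capacity and ranking are invented for illustration: the team pulls only as many OKRs as its capacity allows and explicitly declines the rest for this cycle.

```python
# Sketch of "pull, don't push" for OKRs. The numbers and names are invented;
# the point is that the team, knowing its own capacity, pulls a small number
# of OKRs from stakeholder requests and says "not this cycle" to the rest.

stakeholder_requests = [
    "Reduce checkout abandonment",
    "Launch partner API",
    "Migrate reporting to new data platform",
    "Improve mobile onboarding",
]

TEAM_CAPACITY = 2  # OKRs the team believes it can meaningfully pursue this cycle

def pull_okrs(requests: list[str], capacity: int) -> tuple[list[str], list[str]]:
    # Assume the requests arrive already ranked by the conversation with stakeholders.
    accepted = requests[:capacity]
    declined = requests[capacity:]
    return accepted, declined

accepted, declined = pull_okrs(stakeholder_requests, TEAM_CAPACITY)
print("This cycle:", accepted)
print("Not this cycle (say No, revisit later):", declined)
```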
A few weeks ago I had a conversation with a potential client about OKRs. They started talking about “initiatives.” In fact, they talked about “initiatives” as a standard part of OKRs – one of those moments when self-doubt sets in. I started wondering “What do they mean?” And, more worryingly, “How do I not know about initiatives?”
When I did some digging it turned out that one, or possibly more, OKR consultancies talk about “initiatives” as a third level of OKR. For these consultants there is a hierarchy: objective at the top, key results below that, and then initiatives as the “things you will do to deliver the key results and therefore the objective.”
Umm, maybe
In one way, I like the thinking. I agree that “what we will do” is not part of the objective and it’s not the key results. (A common mistake with OKRs, one I made myself years back, is seeing them as the to-do list.) So I can see why they label the things to do as another level. At the same time, I see two problems.
First is the hierarchical decomposition. Again, the idea that an initiative builds towards one key result which builds towards one objective. Once you start viewing key results as acceptance criteria which describe the post-objective world, this breaks down – the key results become cross-cutting. If your key result is “Customer receives their order within 48 hours”, for an objective of “Satisfied customers”, there is probably not just one thing to do. That goal may cut across lots of other pieces of work.
Is an initiative big or small?
Second, and perhaps more importantly, the word “initiative” is already widely used and means different things to different people, creating a recipe for confusion.
Specifically, although the #NoProjects community never standardised on the word, it is widely used as an alternative to “project” to describe a stream of work, an endeavour, a mission, a programme, or an ongoing effort. So for many of us, an initiative is not a small piece of work sitting below key results, but rather a big stream of work sitting above objectives.
This also hints at the reason why “initiative” was never agreed on. For many of us “initiative” has overtones of “beginning” – indeed my Apple dictionary uses words like “originate”, “before” and “fresh” when defining “initiative.” (In Dungeons and Dragons players roll “initiative” at the start of a fight to see who goes first).
So what do you think? Am I too sensitive? Have I missed something critical? – let me know in the comments or drop me a mail.
Still, there is most definitely a need to decide what actions are needed to deliver OKRs. When and how to do that will be covered in future posts, stay tuned. In the meantime, if you use the word initiative make sure you clearly tell people what you mean by the word.
Intended strategy goes wrong in one of two ways. First: it is the wrong strategy. It might be badly conceived, it might be aiming at the wrong target, it might be based on weak analysis or misunderstanding.
Or it might go wrong because it is poorly executed. Strategy execution begins as soon as strategy is decided on because it needs to be communicated. Bungle the strategy communication and you are unlikely to recover.
Indeed, communication issues run all the way through strategy execution because the meaning of any message is decided not by the speaker but by the listener. The executive charged with explaining strategy might think it is clear, but each receiver understands the message in their own context. Even if the executive uses clear language – and let’s face it, many don’t – they have no way of knowing what the receiver takes away.
The only way to know if a message is received correctly is with feedback. Similarly, feedback is needed if bad strategy is to be exposed. The executives setting strategy may lack information which those on the ground have and which undermines strategy. Executives think the strategy is great but to everyone else, the emperor has no clothes.
In other words: Linus’ Law applies to business strategy as much as computer software: “given enough eyeballs, all bugs are shallow”. Exposing more people to a strategy allows more brain power and more information to be applied. But those brains can only have an influence if there is some means of feedback.
This is where OKRs come in. OKRs make strategy visible.
Remember that in my formulation teams set their own OKRs. This is the first test. Leaders set out the organization’s purpose, missions, visions, grand goals and strategic intent, then ask the teams: “How can you help?”
Teams respond by setting OKRs which will advance those goals. OKRs, written in a standardised format, are feedback from the teams to the upper echelons.
In my model leaders and managers are stakeholders in teams. While teams, which may include managers, are as autonomous as possible they are not free to do what they like. They are part of a system and exist to deliver to stakeholders. As such I expect team OKRs to be reviewed by leadership and those leaders to provide feedback. This is a debugging loop.
1. Leadership sets the strategy and intent, then communicates it out.
2. Team members hear and formulate OKRs to deliver that strategy and intent as they understand it.
3. Leadership reviews OKRs and will expect to find OKRs which, well, support their intent.
This “OKR sandwich” fits with the Strategy Rethink I’ve talked about before. The sandwich creates two opportunities for misalignment (bugs) to be exposed. First, when the team sets the OKRs they might find the strategic intent does not match their world. That might be a misunderstanding or it might be because team members have information leadership does not.
Second, when leadership reviews OKRs they have the opportunity to find bugs. Discrepancies found at this point might indicate communication has failed, or it might indicate the team have additional information.
To be clear: OKRs which don’t match expected strategy are not the cause of problems but a symptom. Noticing the discrepancy early means corrective action can be taken quickly. Yes, the OKR needs fixing, but so too does the cause of the discrepancy.
Even after the OKR setting and review process, during execution, OKRs continue to play a debugging role. It is at this point that the “rubber hits the road” for the first time. Thus it is necessary for executives to keep feedback channels open.
It came up again today; actually, it comes up a lot once you delve into OKR setting:
“When writing OKRs, and specifically the key results, how can we set a target if we don’t know the baseline?
For example, suppose we want to increase the number of site visitors, if we have 1,000 visitors a day then the target would be 5,000 but if we only have 200 a day then the target would be 1,000. But if we don’t know how many we have to start with, how can we set a target?”
I call it the baseline problem. I know it troubles many teams but it shouldn’t. There are actually two, or possibly three, answers.
The easy answer, and one I don’t like, is to sidestep the problem: instead of saying “increase views from 1,000 to 5,000” say “increase views fivefold”, or, to take another example, instead of “reduce run time from six hours to one hour” say “reduce run time to one sixth.”
Replacing initial and final numbers with a multiplier doesn’t solve the whole problem because you still need to find out where you are before you change anything. So task 1 becomes: measure the status quo, whether that is eyeballs, run-time or whatever.
Another way around the problem is to change your OKR after setting it. The initial OKR could say “increase views from ____ to ____.” Again, task #1 is to measure, and at that point you simply revisit the OKR and fill in the blanks.
Granted some people might take exception to you tweaking an OKR after the start of the cycle but personally I don’t see that as a big issue.
Baselines solve the wrong problem
Now, sidestepping the problem like this might keep you moving but it is not very satisfying, because it is solving the wrong problem. The real problem is not that you don’t know the baseline but that you don’t know what the number should be. When you set the target by starting with the baseline you are putting technology first and limiting your ambitions.
The question is not “How many visitors do we have now?” and “What target might we achieve?” but “What visitor numbers do we need to have?” Or maybe “What visitor numbers do we need to make us number one in our market?” or “What numbers do we need to break even? Get investment? Impress clients?”
Or take the run-time example, don’t ask “How fast are we?” and “How fast can we be?”. Instead ask “How fast do we need to be?” If you don’t know the answer then ask “Why do we want to be fast?”. Or ask, “What advantage will being twice as fast bring us?”
Rather than start with the target in mind start with the outcome in mind, then ask what you need to achieve that.
If you set the target by reference to the current baseline you are going to limit ambitions and let the current technology drive. Instead think about the outcome and what that outcome looks like, then work back to understand what needs to be done and options for doing it.
“There are no silver bullets,” wrote the late, great Fred Brooks. Consequently most writers and evangelists avoid claiming they have a silver bullet, even when it sounds like they are claiming just that. I’ve cautioned against silver bullets several times in this blog. I often find myself telling clients “The devil is in the detail,” meaning: there are lots of things to address and no single big fix.
Anyone claiming to have found a silver bullet deserves to be faced with scepticism. Still…
I find myself coming back to two solutions again and again. They are the closest thing to silver bullets I know. Even if these are not silver bullets in their own right applying either makes it easier to work the detail and tackle problems.
Yet these silver bullets dare not speak their name. To do so risks endless debate and damaging your own leverage.
Keep reading for the Silver Bullets
Writing this I’m avoiding naming the potential silver bullets because I fear many readers will stop reading the moment I name them. Please, keep reading.
Both are widely discussed by the agile cognoscenti but both are controversial. Suggesting either may lead people to think you fail to comprehend the situation or are just stupid. Thus in both cases you are likely to end up in long discussions.
Both are disliked by opposite ends of the organizational hierarchy. Both ends are optimistic and prefer to believe people should be able to rectify problems themselves (the people problem problem again).
Both bullets are rarely applied with vigour, perhaps because of the previous points. When I raise them, people plead helplessness: “My boss doesn’t understand.”
Both bullets scale: in the small and large, although they manifest themselves in different ways at different scale points.
Neither is an instant fix but both start to deliver returns relatively quickly. The problem with both is you need to keep the faith and keep applying them for weeks to see a difference. In both cases, people often give up before they see the benefit.
Bullet #1: test first
Test first will not fix all problems but it will remove a substantial number of them. That makes managing the remaining ones easier. And as to where the “extra time” comes from, that is easy: the time you don’t spend fixing defects and misunderstandings. Test first is faster than debug later.
I’ve written a lot about test first so I won’t say more just now.
Bullet #2: reduce Work in Progress (WIP)
Most commonly WIP limits are applied by limiting the columns on a Kanban board or limiting the work taken into a sprint. Done right, OKRs limit WIP too. However, reducing WIP requires discipline.
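For the avoidance of doubt, here is a minimal sketch of a column WIP limit. The board contents and the limit are invented for illustration: new work cannot enter “In progress” until something leaves it.

```python
# Sketch of a WIP limit on a Kanban column: work cannot enter "In progress"
# once the limit is reached, forcing something to finish (or an explicit
# conversation) before new work starts. Board and limit are illustrative.

board = {"To do": ["A", "B", "C", "D"], "In progress": ["E", "F"], "Done": []}
WIP_LIMIT = 2  # maximum items allowed in "In progress"

def start_next(board: dict[str, list[str]], limit: int) -> None:
    if len(board["In progress"]) >= limit:
        print("WIP limit reached: finish something before starting new work")
        return
    if board["To do"]:
        item = board["To do"].pop(0)
        board["In progress"].append(item)
        print(f"Started {item}")

start_next(board, WIP_LIMIT)   # blocked: two items already in progress

# Finish one item...
board["In progress"].remove("E")
board["Done"].append("E")

start_next(board, WIP_LIMIT)   # ...and now "A" can start
```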
At the higher levels, companies and government entities seem quick to reduce staffing numbers but slow to reduce work. Accepting new “projects” is easy but resourcing them is difficult. Rather than prioritise, say “No” and push back on work, everything is taken on. Individuals who push back are seen as pessimists, “not team players” and “obstacles.” Pushing back does not help your chances of getting promoted, so leadership ranks tend to be populated by those who accept.
The result is salami-sliced people and slow progress across a broad front rather than rapid progress across a narrow one. But reducing WIP also seems to be the hardest medicine to administer. In fact, I sometimes find myself hiding WIP reduction measures.
As an aside, I feel WIP has gotten worse since the pandemic struck: the move to remote working means every conversation is now a meeting in the diary. Our diaries are now overrun with “meeting WIP”. An ad hoc 10-minute conversation at the coffee machine is now a 30-minute Zoom call planned days in advance.
WIP reduction is applicable at all levels: reduce the number of pieces of work the individual is switching between, reduce the number of pieces of work teams are tackling, reduce the number of projects a department has in flight, and most of all: reduce strategic WIP.
Finally, I’ll just note: I sometimes wonder how I stopped being an OKR cynic and learned to like them. The thing is, I see OKRs as a tool to address both these problems: OKRs are test first, and limiting OKRs is WIP limiting.