
Tax the data


If data is the new oil then why don’t we tax it?

My data is worth something to Google, and Facebook, and Twitter, and Amazon… and just about every other Internet behemoth. But on its own my data is worth a vanishingly small amount.

I’d like to charge Google and co. for having my data. The amount they currently pay me – free search, free email, cheap telephone… – doesn’t really add up. What Google pays me doesn’t pay my mortgage, yet somehow Larry Page and Sergey Brin are very, very rich. Even if I did invoice Google, and even if Google paid, we are talking pennies at most.

But Google don’t just have my data, they have yours, your Mum’s, our friends’, our neighbours’ and just about everyone else’s. Put it all together and it is worth more than the sum of the parts.

Value of my data to Google < 1p
Value of your data to Google < 1p
Value our combined data to Google > 2p

The whole is worth more than the sum of the parts.

At the same time Google – and Facebook, Amazon, Apple, etc. – don’t like paying taxes. They like the things those taxes pay for (educated employees, law and order, transport networks, legal systems – particularly the bit of the legal system that deals with patents and intellectual property) but they don’t want to pay.

And when they do pay they find ways of minimising the payment and moving money around so it gets taxed as little as possible.

So why don’t we tax the data?

Governments, acting on behalf of their citizens, should tax companies on the data they harvest from their users.

All those cookies that DoubleClick put on your machine.

All those profile likes that Facebook has.

Sure there is an issue of disentangling what is “my data” from what is “Google’s data” but I’m sure we could come up with a quota system, or Google could be allowed a tax deduction. Or they could simply delete the data when it gets old.

I’d be more than happy if Amazon deleted every piece of data they have about me. Apple seem to make even more money than Google and they make me pay. While I might miss G-Drive I’d live (I pay Dropbox anyway).

Or maybe we tax data-usage.

Maybe it’s the data users, the Cambridge Analyticas of this world, who should be paying the data tax. Pay for access: the ultimate firewall.

The tax would be levied per user within a geographic boundary, so these companies couldn’t claim the data was held in low-tax Ireland – the data generators (you and me) reside within state boundaries. If Facebook wants to have users in England then they would need to pay the British Government’s data-tax. If data that originates with English users is used by a company, no matter where that company is based, then the Government gets a cut.

This isn’t as radical as it sounds.

Governments have a long history of taxing resources – consider property taxes. Good taxes encourage consumers to limit their consumption – think cigarette taxes – and it may well be a good thing to limit some data usage. Anyway, that’s not a hard and fast rule – the Government taxes my income and they don’t want to limit that.

And consider oil – after all, how often are we told that data is the new black gold? Countries with oil impose a tax (or charge) on the oil companies which extract it.

Oil taxes demonstrate another thing about tax: Governments act on behalf of their citizens, like a class-action.

Consider Norway: every citizen of Norway could lay claim to part of the Norwegian oil reserves, and they could individually invoice the oil companies for their share. But that wouldn’t work very well – too many people and, again, the whole is worth more than the sum of the parts. So the Norwegian Government steps in, taxes the oil and then uses the revenue for the good of the citizens.

In a few places – like Alaska and the Shetlands – we do see oil companies distributing money more directly.

After all, Governments could do with a bit more money and if they don’t tax data then the money is simply going to go to Zuckerberg, Page, Bezos and co. They wouldn’t miss a little bit.

And if this brings down other taxes, or helps fund a universal income, then people will have more time to spend online using these companies and buying things through them.

Read more? Subscribe to my newsletter – free updates on blog posts, insights, events and offers.


MVP is a marketing exercise not a technology exercise

… Minimally Viable Product

Possibly the most fashionable and misused term in the digital industry right now. The term seems to be used by one side or the other to criticise the other.

I recently heard another Agile Coach say: “If you just add a few more features you’ll have an MVP.” I wanted to scream “Wrong, wrong, wrong!” But I bit my tongue (who says I can’t do diplomacy?)

MVP often seems to be the modern way of saying “The system must do…” – MVP has become the M in MoSCoW rules.

Part of the problem is that the term means different things to different people. Originally coined to describe an experiment (“what is the smallest thing we could build to learn something about the market?”) it is almost always used to describe a small product that could satisfy the customer’s needs. But when push comes to shove that small product usually isn’t very small. When MVP is used to mean “cut everything to the bone” the person saying it still seems to leave a lot of fat on the bone.

I think non-technical people hear the term MVP and think “A product without all that gold-plating, software-engineering fat that slows the team down.” Such people show how little they actually understand about the digital world.

MVPs should not be about technology. An MVP is not about building things.

An MVP is a marketing exercise: can we build something customers want?

Can we build something people will pay money for?

Before you use the term MVP you should assume:

  1. The technology can do it
  2. Your team can build it

The question is: is this thing worth building? And, before we waste money building something nobody will use, let alone pay for, what can we build to make sure we are right?

The next time people start sketching an MVP divide it by 10. Assume the value is 10% of the stated value. Assume you have 10% of the resources and 10% of the time to build it. Now rethink what you are asking for. What can you build with a tenth?

Anyway, the cat is out of the bag, as much as I wish I could erase the abbreviation and name from collective memory I can’t. But maybe I can change the debate by differentiating between several types of MVP, that is, several different ways the term MVP is used:

  • MVP-M: a marketing product, designed to test what customers want, and what they will pay for.
  • MVP-T: a technical product designed to see if something can be built technologically – historically the terms proof-of-concept and prototype have been used here
  • MVP-L: a list of MUST HAVE features that a product MUST HAVE
  • MVP-H: a HiPPO MVP, a special case of MVP-L where the feature list is the highest paid person’s opinion; unfortunately you might find several different people believe they have the right to set the feature list
  • MVP-X: where X is a number (1,2, 3…), this derivative is used by teams who are releasing enhancements to their product and growing it. (In the pre-digital age we called this a version number.) Exactly what is minimal about it I’m not sure but if it helps then why not?

MVP-M is the original meaning while MVP-L and MVP-H are the most common types.

So next time someone says “MVP” just check, what do they mean?

Read more? Join my mailing list – free updates on blog posts, insights, events and offers.


Utilisation and non-core team members


“But we have specialists outside the team, we have to beg… borrow… steal people… we can never get them when we want them, we have to wait, we can’t get them enough.”

It doesn’t matter how much I pontificate about dedicated, stable, consistent teams, I still hear this again and again. Does nobody listen to me? Does nobody read Xanpan or Continuous Digital?

And I wonder: how much management time is spent arguing over who (which individuals) is doing what this week?

Isn’t that kind of piecemeal resourcing micro-management?

Or is it just making work for “managers” to do?

Is there no better use of management time than arguing about who is doing what? How can the individuals concerned “step up to the plate” and take responsibility if they are pulled this way and that? How can they really “buy in” to work when they don’t know what they are doing next week?

Still, there is another answer to the problem: “How do you manage staffing when people need to work on multiple work streams at once?”

Usually this is because some individuals have specialist skills or because a team cannot justify having a full time, 100%, dedicated specialist.

Before I give you the answer let’s remind ourselves why the traditional solution can make things worse:

  • When a resource (people or anything else) is scarce, queues are used to allocate it – and queues create delays
  • Queues create delays in getting the work done – and are bad for morale
  • Queues are another form of cost: in this case the price comes from the cost-of-delay
  • Queues disrupt schedules and predictability
  • During the delay people try to do something useful, but the useful thing is lower value and may create more work for others; it can even create conflict when the specialist becomes available

The solution – and it is a generic solution that can be applied whenever some resource (people, beds, runways) is scarce – is:

Have more of the scarce resource than is necessary.

So that sounds obvious I guess?

What you want is for there to be enough of the scarce resource that queues do not form. As an opening gambit have 25% more of the resource than you expect to need; do not plan to use the scarce resource for more than 75% of the available time.

Suppose for example you have several teams, each of whom needs a UX designer one day a week. At first sight one designer could service five teams, but if you did that there would still be queues.

Why?

Because of variability.

Some weeks Team-1 would need a day-and-a-half of the designer; sure, some weeks they would need the designer for less than a day, but any variability would cause a ripple – or “bullwhip effect”. The disruption caused by variation would ripple out over time.

You need to balance several factors here:

  • Utilisation
  • Variability
  • Wait time
  • Predictability

Even if demand and capacity are perfectly matched, variability will increase wait time, which will in turn reduce predictability. If variability and utilisation are high then there will be queues and predictability will be low.
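
Queuing theory captures the relationship between those factors in Kingman’s approximation (the “VUT” equation): expected wait is roughly a Variability term times a Utilisation term times the average Time a job takes. Here is a minimal sketch in Python – the coefficient-of-variation and service-time numbers are purely illustrative, not measurements from any real team:

    # Kingman's approximation (the "VUT" equation) for a single shared resource:
    #   wait ~= variability x utilisation-effect x average service time
    # All numbers below are illustrative only.

    def kingman_wait(utilisation, cv_arrivals, cv_service, avg_service_days):
        """Approximate average delay before work on a request starts."""
        variability = (cv_arrivals**2 + cv_service**2) / 2          # V
        utilisation_effect = utilisation / (1 - utilisation)        # U
        return variability * utilisation_effect * avg_service_days  # T

    # One shared specialist, average request of one day, moderately variable work:
    for u in (0.5, 0.75, 0.9, 0.96):
        print(f"utilisation {u:.0%}: expected wait ~{kingman_wait(u, 1.0, 1.0, 1.0):.1f} days")

With those numbers the expected wait is about one day at 50% utilisation, three days at 75%, nine days at 90% and roughly 24 days at 96% – utilisation creeps up linearly, waiting time does not.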

  • If you want 100% utilisation then you have to accept queues (delay) and a loss of predictability: ever landed at Heathrow airport? The airport runs at over 96% utilisation; there isn’t capacity to absorb variability so queues form in the air.
  • If you want to minimise wait time with highly variable work you need low utilisation: why do fire departments have all those fire engines and fire fighters sitting around doing nothing most days?

Running a business, especially a service business, at 100% utilisation is crazy – unless your customers are happy with delays and no predictability.

One of the reasons budget airlines always use the same type of plane and one class of seating is to limit variation. Only as they have gained experience have they added extras.

Anyone who tries to run software development at 100% is going to suffer late delivery and will fail to meet end dates.

How do I know this?

Because I know a little about queuing theory. Queuing theory is a theory in the same way that Einstein’s Theory of Relativity is a theory, i.e. it is not just a hunch, it’s proven – in both cases by mathematics. If you want to know more I recommend Factory Physics.

So, what utilisation can you have?

Well, that is a complicated question. There are a number of parameters to consider and the maths is very complicated. So complicated in fact that mathematicians usually turn to simulation. You don’t need to run a simulation because you can observe the real world, you can experiment for yourself. (Kanban methods allow you to see the queues forming.)

That said: 76% (max).

Or rather: in the simplest queuing theory models queues start to form (and predictability suffers) once utilisation is above 76%.

Given that your environment is probably a lot more complicated than the simplest models, you probably need to keep utilisation well below 76% if you want short lead times and predictability.
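
If you would rather see queues form than take the maths on trust, a toy simulation of a single shared specialist (think of the UX designer above) makes the same point. This is only a sketch – the job sizes and arrival pattern are invented for illustration:

    import random

    def simulate_shared_specialist(utilisation, n_jobs=50_000, mean_job_days=1.0, seed=1):
        """Toy single-server queue: requests arrive at random and one specialist
        serves them in order. Returns the average wait (days) before work starts."""
        random.seed(seed)
        mean_gap = mean_job_days / utilisation   # arrival spacing chosen to hit the target utilisation
        arrival = 0.0                            # when the next request arrives
        free_at = 0.0                            # when the specialist next becomes free
        total_wait = 0.0
        for _ in range(n_jobs):
            arrival += random.expovariate(1 / mean_gap)                # next request
            start = max(arrival, free_at)                              # queue if the specialist is busy
            total_wait += start - arrival
            free_at = start + random.expovariate(1 / mean_job_days)    # job duration
        return total_wait / n_jobs

    for u in (0.5, 0.76, 0.9, 0.96):
        print(f"utilisation {u:.0%}: average wait ~{simulate_shared_specialist(u):.1f} days")

Run it and the average wait stays manageable up to about three-quarters utilisation, then climbs steeply – there is no slack left to absorb the variability.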

As a very broad rule of thumb: if you are sharing someone, don’t commit more than three-quarters of their time.

And given that, a dedicated specialist doesn’t look so expensive after all. Back to where I came in.

Read more? Join my mailing list – free updates on blog posts, insights, events and offers.


Continuous Digital & Project Myopia


This seems a little back to the future… those of you who have been following the evolution of Continuous Digital will know the book grew out of the #NoProjects meme and my extended essay.

I think originally the book title was #NoProjects then was Correcting Project Myopia, then perhaps something else and finally settled down to Continuous Digital. The changing title reflected my own thinking, thinking that continued to evolve.

As that thinking has evolved the original #NoProjects material has felt more and more out of place in Continuous Digital. So I’ve split it out – Project Myopia is back as a stand alone eBook and you can buy it today.

More revisions of Continuous Digital will appear as I refactor the book. Once this settles down I’ll edit through Project Myopia. A little material may move between the two books but hopefully not much.

Now the critics of #NoProjects will love this because they complain that #NoProjects tells you what not to do, not what to do. In a way I agree with them but at the same time the first step to solving a problem is often to admit you have a problem. Project Myopia is a discussion of the problem, it is a critique. Continuous Digital is the solution and more than that.

Splitting the book in two actually helps demonstrate my whole thesis.

For a start it is difficult to know when a work in progress, iterating, self-published, early release book is done. My first books – Business Patterns and Changing Software Development – were with a traditional publisher. They were projects with a start and a finish. Continuous Digital isn’t like that, it grows, it evolves. That is possible because Continuous Digital is itself digital, Business Patterns and Changing Software Development had to stop because they were printed.

Second, Continuous Digital is already a big book – much bigger than most LeanPub books – and since I advocate “lots of small” over “few big” it makes sense to have two smaller books rather than one large one.

Third, and most significantly, this evolution is a perfect example of one of my key arguments: some types of “problem” are only understood in terms of the solution. Defining the solution is necessary to define the problem.

The solution and problem co-evolve.

In the beginning the thesis was very much based around the problems of the project model, and I still believe the project model has serious problems. In describing a solution – Continuous Digital – a different problem became clear: in a digital age businesses need to evolve with the technology.

Projects have end dates, hopefully your business, your job, doesn’t.

If you like you can buy both books together at a reduced price right now.

Read more? Join my mailing list – free updates on blog posts, insights, events and offers.


Definition of Ready considered harmful


Earlier this week I was with a team and the discussion turned to “the definition of ready.” This little idea has been growing more and more common in the last couple of years and, while I like the concept, I don’t recommend it. Indeed I think it could well reduce Agility.

To cut to the chase: “Definition of ready” reduces agility because it breaks up process flow, assumes greater role specific responsibilities, introduces more wait states (delay) and potentially undermines business-value based prioritisation.

The original idea builds on “definition of done”. Both definitions are semi-formal checklists agreed by the team which are applied to pieces of work (stories, tasks, whatever). Before any piece of work is considered “done” it should satisfy the definition of done. So the team member who has done a piece of work should be able to mentally tick each item on the checklist. Typically a definition of done might contain:


  • Story implemented
  • Story satisfies acceptance criteria
  • Story has been seen and approved by the product owner
  • Code is passing all unit and acceptance tests

Note I say “mentally” and I call these lists “semi formal” because if you start having a physical checklist for each item, physically ticking the boxes, perhaps actually signing them, and presumably filing the lists or having someone audit them then the process is going to get very expensive quickly.

So far so good? – Now why don’t I like definition of ready?

On the one hand definition of ready is a good idea: before work begins on any story some pre-work has been done on the story to ensure it is “ready for development” – yes, typically this is about getting stories ready for coding. Such a check-list might say:


  • Story is written in User Story format with a named role
  • Acceptance criteria have been agreed with product owner
  • Developer, Tester and Product owner have agreed story meaning

Now on the other hand… even doing these things means some work has been done. Once upon a time the story was not ready; someone, or some people, worked on the story to make it ready. When did this happen? Getting this story ready has already detracted from doing other work – work which was a higher priority because it was scheduled earlier.

Again, when did this happen?

If the story became “ready” yesterday then no big deal. The chances are that little has changed.

But if it became ready last week are you sure?

And what if it became ready last month? Or six months ago?

The longer it has been ready the greater the chance that something has changed. If we don’t check and re-validate the “ready” state then there is a risk something will have changed and be done wrong. If we do validate then we may well be repeating work which has already been done.

In general, the later the story becomes “ready” the better. Not only does it reduce the chance that something will change between becoming “ready” and work starting but it also minimises the chance that the story won’t be scheduled at all and all the pre-work was wasted.

More problematic still: what happens when the business priority is for a story that is not ready?

Customer: Well Dev team story X is the highest priority for the next sprint
Scrum Master: Sorry customer, Story X does not meet the definition of ready. Please choose another story.
Customer: But all the other stories are worth less than X so I’d really like X done!

The team could continue to refuse X – and sound like an old style trade unionist in the process – or they could accept X, make it ready and do it.

Herein lies my rule of thumb:


If a story is prioritised and scheduled for work but is not considered “ready” then the first task is to make it ready.

Indeed this can be generalised:


Once a story is prioritised and work starts then whatever needs doing gets done.

This simplifies the work of those making the priority calls. They now just look at the priority (i.e. business value) of work items. They don’t need to consider whether something is ready or not.

It also eliminates the problem of: when.

Teams which practise “definition of ready” usually expect their product owner to make stories ready before the iteration planning meeting, and that creates the problems above. Moving “make ready” inside the iteration, perhaps as a “3 Amigos” session after the planning meeting, eliminates this problem.

And before anyone complains, saying “How can I estimate something that is not prepared?”, let me point out that you can. You are just estimating something different:


  • When you estimate “ready” stories you are estimating the time it takes to move a well formed story from analysis-complete to coding-complete
  • When you estimate an “unready” story you are estimating the time it takes to move a poorly formed story from its current state to coding-complete

I would expect the estimates to be bigger – because there is more work – and I would expect the estimates to be subject to more variability – because the initial state of the story is more variable. But it is still quite doable; it is an estimate, not a promise.

I can see why teams adopt a definition of ready, and I might even recommend it myself, but I’d hope it was a temporary measure on the way to something better.

In teams with broken, role based process flows then a definition of done for each stage can make sense. The definition of done at the end of one activity is the definition of ready for the next. For teams adopting Kanban style processes I would recommend this approach as part of process/board set-up. But I also hope that over time the board columns can be collapsed down and definitions dropped.

Read more? Join my mailing list – free updates on blog posts, insights, events and offers.


Documentation is another deliverable and 7 other rules


“Working software over comprehensive documentation.” The Agile Manifesto

Some have taken that line to mean “no documentation.” I have sympathy for that. Years ago I worked on railway privatisation in the UK. We (Sema Group) were building a system for Railtrack – remember them?

We (the programmers) had documentation coming out of our ears. Architecture documents, design documents, user guides, functional specifications, program specifications, and much much more. Most of the documentation was worse than useless because it gave the illusion that everything was recorded and anyone could learn (almost) anything at any time.

Some of the documentation was just out of date. The worst was written by architects who lived in a parallel universe to the coders. The system described in the official architecture and design documents bore very little resemblance to the actual code, but since the (five) architects never looked at the code the (12) programmers could do what they liked.

Documentation wasn’t going to save us.

But, I also know some documentation is not only useful but highly valuable. Sometimes a User Manual can be really useful. Sometimes a developer’s “Rough Guide” can give important insights.

So how do you tell the difference?

How do you know what to write and what to forego?

Rule #1: Documentation is just another deliverable

Time spent writing a document is time not spent coding, testing, analysing or drinking coffee. There is no documentation fairy and documentation is far from free. It costs to write and costs far more to read (if it is ever read).

Therefore treat documentation like any other deliverable.

If someone wants a document let them request it and prioritise it against all the other possible work to be done. If someone is prepared to displace other work for documentation then it is worth doing.

Rule #2: Documentation should add value

Would you add a feature to a system if it did not increase the value of the system?

The same is true of documentation. If it adds value then there is every reason to do it, if it doesn’t then why bother?

Rule #3: Who will complain if the document is not done?

If in doubt identify a person who will complain if the document is not done. That person places a value on the document. Maybe have them argue for their document over new features.

Rule #4: Don’t write documents for the sake of documents

Don’t write documents just because some process standard somewhere says a document needs to be written. Someone should be able to speak to that standard and explain why it adds more value than doing something else.

Rule #5: Essential documents are part of the work to do

There are some documents that are valuable – someone will complain if they are absent! User Manuals are one recurring case. I’ve also worked with teams that need to present documentation as part of a regulatory package to the Food and Drug Administration (FDA) and the like.

If a piece of work requires the documentation to be updated then updating the documentation is part of the work. For example, suppose you are adding a feature to a system that is subject to FDA regulation. And suppose the FDA regulation requires all features to be listed in a User Manual. In that case, part of the work of adding the feature is to update the user manual. There will be coding, testing and documentation. The work is not done if the User Guide has not been updated.

You don’t need a separate User Story to do the documentation. In such cases documentation may form part of the definition of done.

Rule #6: Do slightly less than you think is needed

If you deliver less than is needed then someone will complain and you will need to rework it. If nobody complains then why are you doing it? And what value is it? – Do less next time!

This is an old XP rule which applies elsewhere too.

Rule #7: Don’t expect anyone to read what you have written

Documentation is expensive to read. Most people who started reading this blog post stopped long ago, you are the exception – thank you!

The same is true for documentation.

The longer a document is the less likely it is to be read.
And the longer a document is the less of it will be remembered.

You and I do not have the writing skills of J.K. Rowling; few of us can hold a reader’s attention that long.

Rule #8: The author learns too

For a lot of documentation – and here I include writing, my own books, patterns, blogs, etc. – the real audience is the author. The process of writing helps the author, it helps us structure our thoughts, it helps us vent our anger and frustration, it forces us to rationalise, it prepares us for an argument, and more.

So often the person who really wants the document, the person who attaches value to it, the person who will complain if it is not done, and the person who will learn most from the document is the author.

Join my mailing list to get free updates on blog posts, insights, events and offers.


Principles of software development revisited


Summer… my traditional time for doing all that stuff that requires a chunk of time… erh… OK, “projects” – only they aren’t well planned and they only resemble projects in the rear-view mirror.

Why now? Why summer? – Because my clients are on holiday too so I’m quiet and not delivering much in the way of (agile) training or consulting. Because my family is away visiting more family. Thus I have a chunk of time.

This year’s projects include some programming – fixing up my own Twitter/LinkedIn/Facebook scheduler “CloudPoster” and some work on my Mimas conference review system in preparation for Agile on the Beach 2018 speaker submissions.

But the big project is a website rebuild.

You may have noticed this blog has moved from Blogger to a new WordPress site, and attentive readers will have noticed that my other sites, allankelly.net and SoftwareStrategy.co.uk have also folded in here. This has involved a lot of content moving, which also means I’ve been seeing articles I’d forgotten about.

In particular there is a group of “The Nature of Agile” articles from 2013 which were once intended to go into a book. Looking at them now I think they still stand, mostly. In particular I’m impressed by my 2013 Principles of Software Development.

I’ll let you read the whole article but here are the headlines:

Software Development Principles

  1. Software Development exhibits Diseconomies of Scale
  2. Quality is essential – quality makes all things possible
  3. Software Development is not a production line

Agile Software Principles

  1. Highly adaptable over highly adapted
  2. Piecemeal Growth – Start small, get something that works, grow
  3. Need to think
  4. Need to unlearn
  5. Feedback: getting and using
  6. People closest to the work make decisions
  7. Know your schedule, fit work to the schedule not schedule to work
  8. Some Agile practices will fix you, others will help you see and help you fix yourself

The article then goes on to discuss The Iron Triangle and Conway’s Law.

I think that essay might be the first time I wrote about diseconomies of scale. Something else I learned when I moved all my content to this new site is that the Diseconomies of Scale blog entry is by far my most read blog entry ever.

I’m not sure if I’m surprised or shocked that now, four years later, these still look good to me. I really wouldn’t change them much. In fact these ideas are all part of my latest book Continuous Digital.

I’m even starting to wonder if I should roll those Nature of Agile essays together into another book – but that’s bad of me! Too many books….

Join Allan Kelly’s mailing list to get free updates on blog posts, insights and events.


Programmer’s Rorschach test

I recently added the picture above to Continuous Digital for a discussion of teams. When you look at it what do you see?

An old style structure chart, or an organization chart?

It could be either and anyone who knows of Conway’s Law shouldn’t be surprised.

When I was taught Modula-2 at college this sort of structure chart was considered superior to the older flow charts. This is functional decomposition: take a problem, break it down into smaller parts and implement them.

And that is the same idea behind the traditional hierarchical organizational structure. An executive heads a division; he has a number of managers under him who manage the work; each of these manages several people who actually do the work (or perhaps manages more managers who manage the people who do the work!)

Most organizations are still set up this way. It is probably unsurprising that 50 years ago computer programmers copied this model when designing their systems – Conway’s Law, the system is a copy of the organization.

Fast forward to today: we use object oriented languages and design but most of our organizations are still constrained by hierarchical structure, and that creates a conflict. The company is functionally decomposed but our code is object oriented.

The result is conflict and in many cases the organization wins – who hasn’t seen an object oriented system that is designed in layers?

While the conflict exists both system and organization under perform because time and energy are spent living the conflict, managing the conflict, overcoming the conflict.

What would the object-oriented company look like?

If we accept that object oriented design and programming are superior to procedural programming (and in general I do although I miss Modula-2) then it becomes necessary to change the organization to match the software design – reverse Conway’s Law or Yawnoc. That means we need teams which look and behave like objects:

  • Teams are highly cohesive (staffed with various skills) and lightly coupled (dependencies are minimised and the team take responsibility)
  • Teams are responsible for a discrete part of the system end-to-end
  • Teams add value in their own right
  • Teams are free to vary organizational implementation behind a well defined interface
  • Teams are tested, debugged and maintained: they have been through the storming phase, are now performing and are kept together

There are probably some more attributes I could add here, please make your own suggestions in the comments below.

To regular readers this should all sound familiar; I’ve been expounding these ideas for a while. They draw on software design and Amoeba management, and I discuss them at greater length in Xanpan, The Xanpan Appendix and now Continuous Digital – actually, Continuous Digital directly updates some chapters from the Appendix.

And like so many programmers have found over the years, classes which are named “Manager” are more than likely poorly named and poorly designed. Manager is a catch all name, the class might well be doing something very useful but it can be named better. If it isn’t doing anything useful, then maybe it should be refactored into something that is. Most likely the ManagerClass is doing a lot of useful stuff but it is far from clear that it all belongs together. (See the management mini-series.)
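
To make that concrete, here is a purely hypothetical sketch – the class and method names are invented, not taken from any real system – of the kind of catch-all “Manager” being described, and one way it might be split into better-named, more cohesive pieces:

    # A hypothetical catch-all class: useful code, but "Manager" hides three unrelated jobs.
    class OrderManager:
        def price_order(self, order): ...         # pricing rules
        def email_confirmation(self, order): ...  # customer notification
        def archive_order(self, order): ...       # persistence / record keeping

    # The same responsibilities named for what they do, each with one reason to change.
    class OrderPricer:
        def price(self, order): ...

    class CustomerNotifier:
        def send_confirmation(self, order): ...

    class OrderArchive:
        def store(self, order): ...

The org-chart question is the same one: is the “manager” doing one well-defined job that could be named, or several loosely related ones bundled together?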

Sometimes managers or manager classes make sense, however both deserve closer examination. Are they a vestige of the hierarchical world? Do they perform specialist functions which could be packaged and named better? Perhaps they are necessary; perhaps they are needed to smooth the conflict between the hierarchical organization and the object oriented world.

Transaction costs can explain both managers and manager classes. There are various tasks which require knowledge of other tasks, or access to the same data. It is cheaper, and perhaps simpler, to put these diverse things together rather than pay the cost of spreading access out.

Of course if you accept the symbiosis of organization and code design then one should ask: what should the organization look like when the code is functional? What does the Lisp, Clojure or F# organization look like?

And, for that matter, what does the organization look like when you program in Prolog? What does a declarative organization look like?

Finally, I was about to ask about SQL, what does the relational organization look like, but I think you’ve already guessed the answer to this one: a matrix, probably a dysfunctional matrix.



Medium range capacity planning


My last blog, “Create Beautiful Roadmaps in Minutes – NOT!”, highlighted how the term “roadmap” is used and abused. While I’m probably in the minority in seeing a roadmap as a long range scenario planning exercise to enhance learning, there isn’t really a consensus on what a roadmap is…

To some a roadmap is a plan of how a product will evolve in the next few years. To others a roadmap is a set of promises with dates made to customers, higher management and others. Most of the roadmaps I see are little more than bucket lists with dates, fantasy dates at that.

Increasingly “roadmap” is used to describe a “release plan”, that is, a list of release dates and for each date a list of features that will be included in that release. Since these are usually based on estimates they are doomed to failure because very few teams can produce meaningful estimates.

Release plans made sense when releases were occasional, say every quarter, but once releases become fortnightly they lose their logic. What is the point of planning releases if you release when the feature is ready? Many times a day?

So imagine how my heart sank a few weeks ago when a client said: “And I’d like you to run a road mapping session for the team next week.”

Worse still, there was no escape – I was visiting a client in Australia. I suddenly wondered if I’d done the right thing flying around the world!

So although I smiled and nodded, inside I thought “Bugger!”

A day or two later I enquired about expectations, what did they hope to get from this exercise? Fortunately the answer was “An understanding of what the team will be doing in the next six months.”

OK, this wasn’t a full blown scenario planning exercise; it was a mid-range scheduling plan. Normally I see little point in scheduling more than three months in advance – at most! Since this team was changing their work processes, personnel had been in a state of flux and there was no historical data, this was going to be difficult.

My next stop was the Product Owner. He had a long list of things the customers – internal and external – wanted. Most of these were little more than headlines and they were all a long way from stories or epics, so there was little to go on. The list of ideas was so long that saying “No” and deciding which ideas not to work on was clearly going to be essential.

Given the vagueness and unreliability of the inputs one could be forgiven for despairing, but instead I saw opportunity. The problem was not so much one of road mapping or scheduling but of deciding how the team would divide their time up for the next six months. This was big ticket prioritisation.

The vagueness of the requirements and specifications meant that once the time was divided up the team would mostly have a free hand in deciding what needed to be done and what they could do in the time allowed.

Without performance data there was no way estimation could be a reliable guide so there was no point in even thinking about it. But if one was to try estimation one would need something to estimate, which means the requirements would need fleshing out and with so many possible options that would be a lot of work. Not only did we not have the time to do such work but since we knew some items would not make the cut fleshing them out, let alone estimating them, was work which would be wasted.

Just as this was looking like mission impossible I remembered Chris Matts’ capacity planning approach. The roadmap request was actually a capacity planning exercise. Chris’ approach would need a few tweaks but it might just work…

In Chris’ version the product owners estimated the work. At first this sounds awful, product people estimating how long technical work will take! Help!

While Chris describes it that way, I’ve never seen these “estimates” as real effort estimates. I’ve always viewed them as “bids.” Each product owner wants to get as much time as they can for their chosen work. But they also know that the higher their bid the less likely they are to get what they ask for. So in “estimating” they are both bidding and self-limiting. Once they have a work allocation they will have to adjust what they ask for to fit it into the time.

Rather than ask the product folk to estimate or bid I decided to ask them how many nominal days they would like to allot to each item. How would they divide the capacity between competing requests?

Secondly, I decided to ask them to put the requested items in order. Thus a line item could have a very high allocation of time but come late in the scheduling order; conversely a small effort item could be high up the priority list.

So…

The first task was to knock the wish list into shape as a series of line items.

For each line item we gave a short description and statement of anticipated benefits in a spreadsheet. Each item had a box for “days” and “order.”

Next we took a stab at calculating how many days team P had: six months for 10 people equated to approximately 1280 working days.

My plan was to have each manager allocate the 1,280 days between the line items and assign each item a priority position. Then we would average the numbers and use the combined result as the basis of a discussion before finalising the allocations. Finally we would turn the allocated days into percentages and hand them to the team together with the priority order.
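
A minimal sketch of that arithmetic – the line items, bids and numbers below are invented for illustration, not the client’s actual list:

    # Each manager "bids" nominal days against the same six-month capacity.
    # We average the bids and convert to percentages of the team's time.
    # All figures are made up for illustration.
    CAPACITY_DAYS = 1280   # roughly 6 months x 10 people

    bids = {                             # line item -> one bid per manager
        "Customer portal":   [550, 650, 600],
        "Reporting rewrite": [400, 450, 350],
        "Tech investment":   [250, 250, 250],
        "API clean-up":      [150, 250, 200],
    }

    averages = {item: sum(days) / len(days) for item, days in bids.items()}
    total_bid = sum(averages.values())   # may well exceed capacity...

    for item, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
        share = avg / total_bid          # ...percentages scale everything back proportionally
        print(f"{item:18} {avg:5.0f} days bid -> {share:5.1%} of team time"
              f" (~{share * CAPACITY_DAYS:4.0f} days)")

Because the output is percentages of the team’s time, over-bidding takes care of itself: if the managers collectively ask for more days than exist, every allocation is simply scaled back in proportion – the fallback described below.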

So far, so good. The meeting commenced: three managers, three techies, one team product owner and me. (Only the managers got to vote.)

To my surprise the managers readily accepted the approach: I had feared they would want detail on the line items and effort estimates.

While I had intended to hand out the printed spreadsheet, collect the numbers and average them, someone decided we should put them on screen. So the whole spreadsheet was projected on screen and it became a collective exercise.

Very quickly about half the items were dismissed and allocated zero time. They were all good ideas but given the capacity constraints they clearly weren’t going to win any time.

Some of the remaining items were split up and others combined. Beforehand I had worried whether we had the right granularity and even the right items, but the group quickly addressed such concerns without me needing to mention them.

(Anyway, I didn’t want fine granularity because that would hinder the team’s ability to decide what it wanted to do.)

Next each line item was discussed and assigned an order. I had intended to order them strictly 1, 2, 3, … but these quickly became buckets: there were several #1 items, several #2s and finally some #3s.

With the “order buckets” decided we moved on to allocating nominal days. We decided to simply work with 1,000 nominal days. I quickly calculated how many points equated to one developer for one sprint, how many points equalled the whole team for one sprint, and a couple of other reference points to help guide the process.

The days were allocated – including an allowance for technical investment – and we were at 1300 nominal days committed, 300 over budget. Now things got hard so we stopped for lunch.

After lunch I expected a fight. But I knew that if the worst happened, converting to percentages would scale everything back proportionately – we were 30% over budget. Moving to percentages would force any cuts the meeting couldn’t make.

Maybe it was the effect of lunch but these hard decisions were taken quickly, to my immense surprise. The managers concerned could easily see the situation. Once the capacity constraint and the work requests were clear the competition became clear too.

Most of all I had succeeded in preventing managers from arguing over who was doing what. There was no attempt to say “Peter will do X and Fred will do Y.” The team had a free hand to decide how to organise themselves to meet the managers’ priorities.

I was delighted with the way the whole exercise went, having tried capacity planning once I expect I’ll be doing it again – I may even build it into some of my training courses!

What do you think? Anyone else done this? – please leave me a note in the comments



Create Beautiful Roadmaps in Minutes – NOT!


Subject: Create Beautiful Roadmaps in Minutes

Body:

Hi << Test Name >>,

I hope you enjoyed reading our guide to product roadmaps.

If you haven’t already built a roadmap in _____, I’d encourage you sign up for a free trial.

Why did this email annoy me so? – apart from the failed name substitution that is.

This e-mail, perhaps spam or perhaps something I did sign up to, landed in my e-mail box the other day and set my blood boiling.

Roadmaps should most definitely NOT be created in minutes. Roadmaps should take time; roadmaps should take lots of people’s time.

  • Roadmaps should NOT be a list of features with dates
  • Roadmaps are NOT promises
  • Roadmaps based on work effort estimates will undoubtedly fail
  • Roadmaps describe what MIGHT happen in the future to your product(s) and company
  • Roadmaps should be informed by known events (upcoming shows, purchase needs of potential customers, legislation changes, etc.)
  • Roadmaps should consider how technology changes might play out
  • Roadmaps should consider how your technology can help your (potential) customers with the problems they will have tomorrow
  • Roadmaps should be informed by company strategy and roadmaps will feedback to company strategy
  • Roadmaps should be flexible because the future will be different to any predictions you make

The value of roadmaps is not in what they say. There is a lot of value in what they don’t say – the things that the organization will not do. But the real value of the roadmap is in the creation of the roadmaps.

The act of creating the roadmap brings key people together and exposes them to information about the future.

Peter Drucker once said: “We have no facts about the future.” But there are probabilities: population change is one thing that can be accurately predicted years ahead (e.g. we know how many 20 year olds there will be in 10 years’ time). Most legal changes can be seen coming at least a few months out. There are things we can take a view on, things we know we don’t know, and things where we need to keep options open.

When key people are brought together they can exchange information and views. They can share their views of the future, they can talk about how they see the future, they can build a shared model and out of these can come a possible future, a possible scenario, a roadmap, which shows how your product will add value to some customers.

Because that’s what a roadmap is: a scenario plan, one or more possible descriptions of the future for your product.

Like so much planning it is the exercise of creating the plan that is the valuable bit rather than the resulting plan itself. This is especially true for roadmaps which are attempting to look far into the future. Roadmaps need to be a mix of prediction and opinion.

Roadmaps should bring people together for a discussion, to hear each other’s voices and opinions; creating the roadmap should be a discussion.

Reducing roadmaps to a few minutes work with a tool destroys the value of the exercise. Any organization that does this has a pretty screwed up idea of collaboration.

Next time you need to create a roadmap:

  • Gather relevant information about the future
  • Look at company cycles, look at competitor and customer cycles, what can you see?
  • Look at technology trends
  • Bring together a mixed group of people: senior executives, technologists, sales people and product managers
  • Spend time building your view of the world
  • Consider how your product can help that world and help your customers solve the problems they will have

And repeat the process regularly, say every six months.

You might even build several scenarios about the future and consider the implications for your product in each. Such scenario roadmaps are there to shape people’s view of the future. Your scenarios will probably be wrong and the roadmap will play out differently in the end, but allowing people to consider alternative futures challenges them to think differently. When real events happen the fact that people have considered alternatives will mean they are better prepared – even though the future will be different from all your scenarios!

(Museums of the future, future memories, retrospectives and change is a blog post about future memories from 10 years back.)


