They say a picture is worth a thousand words, but honestly, I think they're worth much more. They help you build a common understanding and remove much of the complexity and nuance that comes with written and verbal language.
I wanted to share 6 diagrams I find myself frequently using when discussing Product Management ideas. These drawings are well-received and convey their point well. The 6 diagrams are:
- The “Product Manager Bottleneck”
- The “Delivery Size Throughput”
- The Classic “Waterfall vs Agile”
- The “Initiative Size, Risk and Leadership Involvement”
- The “Knowledge Silos”
- The “Segmentation Value”
The “Product Manager Bottleneck”
One of the most common mistakes I see Product Managers make is feeling the need to be a part of every discussion. I understand there is a positive intention behind it — you’re the PM and you need to be across everything in case you’re asked about it.
Unfortunately, this has many drawbacks. First, it’s not practical. You will rapidly become overwhelmed — negatively impacting not only the effectiveness of your team but also your own well-being. Trust me, I’ve done it. Second, you undermine the autonomy of your team.
Great Product Managers know when to be involved and when to step back. They know when to let conversations happen without them. The purpose of an autonomous team is to remove as many dependencies as possible.
In the example below, imagine a situation on the left where the Web Engineer asks about a tracking concept to be implemented. The PM approaches the Product Analyst, who says it should match the iOS implementation. The PM then approaches the iOS Engineer to collect the details and goes back to the Web Engineer to explain them. Not only does this add unnecessary work for the PM, but it also delays the resolution the Web Engineer is seeking.
In comparison, in the example on the right, the Web Engineer directly approaches the Analyst, who explains the situation, and they then align with the iOS Engineer. Note how many fewer interactions (red arrows) need to happen.
If we extend this example and add a couple more topics (green and blue), which is probably more representative of the number of concurrent initiatives a team will have, the number of interactions rapidly increases, and every one of them depends on the PM.
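If you like seeing the arithmetic behind the picture, here is a toy sketch. The counting rules are my own simplification (every pair of specialists must align once per topic, and a PM in the middle turns each exchange into a round trip), not a formal model:

```python
# Toy model of the interaction counts in the diagram (illustrative only;
# the exact numbers depend on how you choose to count an "interaction").
# Assumption: with the PM in the middle, every exchange between two
# specialists becomes two exchanges (specialist -> PM, PM -> specialist).

def interactions(topics: int, specialists_per_topic: int, via_pm: bool) -> int:
    """Count the exchanges needed for every specialist pair to align on each topic."""
    pairs_per_topic = specialists_per_topic * (specialists_per_topic - 1) // 2
    per_pair = 2 if via_pm else 1  # the PM relay doubles each exchange
    return topics * pairs_per_topic * per_pair

# Three topics, three specialists each (as in the three-colour diagram):
print(interactions(topics=3, specialists_per_topic=3, via_pm=True))   # 18
print(interactions(topics=3, specialists_per_topic=3, via_pm=False))  # 9
```

The absolute numbers don't matter; the point is that the PM-mediated count grows twice as fast, and all of it lands on one person's calendar.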
How to use this: If you’re constantly overwhelmed, reflect on how your team interacts with each other — do you need to be in every meeting? Does your team operate the same when you’re on vacation or does everything stop? If it’s the latter, you need to make a conscious effort to facilitate interactions without you. (More extensive article on this topic coming soon!)
The “Delivery Size Throughput”
This is one of my favourite diagrams for explaining team throughput and the size of initiatives being worked on. I often come across frustrations from both business partners and Product Teams about their time-to-market — they feel it's too slow.
The problem is usually caused by only working on large chunks of work (the funnel on the left). As a result, a team can only work on one topic at a time. This approach may be acceptable if we're certain something is the right thing to build, but that is very rarely the case. If something drops out of the bottom of the pipeline and doesn't work, you've spent far more effort than necessary to reach that learning.
The agile approach promotes smaller chunks of work because value is delivered sooner and with less risk. The funnel on the right gives us much more flexibility. Smaller pieces of work (blue dots) can move through the funnel at a rapid pace to be validated. If they're successful, we can invest more effort (small pink circle); if a piece is unsuccessful, we iterate again but with limited investment. Each validation allows us to keep increasing our investment. This results in many small projects, some medium projects and a few large projects, offsetting risk and improving ROI.
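For the mathematically inclined, the funnel argument reduces to a back-of-envelope calculation. The 30% success rate and the effort units below are invented purely to show the shape of the argument, assuming (hypothetically) that a bet's chance of validating doesn't depend on its size:

```python
# Back-of-envelope model of the two funnels. If each bet validates with
# probability p, the expected number of attempts until one validates is
# 1/p (a geometric distribution), so expected effort is cost-per-bet / p.

def expected_effort_to_validate(effort_per_bet: float, p_success: float = 0.3) -> float:
    """Expected total effort spent until one bet finally validates."""
    return effort_per_bet / p_success

print(expected_effort_to_validate(10.0))  # large bets: ~33.3 units per validated learning
print(expected_effort_to_validate(1.0))   # small bets:  ~3.3 units per validated learning
```

Same odds, one tenth of the cost per learning — which is exactly why the right-hand funnel lets you earn the right to make the bigger investments.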
How to use this: Look back on what you’ve worked on over the last few months (not just what you shipped). Were all the topics large in scale and complexity or were there a mix of ongoing pieces of work? These may all be in the same theme or different. Assign a basic size to each piece of work (S, M, L) and reflect on what your funnel would look like.
The Classic “Waterfall vs Agile”
There have been various versions of this around the internet, but I wanted to emphasise the point about effort. Many Product organisations are not explicit about the fact that the time of the team itself is an investment. If all Product Managers owned the Profit & Loss statements for their product, the wage line item would often be the largest expense. You should look for every opportunity to increase Profit (Return).
When teams ship after a large release, you hope to get an immediate hit of value. But even if you assume you release the perfect solution with no technical issues the first time (spoiler alert: it's unlikely), a large investment of your team's time has been made without returning anything (left diagram).
By releasing small and often, you're shipping incremental pieces of work. This begins to pay off because you can realise value sooner and learn from mistakes faster. Value consistently trends above effort in the second graph, and this is what teams should be striving for.
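To make the two effort-versus-value curves concrete, here is a tiny sketch. The per-sprint numbers are invented for illustration, assuming each shipped increment starts returning a fixed trickle of value from the moment it ships:

```python
# Hypothetical model of the two graphs: both teams spend ten sprints of
# effort, but one ships everything at the end while the other ships an
# increment every sprint. Each increment returns 0.4 value units per
# sprint from the sprint after it ships (all numbers are made up).

def cumulative_value(release_sprints: list, horizon: int, rate: float = 0.4) -> float:
    """Total value accrued by `horizon`, given the sprints in which releases ship."""
    return sum(max(0, horizon - s) * rate for s in release_sprints)

big_bang = cumulative_value([10], horizon=12)                 # one release at sprint 10
incremental = cumulative_value(list(range(1, 11)), horizon=12)  # ship every sprint
print(big_bang, incremental)  # the incremental line pulls far ahead of the big bang
```

The exact rate doesn't matter: any value that starts accruing earlier compounds over the horizon, which is why the value line crosses above the effort line so much sooner in the second graph.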
How to use this: I often use these diagrams as a way of explaining why “just adding one more thing” to the scope probably isn’t in our organisation’s best interest. It’s also extremely helpful to remind yourself and your team of the commercial aspects of your roles.
The “Initiative Size, Risk and Leadership Involvement”
There are two aspects to this chart. On the left is the initiative pyramid itself. The width of the pyramid shows how many initiatives should be ongoing at once: the wide base signifies many topics and the narrow point means few. To make this viable, the higher-risk topics sit at the top (few) with lower-risk ones at the bottom (many).
On the right is a gauge for leadership involvement. The wider the gauge, the more involvement should be expected or required; in this case, leadership should be consulted more. The narrower the gauge, the less involvement there should be.
You should be running many tiny initiatives: items like copy or image tests. They are very low risk and can be constantly optimised. This isn’t where your Leadership team is likely to want to spend time, nor is there much value in them doing so. However, a topic near the top of the pyramid carries a higher degree of risk (perhaps launching a brand-new product), and you’re going to want their involvement and support.
How to use this: I have found this a great tool to use both with leadership and teams. It explains why leadership absolutely must be involved in some topics and probably shouldn’t be involved in others.
The topic of Leadership Involvement is quite complex. I explored it in more detail in my article Why the ‘Spotify Model’ won’t solve all your problems.
The “Knowledge Silos”
This diagram came from a quarterly team health check a colleague of mine used to run. It’s a fantastic way of visualising the impact of department and team silos on knowledge sharing. It isn’t possible to have complete knowledge of all areas, but a conscious awareness of your lack of knowledge will help you operate with a focus on communication.
You as an individual have a very high awareness of what you’re working on. This awareness decreases quite rapidly the further away a team is from you.
How to use this: Remind yourself and others that your organisation is constantly slowed by the fact that no one can know everything. Even more usefully, when there is a conflict or a clash, it’s probably due to a lack of information and not an intention of malice (see: Misalignment, not Malice).
The “Segmentation Value”
One of the common mistakes I see when companies view initiatives and experimentation is an optimisation for the average instead of a segment. My favourite example of “average” skewing perceptions is, “the average human has fewer than two legs”.
When hypotheses and focus areas are too broad, they naturally limit the impact teams can have. Essentially, you’re trying to appease many people at once, and it’s unlikely to work. In the diagram below, Case 3 (on the right), where there is no significant change, is the most common.
The diagram below looks at 3 hypothetical experiments: in the first there was an uplift, in the second a drop, and in the third no change. However, when you dig into these results you will often find further opportunities or limitations. In Case 1, although the experiment was successful overall, Segment B has actually underperformed. In this situation, I’d look at understanding why this segment underperformed and perhaps removing it from the rollout.
Similarly, in Case 3, although overall there has been no significant change, Segments B and C are showing positive results. These are offset by the drops in Segments A and D, so there is a good line of inquiry to explore further.
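As a toy illustration of the Case 3 pattern (all segment names, sizes and numbers below are hypothetical), an overall average can read as flat while every segment underneath it is moving:

```python
# Hypothetical experiment mirroring "Case 3": the overall uplift is zero,
# yet individual segments move in opposite directions. All figures invented.

results = {  # segment -> (users, conversion uplift in percentage points)
    "A": (1000, -2.0),
    "B": (1000, +2.5),
    "C": (1000, +1.5),
    "D": (1000, -2.0),
}

total_users = sum(users for users, _ in results.values())
overall = sum(users * uplift for users, uplift in results.values()) / total_users
print(f"overall uplift: {overall:+.2f} pp")  # +0.00 pp: "no significant change"

for segment, (users, uplift) in results.items():
    print(f"segment {segment}: {uplift:+.1f} pp")
```

Reported only at the top level, this experiment would be shelved as a null result; segmented, it hands you two audiences to roll out to and two to investigate.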
How to use this: Be specific with your hypotheses and dig into your results to see if there are additional opportunities or drawbacks. I highly recommend anything by Rik Higham when it comes to hypotheses and experimentation — see Experimentation Hub. Use User Research and demographic data to build personas (via Nikki Anderson) so you can better understand who you are targeting.
That’s it. I hope the above diagrams help you visualise some concepts or articulate them to those around you! Please get involved in the discussion, either by adding comments below or reaching out to me directly.
Disclaimer: Although I created all the graphics above, I can’t claim to have invented all of these concepts. We are fortunate that the Product Management community is open and shares knowledge through presentations, webinars, Medium posts and Podcasts, and many of these diagrams are an amalgamation of all of those sources. If you know of any original sources for the above — please let me know and I’ll happily credit!
Originally posted on Curtis’ Medium page