6. Identify key intersection points, critical paths, and integration strategies

Acknowledge What Matters to Different Perspectives

Complex initiatives usually have many stakeholders, each with their own interests and concerns. If you want them to behave as a team, start by acknowledging those interests and concerns.

Sponsors are usually concerned with these things:

  • Can they do it in time to meet the opportunity window?

  • Will it cost more than it is worth?

End users have this concern:

  • Will the product actually help me to do what I want to do?

And if the end user is a paying customer, then of course they are also concerned with cost and support.

All these concerns are valid, and a product manager owes it to the stakeholders to update them on progress with respect to their concerns. If sponsors lose confidence, then funding and support will dry up. If prospective users lose confidence, then the market could evaporate as customers choose alternatives.

A product manager therefore needs to aggregate the above concerns into the following perspective:

  1. Will we succeed in creating each promised capability that we need for our next release?

  2. Will we meet milestone deadlines that we promised our stakeholders?

This sounds like a waterfall approach, but it is not: it is based neither on tasks nor on a complete system delivery. Rather, we are describing the demonstration of capabilities as they are created.

Care must be taken when promising which capabilities will be established, and by when, because stakeholders share those expectations with others. Meeting promised capabilities by promised dates is important, so don’t over-promise.

This means that a product manager is very concerned with dependencies, because if a capability depends on another, and that other one slips, then there is a domino effect, and promises will not be kept. In step 7 we will explain more fully why it is critical to focus on capabilities, rather than features. That gives you the tactical flexibility to alter features if that helps to realize the desired capability.

Keep the Stakeholders Involved

The key to successful relationships with stakeholders is to establish a partner-like relationship. That is not always possible, but it is worth the effort.

Some ways to make relationships with stakeholders more partner-like include:

  • Be transparent about progress: never spin situations. That does not mean that all details need to be shared, but do not make progress appear better than it is.

  • Educate them on the process. They need to understand that the engineering process is one in which most of the time things are not working. Progress is a trajectory, marked by milestones in which each capability is established.

  • Invite them to major test events, including ones that you expect to fail. Then interpret those events for them, so that they can see that the whole point of testing is to learn, which eventually leads to success.

  • Give them access to up-to-date information on progress.

  • Update them personally on the things that they are most concerned about. Their concerns are not interference—their concerns are valid.

  • Try to shift them to a mindset in which they define smaller product increments, instead of a big release. Those increments should be capability-based, rather than feature-based: the increments should be usable or marketable product versions if at all possible.

  • Become an investor. Ideally, every supplier should have money at risk in the initiative. That is the best way to get everyone into the mindset of “being in this together”. If you are building something for someone, consider pricing it below cost, but with an arrangement where you benefit greatly if the initiative succeeds. That makes you a partner, by definition. If all suppliers are partners, none will have an incentive to stretch out the work or inflate costs.

Identify Key Capability Metrics

Each capability usually has one or a very small number of metrics that indicate progress in attaining the capability. In our example, the Run capability might be measured by the speed at which someone moves. We know they can Walk, since that is a precursor capability; but how fast can they go? If they exceed a certain speed, then they are running. The metric you choose should be indicative of the actual usefulness of the capability.

A capability metric is trailing: it is a kind of technical outcome. There might also be leading metrics that reflect your approach to implementing the capability. For example, if you believe that a wider gait is critical for the Run capability, you might measure that. But it is not an outcome, and it reflects your current approach to achieving the capability. The metric that matters is the outcome.

Each capability’s key metric—its “one metric that matters”—should be prominent in people’s minds and also on the product’s development dashboard. And once the capability is achieved, that metric transitions to being a performance metric that can be continuously improved on.
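The Run example above can be sketched as a simple outcome-metric check. The threshold value and the shape of the measurement data are assumptions for illustration:

```python
# A minimal sketch of a capability outcome metric, using the Run example.
# The threshold (m/s) and the measurement source are illustrative assumptions.

RUN_SPEED_THRESHOLD_MPS = 2.5  # assumed speed above which movement counts as running

def run_capability_achieved(measured_speeds_mps: list[float]) -> bool:
    """The Run capability's outcome metric: best observed speed.

    The capability counts as achieved once the best observed speed
    reaches the threshold; afterward, the same metric becomes a
    performance metric to improve on continuously.
    """
    return bool(measured_speeds_mps) and max(measured_speeds_mps) >= RUN_SPEED_THRESHOLD_MPS
```

The same function then doubles as the dashboard check: before the capability exists it reports progress toward the threshold, and after, it tracks ongoing performance.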

Continuously Revise the Critical Path

A critical path is the sequence of activities that sets a lower limit on when a capability can be realized. It is the “long pole in the tent”. It is essential to pay attention to critical paths: otherwise you cannot properly balance priorities when some capabilities take longer than others. It might then be necessary to leave out some features of a capability if completing them would otherwise delay everything.

See the topic Watching Critical Path Dependencies for more information.
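Recomputing the critical path from a dependency graph can be done mechanically. A minimal sketch, using hypothetical capabilities and durations (the names and numbers are illustrative, not from the source):

```python
from functools import lru_cache

# Illustrative capability durations (in weeks) and dependencies.
durations = {"Walk": 10, "Run": 5, "Carry": 8, "Deliver": 4}
depends_on = {"Run": ["Walk"], "Carry": ["Walk"], "Deliver": ["Carry", "Run"]}

@lru_cache(maxsize=None)
def earliest_finish(cap: str) -> int:
    """Longest-path finish time: a capability cannot finish sooner than
    its slowest prerequisite chain plus its own duration."""
    prereqs = depends_on.get(cap, [])
    return durations[cap] + max((earliest_finish(p) for p in prereqs), default=0)

def critical_path(cap: str) -> list[str]:
    """Walk back through the slowest prerequisites to find the 'long pole'."""
    prereqs = depends_on.get(cap, [])
    if not prereqs:
        return [cap]
    slowest = max(prereqs, key=earliest_finish)
    return critical_path(slowest) + [cap]
```

Rerunning this whenever durations or dependencies change is what “continuously revise” means in practice: the long pole can shift as estimates are updated.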

Manage Dependencies

We can see from the dependency graph that the Carry capability is dependent on the Walk capability. After all, carrying something implies holding it while in motion. If one is developing these two capabilities, one approach is to first develop the Walk capability, and then develop the Carry capability.

That is really slow, however: it means that you will not start working on Carry until Walk has been demonstrated. Instead, you could work on each in parallel, anticipating how each will work, and designing them to be compatible. Ideally, you will design tests to make sure that the two capabilities will integrate. For example, you could have a test that ensures that Carry uses Walk in the way that Walk expects to be used. You could also have a test that verifies that Walk works in the way that Carry expects.

These tests will fail until both capabilities have been created. However, your Carry capability can use a “mock” for Walk—that is, it can use a simplified version of Walk that pretends to work. This might be, say, a puppet that simulates walking, so that Carry can be tested.

The advantage of that approach is that development on the different capabilities does not have to wait. They can all be worked on in parallel, and integrated for real as they each become available. Until then, mocks can be used. The result is that the overall critical path becomes greatly reduced, because developers can work on the many capabilities in parallel, frequently integrating what they have to make sure that the capabilities work together as expected.
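The mock-based approach can be sketched as follows. The Walker interface and the Carrier class here are assumptions for illustration, not part of the source:

```python
# A sketch of testing the Carry capability against a mock of Walk,
# before the real Walk capability exists. Interface names are assumed.

class MockWalker:
    """Pretends to walk: a simplified stand-in for the real Walk capability."""
    def __init__(self):
        self.steps_taken = 0

    def step_forward(self) -> None:
        self.steps_taken += 1  # no real motion; just record the request

class Carrier:
    """The Carry capability under development: holds a load while moving."""
    def __init__(self, walker):
        self.walker = walker
        self.load = None

    def carry(self, load, steps: int) -> None:
        self.load = load
        for _ in range(steps):
            # Contract: Carry drives Walk one step at a time,
            # the way Walk expects to be used.
            self.walker.step_forward()

carrier = Carrier(MockWalker())
carrier.carry("package", steps=3)
assert carrier.walker.steps_taken == 3  # Carry used Walk as Walk expects
```

When the real Walk capability arrives, it replaces `MockWalker` behind the same interface, and the same contract tests verify the real integration.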

There are many different techniques for managing dependencies between capabilities. We have described only one. A summary article of a range of techniques is here.

Continuously Integrate

It is important to frequently integrate all the capabilities. Integration with mocks should be highly frequent: it should be routine in the course of development every day. Integration without mocks—linking the incomplete capabilities together—should be less frequent but still a recurring activity, to assess how the system performs as a whole, even if many tests fail or must be skipped.

However, some tests will pass—hopefully more each time. Tests that used to pass but suddenly fail reveal that the designs of the various components have drifted apart and need to be realigned. Another important benefit is that one can harden the integration and deployment process. Integration and deployment should both be automated, and those processes should be robust and highly repeatable.

Manage Spend Rate, Not Cost

It is impossible to accurately price something that has never been built before. It is therefore foolhardy to even ask someone for a fixed price on a new digital initiative. They might give you a price, but right from the outset there is built-in risk that the promised price cannot be met. It also creates an adversarial relationship between the purchaser and the supplier.

It is far better to face the reality that a price for a digital construct cannot be determined. Instead, ask for a non-binding educated guess, and enter into an agreement on spend rate and capability demonstration milestones. If milestones are not achieved, then spending can be stopped.

Also insist that all intellectual artifacts be shared throughout development, so that if spending is stopped, the assets can be transferred to another supplier. Throughout, take steps to make sure that the intellectual assets are usable and understandable.

This is a “Lean startup” approach. Each development milestone is an “experiment”. If a milestone is reached, that experiment succeeded, and investment continues. Otherwise it does not.
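The milestone-as-experiment gate can be sketched as a simple rule; the data model here is an assumption for illustration:

```python
# A sketch of milestone-gated spending: each milestone is an "experiment",
# and spending continues only while past-due milestones have been demonstrated.
# The Milestone data model is an illustrative assumption.

from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    capability: str
    due: date
    demonstrated: bool

def continue_funding(milestones: list[Milestone], today: date) -> bool:
    """Keep spending only while every past-due capability milestone
    has actually been demonstrated."""
    return all(m.demonstrated for m in milestones if m.due <= today)
```

Future milestones do not block funding; only a missed demonstration does, which is what turns each milestone into a pass/fail experiment.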

Don’t Equate Money Spent to “Value”

Some techniques tabulate money spent on a capability and report that as “value earned”. That is illogical: money spent is not value—it is cost. One might as well equate one’s home mortgage payment with value, when in fact value accrues only through the principal payoff and market appreciation. Value and spend are two completely different things.

If a task is completed, there is no reason to treat that as value. Product development value accrues when useful capabilities have been demonstrated. Technically, the value of a capability equates to the increase in the expected future market value of the total system capability.* More tangibly, the value is how important the capability is for the market potential of the product, and shortening the time to create the capability represents an opportunity value. All that is theoretical, however, and is not useful except when the spend levels are in the many millions of dollars or euros.

* See Value-Driven IT: Achieving Agility and Assurance Without Compromising Either, by Cliff Berg, 2008, p 253. On Amazon here.

 

Step 5 <—

—> Step 7