UNDERSTAND THE PROBLEM DOMAIN IN SUFFICIENT DETAIL.
As far as inception goals go, we have so far just set the scene: we’ve understood what the opportunity is, why it exists, and where the organisation sees itself in the future.
Now we get into the nitty-gritty of our problem. We analyse the problem domain in enough detail to be able to decide: whether we can deliver a desirable, viable and feasible solution; what such a solution could look like; and how we could deliver it.
Note that the individual activities in this step (and their sequencing) very much depend on whether we are building a solution from scratch, or evolving or fixing a solution within an existing domain. In the former case, we build from a blank slate; in the latter, we have to do as-is analysis and build on top of that.
The activities will also be influenced by the ‘type’ of initiative. A product build will need technical analysis in the form of technical architecture, while a change initiative may require no technical analysis at all, or only in the form of process modelling.
While we believe in an agile approach, we also recognise that we’re most likely to succeed in delivering valuable outcomes when building on solid foundations.
Many inexperienced or misguided teams jump to solutions too early; these solutions then rest on risky or random assumptions and are undermined by unknowns and unmanaged risk. While agile ways of working allow us to manage some level of uncertainty, it would be negligent to do no analysis or preparation at all.
We want a sufficient understanding of the domain so we can outline a solution for the opportunity at hand – and subsequently assess whether it is truly desirable, viable and feasible. We adopt lean principles, which in practice means doing the necessary minimum to learn or achieve specific outcomes: favouring breadth over depth and focusing on areas of risk and complexity.
What does the organisation do and how do they do it?
We investigate what the business does, how it delivers products, services and value, and how it is affected by external factors.
Businesses are often complex organisms, embedded in an even more complex environment. Following systems thinking and domain-driven design, we focus on the smallest relevant subdomain (which may be an entire organisation, a business unit, a department or an individual team). While we want to keep the boundaries of our domain as small as possible to maintain focus, the domain we really need to examine to truly deliver value (accounting for all dependencies and risks, designing operable solutions, etc.) is often a bit bigger than clients initially believe. Sometimes the opposite can also hold true: areas highlighted as needing a detailed understanding are not always relevant.
Generally speaking, we look at the business model and the top-level value and supply chain, then start identifying stakeholders. From this we can understand how the organisation is embedded in the wider context that affects it.
Who is my target audience and what do they find desirable?
We identify users, what they desire and expect from a solution, and where we can provide value to them. It’s important that we recognise internal and external, primary, secondary and supporting users. We must also remember that even the most technical problem ultimately has a ‘user’. Arguably this is the single most important step – ultimately, every bit of value created by or for a business stems from satisfying the needs of users.
By default our thinking should be informed by market and user research, though subsequent experiments and testing the solution will provide the most robust feedback.
Who is important in the delivery of this initiative?
In addition to identifying system users, we look at the wider picture of stakeholders who affect, are affected by, or are interested in the initiative. This helps us validate that we have identified all users, be they individuals or organisations, that we need to recognise as part of analysis, experiment definition or delivery.
What does this user experience look like?
We model how the target audience will use the solution in the wider context of the customer (or, more generically, ‘user’) lifecycle.
To map the user experience across the relevant parts of the customer lifecycle, we identify the flow of activities that relevant users conduct at the various touch points they have with our domain. We also note their experience (emotional, social, functional) at each stage.
Once completed by adding capabilities (see the next step), the resulting model(s) are possibly the most important tool we use to understand the domain, communicate context and form the basis for solution design.
In the case of a brown-field initiative we will usually start by modelling the existing experience, then identify gaps, opportunities, strengths, weaknesses and issues, and use these to inform our target experience. For a green-field initiative we would model our vision of the target experience.
As before, our thinking should be informed by market and user research, though subsequent experiments and operation of the solution will provide the most robust feedback.
As part of this we start eliciting and engineering wider (business) requirements.
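The experience mapping described above can be sketched as a simple data model: stages of the lifecycle, the activities users conduct at each touchpoint, and the emotional and functional experience at each step. This is purely an illustrative sketch – the class names, attributes and example data are invented for this example and are not part of any standard journey-mapping notation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: all names and attributes here are assumptions,
# not a standard journey-mapping schema.

@dataclass
class Touchpoint:
    activity: str            # what the user does at this point
    emotion: str             # emotional experience (e.g. "frustrated", "delighted")
    functional_need: str     # the functional job to be done

@dataclass
class JourneyStage:
    name: str
    touchpoints: list = field(default_factory=list)

@dataclass
class JourneyMap:
    persona: str             # primary, secondary or supporting user
    stages: list = field(default_factory=list)

# Example: a fragment of an as-is lifecycle for a hypothetical retailer.
onboarding = JourneyStage("Onboarding", [
    Touchpoint("Create account", "impatient", "register quickly"),
    Touchpoint("Verify email", "neutral", "confirm identity"),
])
journey = JourneyMap("New customer", [onboarding])

# A simple query over the model: activities where the experience is negative.
pain_points = [
    tp.activity
    for stage in journey.stages
    for tp in stage.touchpoints
    if tp.emotion in {"frustrated", "impatient", "confused"}
]
```

Even a lightweight model like this makes the map queryable, so pain points and gaps can be surfaced systematically rather than by eyeballing a wall chart.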
What capabilities are needed to support the user experience?
We identify and model which capabilities are required (i.e. features, systems, processes, people, data) to provide the target user experience.
In this activity, we bring together the user-centric view with the business and technology perspectives. We extend the user experience model by mapping the existing and required capabilities that support the various user activities. This includes internal processes, relevant internal users, and the systems and data captured and/or used. This is also the time to look into aspects of operations and support (i.e. how will the business provide the end-to-end experience from an internal perspective?). Accordingly, we often involve service designers in these activities. In a second step, we can then identify gaps, opportunities for improvement and issues that need addressing.
Depending on the size of the domain, we may end up with a number of models which focus on different parts of the domain at different levels of granularity.
In addition, we conduct further analysis on the details of these capabilities and any related requirements the organisation may have. This can include (but is not limited to) enterprise / technology architecture, system interfaces and the technology stack, as well as infrastructure, tooling, etc.
We continue to elicit and engineer requirements as they come to light. Note: at this stage we have not yet examined the solution as our focus is still on the as-is domain and solution-neutral requirements. In practice, we will update these models as part of solution design.
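The gap-identification step described above can be sketched as a simple set comparison: required capabilities per user activity versus what exists in the as-is domain. All capability and activity names below are hypothetical, invented for illustration.

```python
# Illustrative sketch: surfacing capability gaps by comparing required
# vs existing capabilities. All names are hypothetical.

# Capabilities (features, systems, processes, people, data) required
# to support each user activity in the experience model.
required = {
    "Create account": {"identity service", "CRM record", "support process"},
    "Verify email":   {"email gateway", "identity service"},
}

# Capabilities the organisation already has in the as-is domain.
existing = {"CRM record", "email gateway"}

# Gap analysis: which required capabilities are missing, per activity?
gaps = {
    activity: sorted(needed - existing)
    for activity, needed in required.items()
    if needed - existing
}
```

The output maps each activity to the capabilities still to be built or sourced, which feeds directly into the requirements work and, later, solution design.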
What qualities and characteristics must the solution have?
We elicit and agree expectations of the non-functional qualities of the solution.
It is vital that we elicit and agree on the non-functional requirements or qualities of our solution early in the process, as this will affect solution design and delivery. We should consider that these will change over time (e.g. expected throughput), so we should allow our system to evolve too – not only functionally, but also in relation to its non-functional characteristics.
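One way to make non-functional qualities concrete and evolvable is to capture each as a measurable target that can be checked against observations. A minimal sketch follows; the metric names, figures and structure are invented for illustration, not agreed values.

```python
# Illustrative sketch: non-functional qualities captured as measurable
# targets. All metric names and figures here are invented examples.

nfr_targets = {
    "p99_latency_ms":      {"target": 300,  "direction": "max"},  # at most
    "requests_per_second": {"target": 500,  "direction": "min"},  # at least
    "availability_pct":    {"target": 99.9, "direction": "min"},  # at least
}

def meets_target(name: str, measured: float) -> bool:
    """Check a measured value against the agreed target."""
    spec = nfr_targets[name]
    if spec["direction"] == "max":
        return measured <= spec["target"]
    return measured >= spec["target"]
```

Because the targets live in data rather than prose, revising them as expectations change (e.g. expected throughput grows) is a one-line edit, and the checks can later be wired into monitoring or fitness functions.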
Do we believe this will lead to success?
We define what we believe to be valuable ‘experiments’ to run.
Based on our understanding of desirability (what users need), viability (what will contribute to longer term business success) and feasibility (what the business can provide), we define a hypothesis (or potentially several) against which we will build ‘experiments’ to validate our thinking.
Please note that we use the term ‘experiment’ in a very wide sense: an experiment can be something we want to try out, or it can be a feature or broader solution which we actually implement (based on high confidence that it will be valuable) to test and validate during live operation.
At this stage we will have arrived at a list of hypotheses which we prioritise based on associated value (to user and business) and, as far as we can tell at this stage, cost, complexity and risk. We will update these priorities as we come to solution design – when feasibility and viability become more concrete.