10 Small Steps
to Better Requirements

Requirements Column, IEEE Software, Vol. 23, No. 2, pages 19-21, Mar/Apr 2006.

 Ian Alexander


Does 'RE' need to be complicated?

The journey of a thousand miles begins with a single step. This Chinese proverb helps people and projects by focusing attention on the here and now, rather than worrying about the unmanageable future.

Fortunately, projects can take several small, easy steps to improve their requirements to the point where they are good enough. But projects are all different: yours may need steps that would not be right in other situations.

The basic steps listed here – roughly in order – demand nothing more elaborate than a whiteboard or a pen and paper, though requirements management tools and specialised models may help later. To help you put each step into practice, a paper or book chapter that describes it clearly and practically is referenced at the end of the step.

Step 1. Mission & Scope

“If you don’t know where you’re going, you’re not likely to end up there,” said Forrest Gump. Is your Mission clear? Write a short statement of what your project is meant to achieve. Out of that will grow an understanding of the Scope of the project, which is bound to grow as you come to understand more clearly exactly what achieving the mission entails. [1, chapter 4]

A good tool for describing scope in simple outline is a traditional context diagram. It seems to be better for the task than a use case summary diagram, as it indicates interfaces and the information or materials that must come in or out across them, rather than just roles (UML actors). This becomes especially important when specifying physical systems where software is just a (large) component. Later, each interface can be analysed in detail. [2, chapter 3]
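As a minimal sketch, the information a context diagram carries can also be jotted down as a simple listing of external entities and the flows that cross each interface. The system and interface names below are invented for illustration, not taken from the article.

    # Minimal sketch of the information in a context diagram:
    # external entities, and what flows in or out across each interface.
    # All names here are hypothetical examples.

    context = {
        "system": "Ticket Vending System",
        "interfaces": [
            {"external": "Passenger",      "in": ["payment", "destination choice"], "out": ["ticket", "receipt"]},
            {"external": "Bank network",   "in": ["payment authorisation"],         "out": ["payment request"]},
            {"external": "Fares database", "in": ["current fare tables"],           "out": ["fare query"]},
        ],
    }

    for iface in context["interfaces"]:
        print(f'{iface["external"]}: in={iface["in"]}, out={iface["out"]}')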

Step 2. Stakeholders

Requirements come from people, not books. Sure, you can look in old system documents, read change requests, and the like – but you still need to validate with people any candidate requirements you discover from such sources. Missing out a group of stakeholders means missing a whole chunk of functionality or other requirements from your system.

Stakeholders are far more diverse than can be understood by listing the operational roles. Every system has a varied set of beneficiaries. The role of regulator is often critical. The opinions and actions of negative stakeholders can be show-stopping if not taken into account.

At the very least, make a list of who has an interest in the system, and consider what degree of involvement each person or group should have in the project. A template may help to identify some stakeholders you’d otherwise overlook. [3, 4]
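A stakeholder list can be as plain as a little table of roles, interests, and intended involvement. The sketch below is only an illustration; the roles and involvement levels are hypothetical.

    # A minimal stakeholder register: who has an interest, and how involved
    # they should be. Roles and involvement levels are hypothetical examples.

    stakeholders = [
        {"role": "Ticket clerk",      "interest": "daily operation",        "involvement": "workshops"},
        {"role": "Maintenance staff", "interest": "servicing the machines", "involvement": "interviews"},
        {"role": "Safety regulator",  "interest": "compliance",             "involvement": "review and sign-off"},
        {"role": "Fare dodger",       "interest": "negative stakeholder",   "involvement": "considered in misuse cases"},
    ]

    for s in stakeholders:
        print(f'{s["role"]:18} {s["interest"]:28} -> {s["involvement"]}')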

Step 3. Goals

Once you know who your stakeholders are, you can start to identify their goals.

This can begin with a list or, better perhaps, a hierarchy (you could use an outliner such as Microsoft Word’s Outline View, or a mind-mapping tool). That enables you to capture the basic things that people want.

Goals are allowed to be optimistic, even unrealisable: a product to be universally available, a railway to be perfectly safe, a power source to last for ever. In this they are unlike requirements, which must be definitely realisable and verifiable. Goals can be transformed into requirements by breaking them down into measurable functions.

Large initial goals are often vague qualities, meant to be experienced by product users. Such aspirations can usually be broken down into more concrete, more realisable goals. The car is to ‘feel luxurious’. Maybe this can be achieved by ‘leather upholstery’, ‘air conditioning’, ‘soundproofing’, ‘walnut veneer’, ‘hi-fi music’. At once this second tier of goals suggests possible features and even software functions to help to meet the initial goal. [5, chapter 3]
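Using the ‘feel luxurious’ example from the text, such a hierarchy can be sketched as a simple nested structure; the breakdown itself is illustrative only.

    # A goal hierarchy sketched as nested data, using the 'feel luxurious'
    # example from the text. The breakdown is illustrative only.

    goal_tree = {
        "Car feels luxurious": [
            "Leather upholstery",
            "Air conditioning",
            "Soundproofing",
            "Walnut veneer",
            "Hi-fi music",
        ],
    }

    for goal, subgoals in goal_tree.items():
        print(goal)
        for sub in subgoals:
            print("  -", sub)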

Step 4. Goal Conflicts

Goals often conflict: they are allowed to, but requirements must not. So it’s essential to discover and resolve any conflicts, as early as possible.

Some conflicts will be obvious from a plain list of goals; others are easier to see on a goal model (Figure 1). Draw each goal as a bubble; label it with its type (e.g. Commercial, Safety, Functional) if that helps. Draw an arrow marked “+” to show that one goal supports another; draw an arrow marked “–” to show a negative effect. [6]

Sometimes it helps to identify which stakeholder wants each goal; you can draw an association line from the role symbol (e.g. a stickman or mask) to the goal.

Figure 1: Analysis of goal and threat interactions quickly transforms vague targets into concrete functions. Here it is being applied to explore the core goals of a large public-service company that needs to upgrade its existing client-server system. The next stage of analysis discovered 12 more goals, nearly all functional.
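The “+” and “–” links of a goal model can also be recorded as a small table and scanned for conflicts to resolve. The goals below are hypothetical and do not reproduce Figure 1.

    # A goal model as a list of signed contribution links, scanned for
    # negative links that flag potential conflicts to resolve early.
    # The goals are hypothetical and do not reproduce Figure 1.

    links = [
        ("Rich graphical interface",    "Fast response over slow links", "-"),
        ("Single sign-on",              "Strong security",               "+"),
        ("Collect detailed usage data", "Respect user privacy",          "-"),
    ]

    for source, target, sign in links:
        if sign == "-":
            print(f"CONFLICT to resolve: '{source}' works against '{target}'")
        else:
            print(f"'{source}' supports '{target}'")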

Step 5. Scenarios

Scenarios add the element of time, translating goals into connected stories. At a minimum, describe each operational situation as a brief story or a numbered list of steps. Head each scenario with the name of the functional goal that it implements. Write each step as a short statement in the form <role> <performs an action>.

When this is no longer sufficient, switch to writing use cases, at a level of detail that suits your project. For example, Cockburn-style use cases include pre- and post-conditions, making the transition to code far more controllable. [7]
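A scenario in the suggested <role> <performs an action> form is simply an ordered list of (role, action) steps, headed by its goal. The content below is a hypothetical example.

    # A scenario as an ordered list of <role> <action> steps, headed by the
    # functional goal it implements. The content is a hypothetical example.

    scenario = {
        "goal": "Passenger buys a ticket",
        "steps": [
            ("Passenger", "selects a destination"),
            ("System",    "displays the fare"),
            ("Passenger", "inserts payment"),
            ("System",    "prints the ticket and a receipt"),
        ],
    }

    print(scenario["goal"])
    for number, (role, action) in enumerate(scenario["steps"], start=1):
        print(f'{number}. {role} {action}')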

Step 6. Shall-Statements

Scenarios don’t cover everything [8, chapter 24]. Other techniques are needed for interfaces, constraints (like time, cost, size) and qualities (like dependability, security).

Traditional “The system shall…” statements [9] are still often convenient, not least because suppliers and standards expect them. Of course, you may feel that the “requirements” consist of all the things listed here, and more. But not everyone shares that understanding.
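Shall-statements are short, atomic sentences, each with an identifier so that justifications, priorities and tests can later be attached. The statements below are hypothetical examples.

    # Traditional shall-statements, each with an identifier so that
    # justifications, priorities and tests can later be attached.
    # The statements themselves are hypothetical examples.

    requirements = {
        "REQ-001": "The system shall issue a ticket within 10 seconds of payment being accepted.",
        "REQ-002": "The system shall refuse payment cards reported as stolen.",
    }

    for req_id, text in requirements.items():
        print(req_id, "-", text)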

Use cases do more than anything else to bridge the gap between the “now we do this” world of scenarios and the “you shall do that” world of traditional requirements. Neil Maiden has championed the systematic use of use cases to discover missing requirements [8, chapter 9].

Step 7. Justifications

If you only write down what your system has to do, and not why, developers have to guess what is really important. When resources start to run out, or when planners can see that demand will exceed available effort, something has to be cut or at least postponed.

A justification (like a priority, see below) helps to protect vital requirements from being dropped. It also helps developers to understand how each requirement fits in and why it’s needed.

Justifications can be constructed in many ways. Most simply, a few words explaining what a requirement is for can be attached to each requirement as an attribute, or (to avoid duplication) listed separately and linked to the requirements. More elaborate justifications can be constructed as argumentation models [10], goals (see above), risk models, or cost/benefit models.
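In the simplest form, a justification is just another attribute on the requirement record. The sketch below continues the hypothetical requirement identifiers used earlier.

    # Justifications attached to requirements as a 'rationale' attribute,
    # continuing the hypothetical requirement identifiers used above.

    requirements = {
        "REQ-001": {
            "text": "The system shall issue a ticket within 10 seconds of payment being accepted.",
            "rationale": "Queues at peak times must keep moving; see the throughput goal.",
        },
        "REQ-002": {
            "text": "The system shall refuse payment cards reported as stolen.",
            "rationale": "Required by the card-scheme operating rules.",
        },
    }

    for req_id, req in requirements.items():
        print(f'{req_id}: {req["text"]}\n    Why: {req["rationale"]}')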

Step 8. Assumptions

A system specification is a combination of things that you need to be true already, and things that you want to become true. The second group are requirements; the first are assumptions.

Assumptions can include beliefs about existing systems, interfaces, competition, the market, the law, safety hazard mitigations, etc. The most dangerous assumptions are tacit: the ones you didn’t know you were making.

Like justifications, assumptions can be modelled as any kind of argumentation. The simplest approach is again to state each assumption as a piece of text.

It can be valuable when planning a project to examine the assumptions on which it depends, and to consider what to do if any of these should fail, threatening the project’s survival. [11]
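Assumptions can be recorded in the same spirit: a plain statement, plus a note of what to do if it fails. The entries below are hypothetical examples.

    # Assumptions stated as plain text, each with a note of the fallback
    # if it turns out to be false. The entries are hypothetical examples.

    assumptions = [
        {"statement": "The existing fares database will remain available via its current interface.",
         "if_false":  "Budget for a data-migration work package."},
        {"statement": "Payment regulations will not change before launch.",
         "if_false":  "Re-plan the payment subsystem and re-check compliance."},
    ]

    for a in assumptions:
        print("ASSUME:", a["statement"])
        print("  If false:", a["if_false"])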

Step 9. Agreed Priorities

Some requirements are so obviously vital that you should just do them. Some are so obviously unnecessary that they should be dropped at once. The rest need to be prioritised more closely. Division into these three categories is a short, brutal process called triage [12].

Then rank the remaining requirements into categories such as vital, desirable, nice-to-have, and luxury. You can do this by voting, by consensus among a panel, by allocating notional money, and so on.
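One simple way to rank the middle group is to let each stakeholder spread a fixed budget of notional money over the requirements and sort by the totals. The requirement identifiers and amounts below are hypothetical.

    # Ranking requirements by allocating notional money: each stakeholder
    # spreads a fixed budget over the requirements, and the totals give
    # the ranking. Requirement IDs and amounts are hypothetical.

    allocations = {
        "Ticket clerk":    {"REQ-003": 60, "REQ-004": 30, "REQ-005": 10},
        "Maintenance":     {"REQ-003": 20, "REQ-004": 50, "REQ-005": 30},
        "Product manager": {"REQ-003": 40, "REQ-004": 10, "REQ-005": 50},
    }

    totals = {}
    for votes in allocations.values():
        for req_id, amount in votes.items():
            totals[req_id] = totals.get(req_id, 0) + amount

    for req_id, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        print(f'{req_id}: {total}')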

Step 10. Acceptance Criteria

Finally, requirements don’t finish when you’ve completed the spec. Rather, their work is just beginning: to guide development all the way through to acceptance into service.

Identify how you’ll know when the system meets each requirement. This isn’t a full description of each test case, but the criteria for deciding whether the system passes or fails against the requirement [2, chapter 9].
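An acceptance criterion can be kept right next to the requirement it verifies. The sketch below continues the hypothetical requirement identifiers used earlier.

    # Each requirement paired with the criterion for deciding pass or fail
    # at acceptance. Continues the hypothetical requirement identifiers.

    acceptance_criteria = {
        "REQ-001": "Across 50 trial purchases at peak load, every ticket is "
                   "issued within 10 seconds of payment acceptance.",
        "REQ-002": "All cards on the test 'stolen' list are refused; no valid "
                   "test card is refused.",
    }

    for req_id, criterion in acceptance_criteria.items():
        print(req_id, "->", criterion)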

Later, plan the test campaign based on the scenarios (amongst other things), and trace the test cases to the requirements to demonstrate coverage. But that’s out of our scope here. 

The Result

I can’t guarantee that after taking these 10 steps, your project’s requirements will be perfect. But if you take the steps carefully, I can say with confidence that your requirements will be a whole lot better.

References

1. Dean Leffingwell and Don Widrig, Managing Software Requirements, Addison-Wesley, 2000.

2. Suzanne and James Robertson, Mastering the Requirements Process, Addison-Wesley, 1999.

3. Ian Alexander, A Taxonomy of Stakeholders, International Journal of Technology and Human Interaction, Vol. 1, No. 1, 2005, pages 23-59.

4. Ian Alexander and Suzanne Robertson, Stakeholders Without Tears, IEEE Software, Vol. 21, No. 1, Jan/Feb 2004, pages 23-27.

5. Alistair Sutcliffe, User-Centred Requirements Engineering, Springer, 2002.

6. Ian Alexander, Misuse Cases: Use Cases with Hostile Intent, IEEE Software, Vol. 20, No. 1, Jan/Feb 2003, pages 58-66.

7. Alistair Cockburn, Writing Effective Use Cases, Addison-Wesley, 2001.

8. Ian Alexander and Neil Maiden, Scenarios, Stories, Use Cases, John Wiley, 2004.

9. Ian Alexander and Richard Stevens, Writing Better Requirements, Addison-Wesley, 2002.

10. Paul Kirschner, Simon Buckingham Shum, and Chad Carr, Visualizing Argumentation, Springer, 2003.

11. James Dewar, Assumption-Based Planning, Cambridge University Press, 2002.

12. Al Davis, The Art of Requirements Triage, IEEE Computer, Vol. 36, No. 3, March 2003, pages 42-49.

 
