Discovering Requirements by Trade-offs

A version of this article was published in IET Engineering & Technology, 10 October 2009, pp. 54-55, under the title "From Wish List to Want List".

It was said in the 20th Century that a lot of things had been invented in the past hundred years, and most of them plugged into a wall.

Even more things have been invented in the past few decades, and many of them only plug in to the wall very occasionally. But almost all of them contain software.

Business people expect to be able to consult secure databases from anywhere in the world, while they are on the move. Even on holiday, people expect to be able to phone, email, text, navigate, and listen to their home radio station wherever they are. It really is an information revolution. Software forms a large and growing percentage of every product.

Projects are becoming riskier, too. Few projects have an empty “green field” to work in. Usually, they are tightly constrained by the need for compatibility with the legacy systems that they are replacing. Even apparently stand-alone embedded software products like MP3 players and GPS receivers are constrained by an increasing range of interfaces, such as USB connectors, Bluetooth and WiFi, and the MP3 and GPS standards themselves.

Despite all these constraints, projects have to succeed, and quickly. Roll-out problems are likely to cause major expense and embarrassment. And a sure-fire way of getting roll-out and “integration” problems is to go right ahead and develop without an adequate grasp of the real requirements.

As requirements become more complex, conflicts grow more likely: for instance, mobility and security often pull in opposing directions. Trade-offs between such conflicting factors often govern system design.

The systematic discovery of requirements [1] is increasingly recognised as a key element in project management. IT, familiar with specifying products and services, can help to guide the rest of the organisation towards good requirements discovery practice.

Trade-offs

A curious software development dogma asserts that requirements must be complete before design begins, just as design must be complete before testing begins. But there is little basis for these beliefs: useful testing can begin almost from day one, with requirements validated against paper prototypes.

The trade-off approach to requirements discovery similarly turns dogma on its head: you don’t know what your requirements really are until you have matched what people want against what you can possibly provide.

Stakeholder Goals

The trade-off cycle begins with people, the stakeholders in your project. They each state their goals: for specific functions; for operability; for performance; for safety, security, reliability; for compliance with standards; for low cost, low weight (if it’s a portable device), low power consumption, and so on.

Candidate Solutions

The designers on your project can then get to work, identifying candidate solutions – software architectures; specific algorithms; alternative platforms; off-the-shelf components.

Very likely, more than one approach is imaginable. If so, the approaches will differ in how well they meet the different goals.

For example, a handheld device can do most of its processing locally, or access a remote service. If you go local, you have limited processing and battery power, but results can be delivered quickly and reliably – until the battery is exhausted – with no communications overhead. If you go remote, you can have all the server horsepower you want, but you may have to wait for results to be delivered over a slow network: and if the connection fails, you get no service at all.

These considerations are important trade-offs. The right choice depends strongly on the context. If the purpose is safety-related, for instance, reliability is extremely important. On the other hand, if a service is for entertainment, then performance may greatly outweigh reliability.

Evaluation

So, once your project has identified its stakeholder goals and candidate solutions, you have to evaluate the trade-offs. Mathematically, goals like performance, operability, reliability and so on represent different dimensions. You can’t add operations per second to mean time between failures, for instance: they are independent measures of quality. If you have just two such factors to consider, you can plot each candidate solution on an X-Y chart: perhaps the ideal solution lies at the top right of the chart, with say both high reliability and high performance.
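
For instance, the two-factor case might be sketched as a quick scatter plot like the one below; the reliability and performance figures are invented purely for illustration.

```python
# Hypothetical two-factor evaluation: each candidate is one point on an X-Y chart.
import matplotlib.pyplot as plt

candidates = {
    "Option A": (0.999, 120),   # (reliability, operations per second)
    "Option B": (0.950, 400),
    "Option C": (0.990, 250),
}

for name, (reliability, performance) in candidates.items():
    plt.scatter(reliability, performance)
    plt.annotate(name, (reliability, performance))

plt.xlabel("Reliability")
plt.ylabel("Performance (operations per second)")
plt.title("Candidate solutions on two quality dimensions")
plt.show()
```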

But what if you have 10 or more dimensions? There are already 10 listed above, and those could be broken down much further. The multi-dimensional space is hard to visualise.

There are many possible approaches. Some of these have become popular in certain industries.

A simple first step is just to make a table or matrix of your candidate solutions against your project’s goals, used as evaluation criteria. Each cell contains a score (maybe a number, maybe just Yes or No, maybe a value between √√√ and xxx) showing how well a given solution meets a given goal.
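
As a rough illustration, such a matrix can be sketched in a few lines; the option names and scores below are invented for the purpose.

```python
# Hypothetical evaluation matrix: each candidate scored 1 (worst) to 7 (best)
# against each goal; Yes/No or ticks-and-crosses schemes work just as well.
criteria = ["Power consumption", "Performance", "Operability", "Safety"]
scores = {
    "Option A": [5, 7, 7, 1],
    "Option B": [4, 5, 6, 6],
    "Option C": [6, 3, 4, 7],
}

# Print the matrix: one row per goal, one column per candidate solution.
print(f"{'Goal':<20}" + "".join(f"{name:>10}" for name in scores))
for i, goal in enumerate(criteria):
    print(f"{goal:<20}" + "".join(f"{s[i]:>10}" for s in scores.values()))
```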

The many demands on a product's hardware and software must be traded off to find a workable solution.

If scoring is tricky, you can ask each of several independent experts to evaluate each solution in their area – your security specialist evaluates each solution’s security, and so on. If you are worried about subjectivity, you can average scores provided by several security specialists, but obviously this increases your costs.

Now you have a more or less objective, more or less detailed evaluation of all the options on all the goals. If you are lucky, an obvious winner will emerge – if Option A scores √√√ on nearly everything, while no other option scores more than √, then no clever analysis is needed.

But usually, you need to do more to reveal the patterns in the evaluation.

A simple trick is to plot a histogram for each candidate solution against all the goals. You get a set of little graphs; if you display these in a column, one above the other, so that the bars for factor 1 are aligned on the left, those for factor 2 next, and so on, you may be able to pick out the winner visually.
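
A rough sketch of the idea, again with invented scores: one small bar chart per candidate, stacked so the bar patterns can be compared by eye.

```python
# One small histogram per candidate, stacked vertically with the goals in the
# same order, so differences in the bar pattern stand out visually.
import matplotlib.pyplot as plt

goals = ["Power", "Performance", "Operability", "Safety"]
candidates = {
    "Option A": [5, 7, 7, 1],
    "Option B": [4, 5, 6, 6],
    "Option C": [6, 3, 4, 7],
}

fig, axes = plt.subplots(len(candidates), 1, sharex=True, figsize=(5, 6))
for ax, (name, scores) in zip(axes, candidates.items()):
    ax.bar(goals, scores)
    ax.set_ylabel(name)
    ax.set_ylim(0, 7)

fig.tight_layout()
plt.show()
```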

If that’s not enough, you need a means of reducing the number of dimensions to a manageably small number (no more than 2 or 3) so you can identify the winner more easily.

Can Weighting be Justified?

At this point, many projects try adding up the scores on the different criteria. For example, suppose your project awards scores for each candidate solution from 1 (worst) to 7 (best) as follows:

 

Goals (Criteria)        Option A    Option B    Option C
Power consumption          5
Performance                7
Operability                7
Safety                     1
Average                    5.0

That gives Option A a good-looking average of 5.0, despite a worst-possible showing on Safety.

Clearly this approach is unsatisfactory, so projects try to remedy the situation by assigning weights to each goal. Suppose Safety is our most important goal, so it gets a weight of 0.7; all the others get 0.1. That pushes the score for Option A down to a less impressive 2.6 (out of 7).
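
The arithmetic behind those two figures, using the scores and weights quoted above, is simply:

```python
# Option A's scores (1 = worst, 7 = best) and the weights from the text.
scores  = {"Power consumption": 5, "Performance": 7, "Operability": 7, "Safety": 1}
weights = {"Power consumption": 0.1, "Performance": 0.1, "Operability": 0.1, "Safety": 0.7}

naive_average  = sum(scores.values()) / len(scores)            # (5 + 7 + 7 + 1) / 4 = 5.0
weighted_score = sum(scores[g] * weights[g] for g in scores)   # 0.5 + 0.7 + 0.7 + 0.7 = 2.6

print(naive_average, round(weighted_score, 2))                 # 5.0 2.6
```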

But if a score of 1 on Safety means xxx (“Very Unsatisfactory”), then it is not right to average this with other scores, weighted or not. You could try to patch up the approach with the rule “reject any option that scores 1 on Safety”, but it is hard to avoid the conclusion that weighting is unjustifiable.

 

What's that Word?

Requirements Discovery: the work of finding out what has to be done by a product, service, or system.

Stakeholder: person with a valid interest in a project. Important stakeholder roles include product operator, beneficiary, regulator, interface owner, marketing/product manager.

Goal: something (not necessarily attainable) wanted by a stakeholder.

Candidate Solution: one of several alternative design approaches that may be able to satisfy the project’s goals to some extent.

Trade-off: the process of selecting a candidate solution for a set of goals, given that none of the candidates satisfy all the goals perfectly.

Requirement: a measurable and attainable goal, as demonstrated by trade-off analysis.

Naïve weighting is open to a range of abuses, including the temptation to adjust the weights until the “right” answer is obtained. This distorts the process. In the worst case, it is equivalent to ignoring the stakeholders’ goals. The success of the project then depends not on the supposedly mathematical trade-off, but on whether people’s gut instincts are correct. The likely outcome is a product that fails to perform as the stakeholders wanted, and may be entirely unfit for purpose. Trade-offs matter, and must be done properly.

AHP – Sophisticated Weighting

One approach that is certainly more defensible than naïve weighting is AHP, the Analytic Hierarchy Process [2]. This essentially asks the user to compare each factor (criterion) with each other factor, pairwise, indicating which is more important. AHP then calculates weights, reducing the risk of subjective bias.

The scores are combined into a single league-table-style result. A strength of AHP is that it can detect inconsistencies: if you say Safety is much more important than Performance, and Performance is more important than Cost, but Cost is as important as Safety, then AHP points out the problem, and indeed can measure the size of the disparity. But it remains a weighting approach: apples are still being added to oranges, however subtly and indirectly.
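
A minimal sketch of the calculation at AHP's core, assuming a small, invented pairwise comparison matrix on the usual 1-9 scale, might look like this:

```python
# Entry A[i][j] records how much more important criterion i is judged to be
# than criterion j (so A[j][i] is the reciprocal).
import numpy as np

criteria = ["Safety", "Performance", "Cost"]
A = np.array([
    [1.0, 5.0, 7.0],     # Safety judged much more important than the others
    [1/5, 1.0, 3.0],
    [1/7, 1/3, 1.0],
])

# The weights are the normalised principal eigenvector of the comparison matrix.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
weights = principal / principal.sum()

# The consistency index grows as the pairwise judgements become less transitive.
n = len(criteria)
consistency_index = (eigenvalues.real.max() - n) / (n - 1)

print(dict(zip(criteria, weights.round(3))), "CI =", round(consistency_index, 3))
```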

PCA and Human Decision-Making

A completely different approach is PCA, Principal Component Analysis [3]. This works by finding the line (an eigenvector) in the multi-dimensional space that explains as much of the difference between the candidate solutions as possible. This line is a new dimension, the first component of a flattened-down space. The next component is the line that explains as much as possible of the rest of the difference between the candidates: it is by definition at right angles to the first component. If PCA goes well, the first 2 or 3 components explain perhaps 75% of the total variance.

You can then safely choose between candidate solutions on the basis of these 2 or 3 readily-graphed new dimensions (see [1] for a fully-worked example). PCA doesn’t work every time: there may simply not be much difference between the candidates. But when PCA works, it gives a simple, clear picture of the differences. Reassuringly, the components are mathematically guaranteed to spread out the candidates as widely as possible. It is then up to you to decide, given the differences, which candidate is best. In other words, with PCA it remains a human responsibility to interpret the data. This is as it should be.
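
A rough sketch of the mechanics, with an invented score matrix and scikit-learn's PCA:

```python
# Rows are candidate solutions, columns are goal scores (1 = worst, 7 = best).
import numpy as np
from sklearn.decomposition import PCA

candidates = ["Option A", "Option B", "Option C", "Option D"]
scores = np.array([
    [5, 7, 7, 1, 4, 6],
    [4, 5, 6, 6, 5, 5],
    [6, 3, 4, 7, 6, 3],
    [3, 6, 5, 5, 7, 4],
])

pca = PCA(n_components=2)
points = pca.fit_transform(scores)          # each candidate becomes an (x, y) point
explained = pca.explained_variance_ratio_.sum()

for name, (x, y) in zip(candidates, points):
    print(f"{name}: ({x:.2f}, {y:.2f})")
print(f"First two components explain {explained:.0%} of the variance")
```

Plotting the resulting points and judging which position is best remains the human decision described above.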

Goals into Requirements

Trade-offs both identify the best solution approach, and determine which project goals are feasible requirements (workable, affordable, measurable, verifiable, at acceptable risk) given the chosen solution approach. Some goals may be weakened or sacrificed entirely. Trading-off is a crucial step in requirements discovery.

This process does not depend on exhaustive documentation “up front” – in advance of design. Rather, it consists of a dialogue between stakeholders’ goals and possible designs, ending with a well-considered choice of the best design approach.

Effective IT leaders invariably already put into practice many aspects of requirements discovery and trade-offs, whether they use those names or not. In truth, relating problems to solutions is a vital element of software and systems engineering. Trade-offs are here to stay.

References

[1] Discovering Requirements: How to Specify Products and Services, Ian Alexander & Ljerka Beus-Dukic, Wiley, 2009

[2] http://en.wikipedia.org/wiki/Analytic_Hierarchy_Process

[3] http://en.wikipedia.org/wiki/Principal_component_analysis

 

© Ian Alexander 2009
