A Historical Perspective on Requirements

Ian F Alexander

from Requirenautics Quarterly,
the Newsletter of the BCS RESG, October 1997

If I were writing an article on this topic today (in 2011), it would certainly come out differently.
Formal Methods still remain mostly academic; large projects are still mostly Waterfall;
Object-oriented and Scenario approaches have merged into the UML / RUP status quo;
older approaches (those before 2008) are largely forgotten;
there would certainly now be sections on Agile / User Stories,
and what should have been the central focus all along, Goals and Goal/Architecture Trade-offs.

Summary

This article takes a look at the background of requirements engineering, using examples from the computer science literature. Extensive quotations from named original sources are included to illustrate how thinking has developed. The article is structured as follows:

Introduction
The Architecture of Complexity (1962)
The Software Crisis (1972)
The Structured Response (1980)
The Waterfall Model (1984)
The Formal Methods Argument
The Object-Oriented Challenge
The Ethnological Approach
The Scenario Approach
The Management of Change
Postscript

Introduction

The chosen examples, while diverse in style and content as well as in epoch, together give an impression of a very particular kind of progress: a movement back through the systems development life-cycle towards the user. The dates of the stages along the braided stream of this movement are to a large extent arbitrary. I have included at least one extract to support each date, though both earlier and later sources could be found.

Early systems, especially software (where most of the complexity now lies), were focused on the machine as a scarce and expensive resource.

Attention gradually moved, in software terms, from code to design, and then on to specification. This was understood initially as the precise description of components-to-be-built; gradually this understanding too broadened to encompass entire systems.

Finally, with input from the human-centred sciences (psychology, sociology, ethnology...) specification has come to include a definition of the problem to be solved, as seen by the human users of any putative system. Even the term "users" contains within it a trace of the old assumption that the system, the virtual machine, is the centre of attention: in Julian Hilton's ringing phrase, the user is a computer peripheral.

With the current emphasis on dialogue and human-computer (or human-machine) interaction, the movement can be seen to be continuing towards a proper balance between people and systems.
 
Several major trends in technology have driven this progress in understanding the place of requirements in development. Any one of these trends could have had a powerful impact on development methodology. Together they have forced system and requirements engineering to transform themselves into full engineering disciplines.

The Architecture of Complexity (1962)

It is an odd paradox that the principles underlying the management of complex systems were clearly stated before the software crisis, the chaos that results when complexity runs riot during development.  Herbert A. Simon wrote his paper The Architecture of Complexity: Hierarchic Systems (Proceedings of the American Philosophical Society, 106, Dec 1962, 467-482) while computers were still scarce, and programs for them short by today's standards. Simon has spoken out clearly on the logical structure of systems for much of the 20th century:

Symbolic Systems

One very important class of systems has been omitted from my examples thus far: systems of human symbolic production. A book is a hierarchy in the sense in which I am using that term. It is generally divided into chapters, the chapters into sections, the sections into paragraphs, the paragraphs into sentences, the sentences into clauses and phrases, the clauses and phrases into words. We may take the words as our elementary units, or further subdivide them, as the linguist often does, into smaller units. If the book is narrative in character, it may divide into "episodes" instead of sections, but divisions there will be.

The hierarchic structure of music, based on such units as movements, parts, themes, phrases, is well known. The hierarchic structure of products of the pictorial arts is more difficult to characterize, but I shall have something to say about it later.
Plainly, Simon could have added Software to the list of classes of complex systems produced symbolically by humans, as the next section of his paper indicates.
The Evolution of Complex Systems

Let me introduce the topic of evolution with a parable. There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently - new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?

The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down - to answer the phone, say - it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him and the more difficult it became for him to find enough uninterrupted time to finish a watch.

The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus.

It is rather easy to make a quantitative analysis of the relative difficulty of the tasks of Tempus and Hora: suppose the probability that an interruption will occur, while a part is being added to an incomplete assembly, is p. Then the probability that Tempus can complete a watch he has started without interruption is (1 - p)^1000 - a very small number unless p is 0.001 or less. Each interruption will cost on the average the time to assemble 1/p parts (the expected number assembled before interruption). On the other hand, Hora has to complete 111 subassemblies of ten parts each. The probability that he will not be interrupted while completing any one of these is (1 - p)^10, and each interruption will cost only about the time required to assemble five parts.

Now if p is about 0.01 - that is, there is one chance in a hundred that either watchmaker will be interrupted while adding any one part to an assembly - then a straightforward calculation shows that it will take Tempus on the average about four thousand times as long to assemble a watch as Hora.
Hierarchical structuring controls the complexity of programs, of designs, and of specifications. It is sad that the software industry had to go through so much pain to rediscover what was already known. 
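Simon's figure is easy to sanity-check. A minimal sketch in Python (my illustration, not Simon's), using the standard result that completing a run of n uninterrupted steps, each spoiled with probability p, costs on average ((1 - p)^-n - 1)/p steps:

    # Expected number of assembly steps for a unit of n parts, where each step
    # is interrupted with probability p and an interruption loses the unit:
    # the standard run-of-n-successes expectation ((1 - p)**-n - 1) / p.
    def expected_steps(n, p):
        return ((1 - p) ** -n - 1) / p

    p = 0.01
    tempus = expected_steps(1000, p)        # one monolithic 1000-part watch
    hora = 111 * expected_steps(10, p)      # 111 stable ten-part subassemblies
    print(tempus / hora)                    # close to 2000 under this model,
                                            # the order of Simon's "four thousand"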

The Software Crisis (1972)

The high price of computers created a virtual priesthood of technically-minded specialists who knew how to build and maintain large systems. Enid Mumford's book Systems Design: ethical tools for ethical change (Macmillan, 1996) is a study of and a reaction against the technocentric thinking that characterised the first generation (perhaps 1955 to 1985, for the sake of argument) of software developers. She quotes Harold Sackman (Mass Information Utilities and Social Excellence, Auerbach, 1971) as explaining "the ideology of technical systems analysts of the time":
Early computers were virtually one of a kind, very expensive to build and operate. Computer time was far more expensive than human time. Under these constraints, it was essential that computer efficiency came first, with people last.... Technical matters turned computer professionals on; human matters turned them off. Users were troublesome petitioners somewhere at the end of the line who had to be satisfied with what they got.
With this unhappy phrase, Sackman surely captured the unspoken thought of many technicians. Here is part of Mumford's account of the development process in that period:
Because systems design has been defined as a technical activity, traditional methods have been tools and procedures for designing technical systems. The systems analyst has not seen his role as helping with the management of complex change. He has restricted this to providing technical solutions.

Because of this narrow view the practice of systems design has generally been broken down into a number of sequential operations. Typically, these have included analysis - gaining an understanding of the problem that has to be addressed and describing the activities, data and information flow associated with it. This led to a requirements definition. Next came a specification or description of the functions to be performed by the system to process the required data.

Design followed, and this covered the development of the internal structure of the software which would provide the functions that had been specified. Implementation was the development of the computer code that would enable the system to produce data. Validation checked that each stage was successfully accomplished and 'evolution' was the correction of errors or modification of the system to meet new needs.

Until recently, the human user of the system figured very little in this technical approach. Consideration was not given to issues which are regarded as of prime importance today - business goals, needs and structures, competing demands for information, multiple interest groups and dynamic and complex business environments.
Clearly, if users were such a nuisance, it is hardly surprising that their needs were not often met. In case you are minded to believe that negative attitudes to users are a thing of the remote past, reflect on this short extract from  Steve Maguire's Debugging the Development Process (Microsoft Press, 1994):
When Microsoft first began conducting usability studies in the late 1980s to figure out how to make their products easier to use, their researchers found that 6 to 8 out of 10 users couldn't understand the user interface and get to most of the features. When they heard about those findings, the first question some programmers asked was "Where did we find eight dumb users?" They didn't consider the possibility that it might be the user interface that was dumb. If the programmers on your team consciously or unconsciously believe that the users are unintelligent, you had better correct that attitude - and fast.
Perhaps the classical text of the crisis era is Structured Programming (Academic Press, 1972), edited by Dahl, Dijkstra, and Hoare. The title is worth reflecting on: the best minds of the day were focused on the problem of making programs, i.e. on the actual implementation; the idea that the real problem lay well before that phase took another decade to arrive. The book's first section, the modestly-entitled Notes on Structured Programming, is by E. W. Dijkstra, who introduces it as follows:
ON OUR INABILITY TO DO MUCH

I am faced with a basic problem of presentation. What I am really concerned about is the composition of large programs, the text of which may be, say, of the same size as the whole text of this chapter. Also I have to include examples to illustrate the various techniques. For practical reasons, the demonstration programs must be small, many times smaller than the "life-size programs" I have in mind. My basic problem is that precisely this difference in scale is one of the major sources of our difficulties in programming!

It would be very nice if I could illustrate the various techniques with small demonstration programs and could conclude with "...and when faced with a program a thousand times as large, you compose it in the same way." This common educational device, however, would be self-defeating as one of my central themes will be that any two things that differ in some respect by a factor of already a hundred or more, are utterly incomparable. History has shown that this truth is very hard to believe. Apparently we are too much trained to disregard differences in scale, to treat them as "gradual differences that are not essential". We tell ourselves that what we can do once, we can also do twice and by induction we fool ourselves into believing that we can do it as many times as needed, but this is just not true! A factor of a thousand is already far beyond our powers of imagination!

Let me give you [an] example to rub this in. A one-year old child will crawl on all fours with a speed of, say, one mile per hour. But a speed of a thousand miles per hour is that of a supersonic jet. Considered as objects with moving ability the child and the jet are incomparable, for whatever one can do the other cannot and vice versa.

To complicate matters still further, problems of size do not only cause me problems of presentation, but they lie at the heart of the subject: widespread underestimation of the specific difficulties of size seems one of the major underlying causes of the current software failure. To all this I can see only one answer, viz. to treat problems of size as explicitly as possible...

To start with, we have the "size" of the computation, i.e. the amount of information and the number of operations involved in it. It is essential that this size is large, for if it were really small, it would be easier not to use the computer at all and to do it by hand. The automatic computer owes its right to exist, its usefulness, precisely to its ability to perform large computations where we humans cannot. We want the computer to do what we could never do ourselves and the power of present-day machinery is such that even small computations are by their very size already far beyond the powers of our unaided imagination.
Complexity was for Dijkstra the major issue. There is no indication that he had heard of Herbert Simon's work; nor that he thought questions of problem definition important.

We cannot end this section without at least a small extract from perhaps the most famous description of the software crisis, Frederick P. Brooks' The Mythical Man-Month (Addison-Wesley, 1975). The "tar-pit" is a masterpiece of rueful expression of experience, and it illustrates the book's cover.

 
[Illustration: C. R. Knight, mural of the La Brea tar pits, Los Angeles County Natural History Museum]
No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits. In the mind's eye one sees dinosaurs, mammoths, and sabertoothed tigers struggling against the grip of the tar. The fiercer the struggle, the more entangling the tar, and no beast is so strong or so skillful but that he ultimately sinks.

Large-scale programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it. Most have emerged with running systems - few have met goals, schedules, and budgets. No one thing seems to cause the difficulty - any particular paw can be pulled away. But the accumulation of simultaneous and interacting factors brings slower and slower motion...
It took at least another twenty years before the most important factor was seen to be accuracy of specification of requirements. But Brooks was also far ahead of his time in his understanding of the importance of consulting users when writing the "specifications" for a system:
The manual, or written specification, is a necessary tool, though not a sufficient one. The manual is the external specification of a product. It describes and prescribes every detail of what the user sees. As such, it is the chief product of the architect.

Round and round goes its preparation cycle, as feedback from users and implementers shows where the design is awkward to use or build... The manual must... describe everything the user does see, including all interfaces... but must not attempt to dictate the implementation.
This is fascinating both for its clarity and for its confusion. The principles of iterative development, of system life-cycle, of interface definition, of modularity, of review, and of independence of specification from design, are all clearly enunciated in these few lines. At the same time, there is no distinction drawn between user manual and system specification (though Apple Corporation have long made a virtue of writing the user manual in advance, and using it as the specification); nor, more seriously, is there any separation between user and system requirements. 

The Structured Response (1980)

The first practical response to the so-called Software Crisis was the introduction of structured methods, first to design and later to analysis (of requirements). Meilir Page-Jones' Practical Guide to Structured Systems Design (Yourdon Press, 1980) was one of the key books of the movement. Until then, designers had largely ignored the problem of defining software requirements:
The designer does not talk directly to the user; the analyst is the go-between linking the designer and the user. On one side, he has to converse with the user to elicit exactly what system the user needs. On the other side, he has to communicate the user's requirements to the designer so that the designer can plan a computer implementation of the user's system. To do this, the traditional analyst writes a document called a functional specification.

Many systems analysts were once designers or programmers, and are more familiar with the world of EDP and with its jargon than with the user's world and his business jargon. The result of this has all too often been doubly disastrous: First, the user is bewildered by a specification buzzing with EDP terms - such as disk drives, job steps, and HIDAM - whose meaning he can only guess at. Yet the user is expected to peruse, correct, and finally accept this specification as being a true representation of his needs.

Second, the designer, whose job it is to decide the best implementation for the user's business system, is pre-empted by the analyst in his decisions. The analyst may frustrate the designer by imposing premature and often arbitrary constraints upon him. Indeed, in many a shop where I have worked or consulted, the user community and the design/programming community are two peoples separated by a common document - the functional specification.

Structured Analysis is a response to the fact that a specification that improperly records the user's needs, more than any other single factor, is likely to jeopardize a project's success.
In this clear and lively style, Page-Jones persuaded a generation of software designers to produce decent dataflow diagrams and data dictionaries before building their systems. The problems didn't all go away, though. 

The Waterfall Model (1984)

The structured approach clearly implied a development life-cycle that ran analysis ... design ... build ... test ... operate. The most basic interpretation of this cycle is that each activity or phase happens exactly once, starting when the previous one has finished. The process was visualised as a waterfall splashing down from one rock (analysis) to the next (design) until the product appeared at the bottom.

One of the clearest statements of the model remains the  European Space Agency's software engineering standards, which were first issued in 1984 as BSSC(84)1, but are now better known as PSS-05 (Issue 2, 1991). Behind the dry and repetitive style of a standards document, the reader can perceive the real concern to improve the structure and quality of often mission-critical software:
UR phase: user requirements definition

The UR phase is the 'problem definition phase' of a software project. The scope of the system must be defined. The user requirements must be captured. This may be done by interview or survey, or by building prototypes. Specific user requirements must be identified and documented in the User Requirements Document (URD).

The involvement of the developers in this phase varies according to the familiarity of the users with software. Some users can produce a high quality URD, while others may need help from the developers.

The URD must always be produced. The review of the URD is done by the users, the software and hardware engineers and the managers concerned. The approved URD is the input to the SR phase.

SR phase: software requirements definition

The SR phase is the 'analysis' phase of a software project. A vital part of the analysis activity is the construction of a 'model' describing 'what' the software has to do, and not 'how' to do it. Building prototypes to clarify the software requirements may be necessary.

The principal deliverable of this phase is the Software Requirements Document (SRD). The SRD must always be produced for every software project. Implementation terminology should be omitted from the SRD. The SRD must be reviewed formally by the users, by the computer hardware and software engineers, and by the managers concerned...

The Formal Methods Argument

Vagueness, inaccuracy and ambiguity have long been perceived as major problems in specifications. Bertrand Meyer's On Formalism in Specifications (IEEE Software, Jan 1985, 6-26) attempted to solve the real problems of writing requirements by moving to the precise language of mathematics:
A critique of a natural-language specification, followed by presentation of a mathematical alternative, demonstrates the weakness of natural language and the strength of formalism in requirements specifications.
Meyer identified 'seven sins of the specifier', which he listed as follows:
Noise
the presence in the text of an element that does not carry information relevant to any feature of the problem.
Silence
the existence of a feature of the problem that is not covered by any element of the text.
Overspecification
the presence in the text of an element that corresponds not to a feature of the problem but to features of a possible solution.
Contradiction
the presence in the text of two or more elements that define a feature of the system in an incompatible way.
Ambiguity
the presence in the text of an element that makes it possible to interpret a feature of the problem in at least two different ways.
Forward reference
the presence in the text of an element that uses features of the problem not defined until later in the text.
Wishful thinking
the presence in the text of an element that defines a feature of the problem in such a way that a candidate solution cannot reasonably be validated.
Every requirements engineer will recognise these 'sins' as ever-present dangers (though forward reference is not necessarily a sin if used explicitly). They are avoided by careful wording, discussion with users, and most importantly review. The text can be supported where necessary by diagrams and (in technical domains) by equations. Such natural-language specifications cannot, however, be replaced by specifications written entirely in set-theoretic or logical notation, for the sufficient reason that users (and probably also programmers) cannot be expected to understand such notation.

A deeper reason is that given by Einstein in his aphorism
'As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.'
(quoted by Fritjof Capra, The Tao of Physics, Wildwood 1975). For real-time systems, Einstein's aphorism means that no mathematical model can be perfect, for it would have to model the entire universe (in which all particles are connected by forces and quantum effects). For example, turbulent flows over the wing of an aircraft can be described well but never perfectly by simulation and modelling. Hence a formal specification of such a system cannot hope for mathematical exactness. This does not mean that precision is not worth striving for, in the interests of safer and better systems.

Meyer is unusual in recognising quite clearly the difference between the problem space and the solution space (as in his definition of overspecification, above). Unfortunately, his proposed formalisms apply in large part to precise specifications of design features, such as "sequences", which belong usually to the solution space. For example, the maximum line length of a sequence s is defined as
max_line_length(s) =
   max ({ j - i |
     0 <= i <= j <= length(s) and
     (forall k in i+1 .. j,
       s(k) ≠ new_line) })
This kind of low-level detail is far closer to code than to users, even if it is argued that it constitutes a specification rather than a solution. In fact, even the assumption that there will be a fixed line-length is part of a solution: as a counter-instance, lines on Web pages are as long as users make them when they resize browser windows and change font sizes. I would therefore assert that all such detail is a priori likely to be part of a design rather than part of a problem statement. The underlying user requirement for Meyer's sequences might be "Text must not disappear off the right of the screen", or "Text should wrap normally", if indeed it was expressed at this level at all. Formal methods have a definite and necessary place in system engineering, but that place is in the (later) solution phases of the life-cycle.
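Indeed, such a definition translates almost mechanically into executable code, which underlines how close it sits to implementation. One possible transcription, sketched in Python (my illustration, not Meyer's):

    # Longest run of elements of s containing no new_line: a direct
    # transcription of the set-theoretic definition above (illustrative only).
    def max_line_length(s, new_line="\n"):
        best = run = 0
        for element in s:
            run = 0 if element == new_line else run + 1
            best = max(best, run)
        return best

    assert max_line_length("abcd\nef") == 4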

Even practitioners in real-time safety-critical systems design have moved towards this point of view. Martyn Thomas, for example, in an article Can Software Be Trusted? (Physics World, Oct 1989, 30-33), wrote:
...if you can monitor more variables and take action quickly on what you find, you can control your process so that it is nearer its optimum efficiency. With cruder control mechanisms you may have to allow much greater safety margins, or to tolerate many more occasions when the controlled process is shut down by the safety system, either of which would cost a lot of money.
Thomas has here identified the driving force in real-time systems design: improved automation saves money. For example, aircraft can be made lighter and more fuel-efficient, but at the same time more dependent on their control systems. He goes on:
Unfortunately, programmable systems also bring novel problems of design, verification and assessment...

[The] specification may be wrong. It may inadequately reflect the task which, with hindsight, we would have wished the computer system to perform. If the specification is contradictory, we should be able to detect it with analysis techniques; if it is incomplete, we may have enough information to discover where (for example, it may not specify the required behaviour for all possible input values). If, however, the specification is functionally wrong, we can only hope to realise it by prototyping or testing (or by some other form of review by experienced humans). Otherwise our implementation will seek to satisfy the given specification as accurately as possible, with the result that the computer system will be 'correct' (with regard to the specification) but the overall system behaviour will be wrong.

When society asks how safe a system is, what can we say?
Thomas' perspective is important as it sets limits on the effectiveness of design methods. Even if these were perfect, safety-critical systems such as aircraft would continue to fail. Thomas is a definite proponent of formal methods, in their proper place:
It is mathematics which models precisely the nature of digital systems, and it offers a scientific basis for the development of computer systems. These methods are the way of the future, offering far greater certainty throughout the development process than the traditional 'craft' methods can provide.

However, formal methods do not address the whole problem, because there is clearly still the problem of understanding the controlled process, developing an adequate specification, analysing the system for potential hazards and the consequences of failure, and so on. And, of course, even a mathematical analysis or proof has some probability of being incorrect. Formal methods should be viewed as a way of strengthening the development process.

The Object-Oriented Challenge

In a famous paper, Structured Analysis and Object-Oriented Development are not Compatible (ACM Ada Letters, Nov/Dec 1991, 56-66), Donald Firesmith (author of the OPEN Process Framework) argued powerfully against the mixing of techniques, and claimed that functional analysis was obsolete:
It is the thesis of this paper that traditional Structured Analysis, even when modified for real-time systems, is a technically obsolete way to specify software requirements, and that Structured Analysis and Object-Oriented Development are fundamentally incompatible and unnecessarily difficult for software engineers to combine effectively.
I believe that this claim is based on a failure to distinguish between user and system (or software) requirements. It is certainly true that it is awkward to move from a set of (functional) dataflow diagrams describing a system to a set of objects implementing that system. Firesmith believes, with reason, that it is better to describe the system as a set of objects, identifying interfaces between such objects rather than between functions.

The approach breaks down, though, with user requirements. Users cannot immediately identify objects which will persist throughout the system life-cycle. They must first describe the problem which the system has to solve, before anyone can sensibly propose a system, even in outline, which might be decomposed into objects which can solve the problem. For example, it has been argued for the case of a traffic automation system that objects such as "Traffic Signals" and "Crossroads" can be assumed to persist throughout development. This confuses the solution (a crossroads, safeguarded by automatic signals) with the problem (to get cars and people across a road safely). Objects such as signals cannot be identified in the problem space, because they do not exist there. Alternative solutions such as roundabouts (circular junctions with no lights) or highway junctions (overpass/underpass with slip roads) address the same problem with entirely different objects.

It is thus not easy to see how object-oriented analysis could ever be extended to cover the start of the system life-cycle, the user requirements phase. As for the question of whether object-oriented methods will completely supersede functional analysis in system and software requirements phases, time will tell. Now, several years after Firesmith's paper, the methods coexist. Object-orientation has delivered excellently on some projects and poorly on others; projects that plan to use objects in design do well to use them for system analysis also. 

The Ethnological Approach

As with all trends, it is hard to put a precise date to the use of ethnological techniques in system development. The classical text is perhaps H. Garfinkel's Studies in Ethnomethodology (Prentice-Hall, 1967), though Enid Mumford's sociotechnical work, including her ETHICS development method (1979 and later) may have been more influential.

A fascinating paper on the use of ethnology in the development of air traffic control systems, Faltering from Ethnography to Design (John A. Hughes, David Randall and Dan Shapiro, CSCW 92 Proceedings, 115-129, Nov 1992, © ACM 1992; extract used by permission of the Association for Computing Machinery) shows how the approach has deepened since the 1980s, and the problems that remain:
The aim of this paper is to explore some ways of linking ethnographic studies of work in context with the design of CSCW systems. It uses examples from an interdisciplinary collaborative project on air traffic control. Ethnographic methods are introduced, and applied to identifying the social organization of this cooperative work, and the use of instruments within it. On this basis some metaphors for the electronic representation of current manual practices are presented, and their possibilities and limitations are discussed.

There is an almost universal agreement in CSCW on the desirability of interdisciplinary design. While the integration of cognitive psychology in HCI has a substantial body of achievements on record, the objective now is to incorporate perspectives on the social organization of work - at a theoretical but, more importantly, at a practical level - into the canon. With some exceptions, however, these remain pious hopes, or else independent analyses of work at some remove from the context of real system design. This paper makes a modest contribution to realising the goal of integration. It does so by taking a particular design project: a prototype system for air traffic control; and by carrying an ethnographic analysis of the social organization of the work involved through to specific implications for the design of a computer-based system to support that work...

The [air traffic controller's cardboard] strip turns out to be a vital instrument for ATC. It is possible, and on occasion necessary, to perform ATC with the strips but without radar. To do so with radar but no strips, however, would pose extreme difficulties. When a controller gives an instruction to a pilot, for example to ascend to flight level 220, s/he marks this on the strip: in this case, with an upwards arrow and the number 220...

Our work has now reached a stage where we are generating system interfaces whose design has been informed by the ethnographic observations. We have found that the information provided by ethnography is essentially background information which has provided a deeper understanding of the application domain. The ethnography did not result in specific, detailed systems requirements; rather it provided pointers to appropriate design decisions...

As well as influencing the systems design process, we believe that the ethnographer has a further role as substitute user during initial system validation. User-centred design where users participate in the interface design process from an early stage in that process is likely to lead to more effectively usable user interfaces. However, a serious constraint in the practice of user-centred design is the availability of users, particularly when these users are an expensive and scarce resource. This is a particular problem during early stages of the design process where the design is unlikely to satisfy the user's requirements so the user gets little reward from participation...

Software engineers and sociologists can work together effectively. However, there is a wide gulf between these disciplines, and entrenched philosophical positions will probably ensure that that gulf cannot be bridged. Effective inter-disciplinary cooperation requires much flexibility on both sides and requires both sides to question their own assumptions and working methods.

The Scenario Approach

The idea of using scenarios, physical sequences of activities actually carried out to achieve some desired goal, is surprisingly recent. The idea plainly has its origins in the Taylorian time-and-motion studies to improve efficiency in early 20th century mass-production factories. It is possible that a reaction against Taylorism has delayed the use of scenarios in discovering and prioritising requirements.

A text devoted entirely to the approach is Scenario-Based Design, edited by John M. Carroll (Wiley, 1995). Despite its title, it provides quite a rich discussion of the use of scenarios in requirements analysis and dialogue between developers and users: both critical topics. The chapter Rapid Prototyping of User Interfaces Driven by Task Models by Peter Johnson, Hilary Johnson and Stephanie Wilson introduces the subject in this way:
In the fictional country of Erewhon, Samuel Butler presents many scenarios of life and work in a society in which people serve the needs of machines rather than machines serving the needs of people. In his novel Butler warned of the dangers of isolating the design and development of technology from people and society, and from any considerations of the effects of technology on people's lives. While in Erewhon the technology was that of steam engines and mechanical devices, the serious consequences of focusing design solely on machines rather than on people's needs, and how those needs might be best served by the design and development of machines, were clearly envisioned. People became the servants of machines. The task of looking after the machines became the prime focus of work, with little consideration for subsequent effects on the quality of the work experience or indeed the quality of the end product.

The design and development of technology to improve the quality of work and the quality of the products of work require us to pay close attention to the nature of the work, and to be explicit about how any technology that we design might affect people and their work. The mistake is to believe that computer system designers are only responsible for the software and hardware that they produce. System design must include taking responsibility for the total system, comprising people, software, and hardware. Understanding how the software and hardware will affect people in terms of how they could use it, what tasks it will and will not be good for, what changes it will require from users, and how it might improve, impair, or otherwise change work, are all issues that must be addressed in design and are fundamentally the responsibility of the system designer.

Taking seriously the concerns of people who use or are otherwise affected by a computer system will require changes to the practices and methods of system design. Understanding users and their tasks is a central concern of the system designer. Scenarios provide explicit user and task information, which can better equip the designer to accommodate the rich perspective of the people and work for which he or she is designing computer systems.

In developing the technology of computer systems it is often necessary to focus upon properties of the technology. However, in developing systems that are intended to be used by people in the varied contexts of their work, private, social, and leisure activities, the focus of design must be on the suitability of the designed artifact to support and complement human activity. Developing technology that serves the needs of people, rather than vice versa, requires a revised conception of computer system design. The activities of design must still allow systems to be well engineered such that they are reliable, efficient, and easily maintained; however, over and above this, the people who will use and be affected by the design in the context of its usage must be sharply in focus in the design process. Scenarios that include rich information about users and their tasks can bring such a focus to the design. Scenarios are snapshots of human activity from which the designer can gain understanding about users and tasks, and through which users can see how any design is likely to affect them and their work. Users and designers need to be able to communicate with each other to understand the domain, users, and tasks and the possible design-induced changes that may come about.
The style of such writing (sociological and literary) surely forms part of the psychological barrier to the adoption of the techniques advocated.

R.J. Wieringa's more conventional textbook Requirements Engineering: Frameworks for Understanding (Wiley, 1996) mentions scenarios in only a few places and is almost dismissive of the approach, but still manages to be characteristically clear in its description:
The client often finds it easy to describe scenarios that the SuD [System under Design] will have to participate in. These are a rich source of material for the construction of behavior specifications and, at a later stage, for the validation of specifications as well as for acceptance testing of the product.
Richard Stevens' Structured Requirements course (© 1993-5 QSS/REL Ltd) takes a more positive attitude, placing scenarios firmly as the basic method of structuring user requirements (whereas system requirements are organised primarily as a hierarchy of the system's functions).
Structure is critical for handling all complex elements in the whole lifecycle... The basic structure of the user requirements is an operational scenario... the sequence of results produced through time for the users.
Stevens prescribes a procedure for obtaining a scenario:
The idea is to take a goal, such as "get $50 from teller machine", and work back through the subgoals which lead up to it, such as "identify oneself to machine" and "specify amount wanted". These steps can in turn be analysed further if necessary. The method then takes a twist:
Some results... do not occur in a linear sequence. While there is a baseline [sequential] process, there will also be conditionality, repetition, and asynchronous aspects to be considered. ... [Such] factors are links between requirements, representing time or logical relationships.
The idea that requirements may be expressed informally (in English or other natural languages) but linked with precision in a variety of ways seems to be unique. Users can effectively specify not only that a requirement normally follows another, but that certain requirements apply WHENEVER some condition occurs. The emphasis on sequence also makes it necessary to allow that some requirements are asynchronous, inherently parallel. For example, the engine and the gearbox of a car must both be assembled and tested before being mounted in the car's frame, but there is no implied sequence between the two processes. It is no accident that QSS' requirements engineering tool, DOORS, has been designed to organise requirements hierarchically with cross-links.
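Such typed links can be pictured concretely. A hypothetical sketch in Python (the structure and names are mine, not the format of Stevens' course or of DOORS):

    # Requirements in plain English, joined by typed links: normal sequence,
    # a WHENEVER condition, or deliberate absence of ordering (parallelism).
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        text: str
        follows: list = field(default_factory=list)   # normal sequence
        whenever: str = ""                            # applies on a condition
        parallel_with: list = field(default_factory=list)  # no implied order

    identify = Requirement("The user shall identify himself to the machine")
    amount = Requirement("The user shall specify the amount wanted",
                         follows=[identify])
    dispense = Requirement("The machine shall dispense the cash",
                           follows=[amount])
    retain = Requirement("The machine shall retain the card",
                         whenever="three identification attempts have failed")

    engine = Requirement("The engine shall be assembled and tested")
    gearbox = Requirement("The gearbox shall be assembled and tested")
    engine.parallel_with.append(gearbox)   # both precede mounting, in no order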

It seems a safe bet that we will hear much more about scenarios in requirements engineering as more organisations latch on to the power and economy of the scenario approach. 

The Management of Change

More recently, Ed Yourdon writing in Byte's State of the Art pages, When Good Enough is Best (Byte, Sept 1996, 85-90), pointed up the huge problem of specifying and building systems in the face of constant change. A change in a requirement may have impact throughout a system; if that system is not even complete, the combined effect of multiple changes may be disruptive. It may be best to build systems rapidly and imperfectly (to meet an early specification) rather than to attempt to build a perfect system, only to have to modify it constantly before release:
Our expectations for bug-free software developed on time, within budget, and with every feature that end users have asked for is unrealistic. By definition, these goals impose four interrelated constraints on developers, and by attempting perfection in one area, developers will likely create havoc in at least one other area...

That's why just producing an up-front specification for good-enough software isn't enough. You and your end-users must engage in a dynamic reevaluation of the specification, because most development projects take place in what author James Bach justly calls a "mad world". On a daily basis, the project team may find that the hardware has changed, the tool vendors have changed (some bankrupt, others brand new), government regulations have changed, the economy has changed. Consequently the user's requirements for the system change, and the design and implementation must change, too.
Apart from the hinted blurring of the user/system requirements distinction, this is an admirably clear expression of the challenge of change. Yourdon suggests that "prototyping (i.e. rapid iterative development) is entirely compatible with mad-world, good-enough software development", and argues convincingly for
requirements management, an activity that comes before the detailed analysis and design modeling handled by tools...
Yourdon identifies prioritisation as the key response to change. Not everything can be done in limited time with limited money and effort, so he proposes three questions to ask the customer to evaluate priorities effectively. In his view,
the tools used for this activity are word processors, because users typically describe their requirements in imperative English sentences like "the system must do X, Y, and Z". Associated tools such as RTM, DOORS, and Requisite Pro can help describe the priority, cost, risk, and other attributes of each requirement... and help ensure that the team remains focused on the must-have features.
This viewpoint takes in the need to give requirements attributes so as to prioritise them effectively, but seemingly ignores both the need to structure requirements into a comprehensible model (whether of the problem for users, or of the system for developers), and to link related requirements and constraints together. The implication that all systems should be developed in a prototyping style similarly cannot be taken absolutely literally. What all developers can surely agree with is the belief that close attention must be paid to user requirements: and since these can change at short notice, development must have a suitably short cycle time (or "wavelength") to enable a continuing and effective requirements engineering dialogue.
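As a toy illustration (the data and selection rule are invented, not Yourdon's): once requirements carry attributes such as must-have status, cost and risk, triage against a budget becomes mechanical, while setting the attributes remains a dialogue with users.

    # Keep the must-have requirements, then spend the remaining budget on the
    # cheapest low-risk extras (attribute values here are invented examples).
    reqs = [
        {"id": "R1", "must": True,  "cost": 5, "risk": "low"},
        {"id": "R2", "must": False, "cost": 3, "risk": "high"},
        {"id": "R3", "must": False, "cost": 2, "risk": "low"},
    ]
    budget = 8
    chosen = [r for r in reqs if r["must"]]
    spent = sum(r["cost"] for r in chosen)
    for r in sorted((r for r in reqs if not r["must"]),
                    key=lambda r: (r["risk"] != "low", r["cost"])):
        if spent + r["cost"] <= budget:
            chosen.append(r)
            spent += r["cost"]
    print([r["id"] for r in chosen])    # ['R1', 'R3']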

Postscript

This review has given examples of quite diverse viewpoints on system development. Through all the differences, their authors share a common concern for accurate definition of systems. Understanding of how to achieve this has broadly increased in sophistication by moving away from the "soldering iron and wire-wrap gun" (Frederick Brooks) to the detached, problem-oriented definition of user requirements.

© Ian Alexander 1997