Modelling Argumentation, Toulmin-style

Ian Alexander

Requirements are hard to engineer. Unlike many other things in engineering, they depend on the intentions, understanding and agreement of human stakeholders. Once the requirements are known, you may be able to determine analytically the best choice of material for a design, or to prove that a software specification is correct. But with requirements there is no such mathematical certainty. Usually, a requirement is 'right' only when the people involved feel that it is what they want – as long as it's possible within the laws of physics. The problem is, different stakeholders often disagree.

For example, safety-related systems are typically designed and manufactured by a company or consortium, tested (often independently) and then certified as safe for the public to use by an independent authority. The company has to convince the authority by a mixture of evidence and argument that the product deserves a safety certificate. The usual arrangement is that the company puts together an extended argument known as a safety case. The authority examines the argumentation, and either grants a certificate or raises objections. The company tries to overcome the objections, either by reworking its arguments, or by adding arguments that rebut the objections. The process can become long and complex, but a safety authority is an exceptionally powerful stakeholder, and its views cannot be ignored.

As another example, the process of engineering the requirements for a product involves prioritising individual requirements or features, and selecting a suitable subset of them for each product release. This is very much a matter of opinion and judgement, as no-one knows for certain – in advance of the judgement of the marketplace – what will in fact 'sell' the product. If you're making a handheld communication and entertainment device, should you build in colour faxing for the next release? What factors should you consider? What evidence can be brought to bear on the decision?

Argumentation, then, is a crucial but all too often neglected part of good requirements practice. It isn't a matter of formal Aristotelian logic ("Customers like faxing. Customers buy products they like. We want to sell products. ∴ our products must include faxing.") – the syllogism is false. We don't know exactly what people like, even if they say so in an interview or on a form. We don't know how they'd respond to a novel combination of features, and nor do they. There are many possible solutions and we can only reason about them in probabilities, few of which we can really put numbers to.

Fortunately, classical logic and Aristotle's syllogisms are not the only way we can arrive at a conclusion – that approach only works when things are definite, and in requirements work they rarely are (the best available knowledge may be something like "when a customer says she likes a product, she has a probability of 0.4 of buying it"). The philosopher Stephen Toulmin, in his book 'The Uses of Argument' (Cambridge, 1958 – and many times reprinted since), sets out a strong case for a different kind of argument, more like what we use in daily life or in a court of law.

A Toulmin argument sets out to establish a Conclusion on the basis of the available Facts. However, why should anyone see the Facts as relevant to the Conclusion? We need a Warrant for believing that the Facts support the Conclusion, in other words an argument that connects them. This argument may in turn need to be supported with some Backing, essentially another argument – and in its turn that could be supported by further arguments and further facts. Similarly, the conclusion may be established more or less definitely (Toulmin allows the use of a Qualifier like 'probably' or 'almost certainly'), unless objected to by a Rebuttal, which is itself just another argument. In other words, we can take the basic Toulmin pattern of argumentation (as illustrated in Figure 1), and wherever we see a term like Warrant, Backing, or Rebuttal, we can replace it by a whole Toulmin argument with facts and warrants and possible rebuttals. In short, we can nest arguments within arguments to form a complete chain, or more exactly a tree, of argumentation.
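To make the recursive structure concrete, here is a small sketch of a Toulmin argument as a data type – in Python rather than any particular tool, and with names and fields of my own choosing for illustration. The key point is that the Warrant, Backing and Rebuttal slots can themselves hold whole Arguments, so arguments nest into a tree:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Argument:
    """One Toulmin argument. Warrant, backing and rebuttals may
    themselves be Arguments, so arguments nest into a tree."""
    conclusion: str
    facts: List[str] = field(default_factory=list)
    qualifier: str = ""                       # e.g. 'probably'
    warrant: Optional["Argument"] = None
    backing: Optional["Argument"] = None
    rebuttals: List["Argument"] = field(default_factory=list)

def nesting_depth(arg: Argument) -> int:
    """Longest chain of arguments-within-arguments."""
    children = [a for a in [arg.warrant, arg.backing, *arg.rebuttals] if a]
    return 1 + max((nesting_depth(c) for c in children), default=0)

# The faxing debate, cut down to a single nested level
# (the statements themselves are invented for illustration):
faxing = Argument(
    conclusion="Include colour faxing in the next release",
    qualifier="probably",
    facts=["Customers mention faxing in interviews"],
    warrant=Argument(
        conclusion="Stated preferences indicate likely purchases",
        facts=["Survey responses correlate with sales"]),
    rebuttals=[Argument(
        conclusion="Faxing is obsolescent; email attachments dominate")])

print(nesting_depth(faxing))  # → 2
```

Terminating the recursion is natural: an argument with no warrant, backing or rebuttals is simply a leaf of the tree.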

Figure 1: Structure of a Toulmin Argument

Requirements engineers often have to deal with tree-like data structures, so tools like Telelogic DOORS are designed to handle them. Figure 2 gives a simple example of how the argumentation for and against including a feature can be represented in a DOORS formal module.

Figure 2: Representing the Debate about a Product Feature in DOORS

You can see that there is a hierarchy of statements in the main text, with a column representing the Toulmin Argument Type on the right. It is then a simple matter to assign colours to the different argument types (using a DOORS enumeration type).
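The idea of the enumeration can be sketched outside DOORS as well – here in Python, purely as an illustration. The green-for-Conclusion and red-for-Rebuttal choices come from the discussion of Figure 3 below; the other colours are placeholders of my own:

```python
from enum import Enum

class ArgType(Enum):
    """Toulmin argument types, each mapped to a display colour.
    Green (Conclusion) and red (Rebuttal) follow the article;
    the remaining colours are illustrative placeholders."""
    CONCLUSION = "green"
    FACT = "blue"
    WARRANT = "yellow"
    BACKING = "grey"
    QUALIFIER = "white"
    REBUTTAL = "red"

def colour_of(type_name: str) -> str:
    """Look up the display colour for an argument-type value."""
    return ArgType[type_name.upper()].value

print(colour_of("Rebuttal"))  # → red
```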

The argumentation can then be visualized directly as a tree simply by pressing the graphics (tree) button. However, that clips each item's text down to about 30 characters, and every item is drawn as a plain rectangular box.

A more elegant alternative is to make a Dialog Box using the DOORS extension language (DXL), as shown in Figure 3.

Figure 3: Displaying Argumentation using a DXL Window

I explained in an earlier article (Get More from DOORS with DXL Graphics) how to represent a Decision Tree hierarchy in this way, and the approach here is broadly similar. The code can be downloaded from the Scenario Plus website.

Essentially, the algorithm walks (recursively) down the tree, positioning each item according to the number of items at each level, and linking it with a line to its parent whose position is already known. I've chosen to space the items as widely as possible by sharing out the available width (of the graphics canvas) equally. That means that the diagram can be adjusted simply by resizing the window. Another approach is to fix the position of each item, and to allow the user to scroll the canvas to see hidden items. Either way, the diagram is then constructed fully automatically from the textual model, so no special skill is needed to create or update the model – other than a clear head for argument.
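The DXL itself is on the Scenario Plus website; purely to illustrate the layout idea (this is not the actual DXL), the same algorithm can be sketched in Python, with each item as a dict and the canvas width normalised to 1.0. Item texts are assumed unique, since they serve as keys:

```python
from collections import deque

def layout(root):
    """Assign an (x, depth) position to every item in an argument tree.
    Items at each level share the available canvas width (normalised
    here to 1.0) equally, so resizing the window simply rescales x.
    Parents are positioned before children (breadth-first), so each
    child can be linked by a line to an already-placed parent."""
    levels = {}                          # depth -> items, left to right
    queue = deque([(root, 0)])
    while queue:
        node, d = queue.popleft()
        levels.setdefault(d, []).append(node)
        for child in node.get("children", []):
            queue.append((child, d + 1))
    positions = {}                       # item text -> (x, depth)
    for d, nodes in levels.items():
        share = 1.0 / len(nodes)         # equal share of the width
        for i, node in enumerate(nodes):
            positions[node["text"]] = ((i + 0.5) * share, d)
    return positions

tree = {"text": "Conclusion", "children": [
    {"text": "Fact"}, {"text": "Rebuttal"}]}
print(layout(tree)["Conclusion"])  # → (0.5, 0)
```

With two items at the second level, each gets half the width, centred at x = 0.25 and x = 0.75; adding a third would automatically re-space them at 1/6, 1/2 and 5/6.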

It is common practice to use several ways of getting a model's message across in graphical form, and of course with a tool it is simple to allow users the option of switching these on and off as desired. In Figure 3, I have chosen to use three ways of depicting an object's Argument Type simultaneously.

There are in fact several proprietary methods and tools for representing various kinds of argumentation (see 'Visualizing Argumentation' by Paul Kirschner, Simon Buckingham Shum and Chad Carr, Springer, 2003 for some non-safety examples). I have therefore chosen to stay close to Toulmin's book. It seemed natural to use a box (the metaphor being 'solid, established') for Conclusion, and to make it green (with cultural associations like 'go, success'); conversely the Rebuttal box is a spiky hexagon, coloured red ('stop, warning').

Of course the choice of colours and shapes is essentially arbitrary, but you can judge for yourself, by comparing Figures 2 and 3, whether the graphical form of the model makes the argumentation easier to follow. You might also like to compare other forms in which you've received argumentation, such as technical notes and emails. How easy are they to reason from?

Modelling argumentation has other advantages. The steps in an argument can be linked to requirements as justifications, helping to protect them from hasty deletion when things get rough later in the project. It is also simple to calculate metrics, such as how many facts have been adduced. When major decisions are at stake, there are also good reasons for making arguments transparent and for making them available in an audit trail (such as with DOORS-style traceability and history) in case of later disputes.
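A metric such as the number of facts adduced falls straight out of the tree structure. As a hedged sketch (again in Python, with the tree as nested dicts carrying a "type" field of my own naming):

```python
def count_of_type(node, wanted):
    """Count items of a given Toulmin type (e.g. 'Fact') in an
    argument tree -- a simple metric for how well supported a
    conclusion is."""
    here = 1 if node.get("type") == wanted else 0
    return here + sum(count_of_type(c, wanted)
                      for c in node.get("children", []))

# A toy debate: a conclusion resting on two facts, one of them
# supporting the warrant, plus one rebuttal.
debate = {"type": "Conclusion", "children": [
    {"type": "Fact"},
    {"type": "Warrant", "children": [{"type": "Fact"}]},
    {"type": "Rebuttal"}]}

print(count_of_type(debate, "Fact"))  # → 2
```

A conclusion with many facts and no unanswered rebuttals is, at a glance, on firmer ground than one resting on a single assertion.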

It is a curious fact that the idea of graphical modelling of argumentation is nothing new: John Henry Wigmore devised his Chart Method for analyzing legal evidence in 1913! It's illustrated on page 6 of 'Visualizing Argumentation'.

Today, engineers join lawyers in having to wade through many pages of complex arguments, often very poorly presented. Perhaps systems could be made better and safer if more of them were specified with tool support, and if the essential arguments were modelled graphically, such as with the simple Toulmin-style approach illustrated here.

© Ian Alexander 2003
