Negligence, Professional Competence and Computer Systems
In the event that the millennium bug bites strongly, many will question whether or not the software industry itself could have done more to prevent the problem and the extent of its responsibility and culpability. This paper considers the on-going concern about standards, particularly in relation to safety-critical systems, within the software industry and the consequent debate about competence and professionalism. The definition of the term 'professional' is reviewed and its importance is assessed in the context of the role of professional codes of practice prepared by the relevant professional organisations. The discussion then moves on to a comparison with the standards required by the courts in professional negligence cases and the way in which all of these factors may serve to enhance the attributes of both computer practitioners and software engineers into the 21st century.
Keywords: professions, professional negligence, competence, codes of practice, duty of care, safety-critical systems.
This is a Refereed Article published on 30 June 1999.
Citation: Rowland D, 'Negligence, Professional Competence and Computer Systems', 1999 (2) The Journal of Information, Law and Technology (JILT). <http://elj.warwick.ac.uk/jilt/99-2/rowland.html>. New citation as at 1/1/04: <http://www2.warwick.ac.uk/fac/soc/law/elj/jilt/1999_2/rowland/>.
As computer control becomes usual for many applications, it becomes apparent that the failure of systems containing software is likely to have an impact on many people who are not a party to the contract to supply that software. They may suffer economic loss or physical injury. Whether or not there are failures attributable to the so-called 'millennium bug', there have already been failures of computer systems which have led to injury, or for which physical damage is a foreseeable possibility. The total number of systems which rely on computer control and whose failure has the potential to lead to physical damage, injury or environmental catastrophe is enormous and increasing all the time. In May 1997 it was reported that software problems were delaying the completion of the world's most advanced air-traffic-control (ATC) centre, intended to replace the existing West Drayton centre and probably several others (Doyle A; 1997). The proposed centre was said by National Air Traffic Services (NATS) to be 'the largest and most advanced development of its kind in the world'. The problems were caused by an unusually high number of 'bugs' which needed to be removed from the 1.82 million lines of software code at the heart of the system. Whilst it is not surprising (indeed, as discussed later, it is inevitable) that such software contains some errors, it is clear that the software for such applications would be expected to be of the highest integrity given the potential for disaster in the event of failure. The complexity of the system can be appreciated further from the fact that the system was designed to fulfil 3,300 'functional requirements' and work on no fewer than 203 workstations. According to the report, much of this software was apparently written by staff with no previous ATC experience.
This was also one of the problems identified in one of the most highly publicised system failures of this type, that of the Computer Aided Despatch (CAD) system of the London Ambulance Service (LAS) in October and November 1992 (LAS Report; 1993).
Has increasing computerisation resulted in systems which, like Frankenstein's monster, are capable of effects which far exceed both the expectations and the intentions of their creators? The purpose of this paper is to examine the role, responsibility and liability of the industry itself for the consequences of malfunction and failure of the systems it creates and, in particular, to consider both the relationship between the competence and 'professionalism' of the relevant computer practitioners and the interrelationship between self-regulation via professional codes of practice and the standards of professional competence set out by the courts in cases of professional negligence.
What is the legitimate expectation of someone who contracts for a complex computer system for use in the process industries, medical applications or transport systems and so on? To what extent are these legitimate expectations of the contractor modified by whether or not they consider they are dealing with a 'professional' as defined? At a basic level the expectation is that 'professionals' are specialists who know what they are doing. Arguably there is also an expectation that if someone suffers they will have a remedy as 'it should not be forgotten that the law not only meets expectations, if reasonable, but also engenders them, and should see to it that the expectations it engenders are reasonable' (Weir T; 1994). What level of competence might then be expected from someone holding themselves out as possessing a special skill? Do these expectations depend on whether or not that occupation could be classified as a profession? Can this be equated with the standard required of the professional in law?
Computer systems, in common with other types of system, have the propensity to fail for a variety of reasons. Not the least of these is that all software, like that in the ATC system referred to above, contains bugs, some of which can be detected and removed by testing and debugging. Exhaustive testing is out of the question due to the complexity of the systems, and the scope and extent of testing will depend, inter alia, on the type of application together with questions of resource allocation. What will be unique about year 2000 failures is that systems used for a multitude of applications, in a variety of sectors of industry and commerce, will be susceptible to failure for the same reason.
The millennium bug, however, is not quite like the usual 'error'. As is now well known, the Y2K problem has arisen as a consequence of the desire to save computer memory space by denoting the year as two digits only. This was some decades before the millennium and for some applications it may have been the only realistic way of proceeding at a time when memory space was at a premium. Presumably the millennium seemed very distant at this time even though, had software designers directed their minds to the issue, they might have realised that the date rollover would create problems. Indeed, there is evidence that for a short time at the dawn of programming only one digit was used in the date field and that the first decade rollover would have created similar problems if they had not been foreseen in time (see Gerner M; 1996 <http://www.compinfo.co.uk/y2k/mgerner.htm>). Clearly this was on a very much smaller scale but serves to illustrate that the problem is by no means novel. The lack of novelty of the problem is also revealed by documented cases of other date and calendar bugs which frequently occur at leap years and other discontinuities (see e.g. Neumann P; 1995 p. 85). Why then did the industry continue to use this method even when memory had become much less expensive? As modern society came to rely more and more on computer systems to control its communications, its financial institutions, its transport systems, its industries and a host of other important functions, was it not careless in the extreme for the software industry to continue to store the date in this way long after the reason for so doing had ceased to subsist?
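The practical effect of the two-digit convention can be made concrete with a short illustrative sketch (the function name and figures below are hypothetical, drawn from no system discussed in this paper): any arithmetic that treats the stored year as a complete value breaks at the century boundary.

```python
# Illustrative only: two-digit year storage and its failure at rollover.

def age_in_years(birth_yy, current_yy):
    """Naive age calculation on two-digit years, as was once common
    when memory space was at a premium."""
    return current_yy - birth_yy

# Within a single century the shortcut is harmless:
print(age_in_years(65, 99))  # born '65, computed in '99 -> 34

# At the rollover, '00' compares as less than '99' and the result
# becomes nonsensical:
print(age_in_years(65, 0))   # born '65, computed in '00 -> -65
```

The same inversion afflicts sorting, interest calculations and expiry checks wherever the truncated year is compared or subtracted directly.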
For embedded systems in particular, the advent of real-time clock chips in the early 80s meant that system developers rarely had to address their minds to the issue of date - even though it would have been possible to add lines of code to take account of the missing two digits. At what point does it become negligent not to include 4 digits in the date field? What happens when we compare the situation for software written in 1975, 1985 and 1995? When should software designers have had the year 2000 in their contemplation and taken into account the ability of their systems to process the corresponding date change? This question raises a number of subsidiary questions related to the competence of those who design and write software, particularly for safety-related applications. These are not new questions but the scale on which they might be asked following the millennium gives an urgency to their resolution which may have been lacking until now.
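The 'lines of code to take account of the missing two digits' alluded to above could, on one widely used remedial approach known as windowing, infer the century from a pivot year rather than widen the stored field. The sketch below is a hypothetical illustration; the pivot value and names are assumptions, not anything prescribed in the text.

```python
# A minimal sketch of date 'windowing': interpret a stored two-digit
# year relative to a chosen pivot instead of assuming the 1900s.

PIVOT = 30  # hypothetical choice: 00-29 read as 20xx, 30-99 as 19xx

def expand_year(yy):
    """Map a stored two-digit year onto a full four-digit year."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(99))  # -> 1999
print(expand_year(0))   # -> 2000
```

Windowing is, of course, only a palliative: it merely relocates the discontinuity to the pivot year, which accords with the observation above that date and calendar bugs recur at such discontinuities.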
The software industry is not unaware of the concerns raised by issues of competence. Neither has it escaped notice that, for instance, those who fly aircraft are subject to far more stringent regulation than those who design the programs responsible for the flight, and there have been increasing calls from within and without the industry for both external and internal regulation and for the 'professionalisation' of these activities. To what extent does such a debate contribute to the improvement of standards, the social awareness of practitioners and so on? How does it, or might it, affect questions of potential liability, and could any of the predicted problems have been prevented by embracing the tenets of professionalism? In brief, does it matter whether or not we define a category of computer professional? Before becoming embroiled in the debate over definition, it is worth noting that, despite its apparent importance to some sections of the software industry, professional status may not provide a universal panacea to problems of quality, and some commentators have warned that such attributes may be overrated. Thus Powell (1996), for instance, goes so far as to suggest that 'the word 'professional' has become less distinct in its connotations and unsatisfactory as a classification of occupations' and Larson (1977) warns that 'the implicit assumption that the behaviour of individual professionals is more ethical as a norm, than that of individuals in lesser occupations has seldom, if ever, been tested by empirical means.'
What makes a profession? As long as the classification of 'professional' remains, there will be debate over the definition and, in particular, the distinction between professions and other occupations. Historically only a few professions were acknowledged and the relevant professional bodies had an apparently relaxed approach to the organisation of their members. During the twentieth century it has been possible to detect a gradual evolution both in the nature and character of a profession and the perceived attributes of those who practise it. Harris (1995) points to the consumer society and the associated demise of deference towards professionals as one possible reason. This suggestion may also serve to explain the increasing incidence of negligence claims made against professionals noted by the Likierman report (1989). The growth of the consumer society has been enhanced by the rapid progress of technology, particularly during the latter half of the twentieth century, and this technological progress has also spawned societal changes that might not have been conceived in an earlier age. These changes have in themselves created a host of new trades and occupations, some of which have metamorphosed into professions or are desirous of joining this select band.
Nonetheless, an accepted and acceptable definition has proved elusive. Indeed Elliott (1972) has remarked that 'the quest for a watertight definition of a profession ... is a quest for an empirical ideal which can only exist in a Platonic heaven'. Cogan (1955), in a paper produced for a symposium on the formulation of ethical standards of practice within specific professional groups, commented that 'to define 'profession' is to invite controversy'. The fact that the subject is so contentious is probably a reflection of the importance that particular occupations invest in the status of profession, but, notwithstanding his earlier comment, Cogan went on to refer to three separate types of definition which serve different purposes. Historical or lexicographical definitions can be used to define the boundaries, however tentative, which through customary usage have separated professions from other occupations. Persuasive definitions may influence attitudes and be used to plead the case for a particular profession. A powerful persuasive definition may be essential to the occupation seeking to be admitted to the hallowed status of profession. Finally, Cogan refers to operational definitions detailing those attributes which distinguish professions including, for example, rules of conduct, rules governing relations with client, public, colleagues etc., educational qualifications, requirements for admission, standards of competence, and so on. The discussion that follows will refer to aspects of all these definitions but the line between 'profession' and 'occupation' remains indistinct, as does the significance of the distinction. Despite the fact that a number of commentators have suggested a range of characteristics claimed to be exhibited by professions, it seems that there is no general consensus over which of these might be mandatory and which merely desirable.
Jackson (1997) suggests that clarification of the definition is more a matter for social historians and sociologists than lawyers but has nevertheless crystallised the desired attributes into four particular categories. The nature of the work should include specialist mental (as opposed to manual) work requiring a qualifying period of theoretical and practical training. Moral aspects include the provision of a high quality service and a duty to the wider community which may eclipse the duty to the client. Professions are usually regulated by a collective organisation which is self-governing and independent. The final attribute on this analysis is the recognition by society of the status of the particular profession.
A different approach to definition is evident in the work of Macdonald (1995) who suggests that professions only evolved as a response to the recognition of the importance of knowledge in its own right. Professions can thus be viewed as 'occupations based on advanced or complex, or esoteric, or arcane knowledge.' This indicates that the key aspect of a profession is its knowledge base and on such a model the salient issues are the nature of, the socio-cultural evaluation of, and the strategies for handling that knowledge (Macdonald K A; 1995 p. 160). Protection of the knowledge base is accomplished by the pursuance of four goals: control over admission to and training for the profession, protection of the scope within which the profession has the right to practise, the imposition of the professional rules on practitioners and the defence and enhancement of the profession's status. The reliance by clients on a particular knowledge base may demonstrate one way of distinguishing professions from most other occupations. This fact has certainly been influential in a number of decisions on professional negligence in the courts as exemplified in the classic case of Hedley Byrne v Heller (1964).
Despite Jackson's comment, referred to above, that the definition of profession might be more appropriately discussed by sociologists than lawyers, there are numerous examples of judicial comment on the nature of professions and professionals. As early as 1919, Scrutton L.J. did not confine the definition of profession to one possessing purely intellectual skill (Commissioner of Inland Revenue v Maxse (1919; p. 657)) but believed that it could also include one who exercised manual skill as long as it was controlled by the intellectual skill of the practitioner. He also hinted that this might not be an easy distinction to make and gave tacit approval for an extension of the number of professions in the comment 'the line of demarcation may vary from time to time.' Scott L.J. in Carr v Inland Revenue Commissioners (1944; p. 165) agreed that 'intellectual qualification' was too narrow a pre-requisite for a professional, ignoring as it did aesthetic and literary qualifications which might also be utilised by one practising a profession. He did, however, criticise Scrutton L.J.'s formulation as 'too sweeping' (p. 166), preferring instead that of Lord Sterndale M.R. in Currie v Commissioner of Inland Revenue (1921; p. 336), who considered that the issue was one of fact and degree in every case.
Also in Carr (1944; p. 166), Du Parcq L.J., after warning that there may be many people 'whose work demands great skill and ability and long experience and many qualifications who would not be said by any to be carrying on a profession', went on to suggest an objective test based on the question 'would the ordinary man, the ordinary reasonable man - the man, if you like to refer to an old friend, on the Clapham omnibus - say now, in the time in which we live, of any particular occupation, that it is properly described as a profession?' This encapsulates the notion that what is or is not regarded as a profession is dependent on the society of the relevant time. Notwithstanding his earlier warning, Du Parcq also suggested more explicitly that activities which might be considered as trades or vocations in one particular era might subsequently acquire the status of profession. He was, however, also at pains not to give the impression that a finding that a particular practitioner could be regarded as a professional necessarily implied that the whole class to whom that person belonged should automatically be viewed in the same way.
Do any of the above putative definitions enable us to isolate a class of 'computer professional'? The term 'computer professional' does, potentially, cover a very disparate range of activities and functions and it is not clear that it is possible to isolate a coherent knowledge base which would cover all such 'professionals'. Thus some practitioners may fall within the definition and some without, a finding which certainly has resonance with Du Parcq's view in Carr above. However, matching some of the suggested attributes against some categories of computer specialist would appear to give a prima facie case for professionalism - some aspects of computer science certainly require specialist training and involve 'specialist mental work', they deal with a knowledge base which may be viewed by many to be both arcane and esoteric and is certainly complex. Although not represented by a single professional organisation, increasingly practitioners are members of one of a small number of organisations such as the British Computer Society (BCS) or the Institution of Electrical Engineers (IEE). A tacit goal of such bodies is the enhancement of the perceived status of members. They have all formulated codes of ethics and conduct for the guidance and instruction of their members (see e.g. <http://www.iee.org.uk/Profdev/Guides/conduct.htm> <http://www.bcs.org.uk/aboutbcs/overview.htm>); indeed some studies (see e.g. Anderson R et al; 1993) suggest that occupational associations may embrace such codes to further their recognition as a profession. It is not clear that all moral aspects are relevant to all practitioners but, as far as a duty to the wider public is concerned, an awareness of such a responsibility is certainly expected and is included in a number of the relevant codes.
On the above analysis, the definition of 'professional' may perhaps be applied to practitioners of the emergent subject of 'software engineering', an engineering discipline embodying the principles of computer science. This term seems to have originated in 1968 with the dawning recognition that 'building software demands a systematic disciplined approach rather than ad hoc tinkering' (Jacky J; 1996 p. 781). In addition, other branches of engineering have already been recognised as professions creating a prima facie case that later offspring of this parent might also be accorded the same status. Practitioners aspiring to the status of professional may also draw some comfort from the view of Elliott (1972) that 'whatever characteristics are chosen for the definition any candidate for membership of the category can only fill the bill more or less, never absolutely, but this does not make the concept useless ...'
As referred to above, a common feature, and often viewed as a defining aspect, of professional life is membership of an organisation which regulates professional activities and represents the profession as a whole. Indeed, Scrutton L.J. in Currie v Commissioner of Inland Revenue (1921) remarked that '... I myself am disposed to attach some importance in findings as to whether a profession is exercised or not to the fact that the particular man is a member of an organised professional body with a recognised standard of ability enforced before he can enter it and a recognised standard of conduct enforced while he is practising it.' There seems, however, to be no accepted template for a professional organisation. Sometimes there will be only one appropriate organisation, sometimes many. Organisations such as the IEE, BCS and others can perform this function on behalf of 'computer professionals' and software engineers. Most organisations can only monitor and review their own membership but some, such as the Law Society, can even set standards and exert control over non-members. In common with the delineation of profession itself, the role and function of such organisations is itself in a state of flux, particularly in relation to the setting of standards governing the quality of professional work (Wagner H A; 1955) which may need to be modified in response to societal and technological change. Although the adoption of specialism-specific codes of conduct detailing the standard of attitude and behaviour expected of members is now both common and expected, this is a relatively modern development. This attempt to be more proactive in regulation can be contrasted with the previous situation in which professional bodies pursued a much more reactive policy in the case of bad work or malpractice by their members (Harris B; 1995).
How much this change is the result of judicial debate about the role of professional organisations is unclear, but such codes may serve a useful function in setting standards of quality and providing reassurance for the consumer and the public at large that their interests are not ignored. It is a moot point whether or not adherence to their terms can provide a guarantee of the competence of individual members.
Bingham L.J. (dissenting in Eckersley v Binnie (1988) 18 Con LR 1 at 80) provided a detailed account of the level of knowledge and skill necessary for the professional person:
'... a professional man should command the corpus of knowledge which forms part of the professional equipment of the ordinary member of his profession. He should not lag behind other ordinary and assiduous and intelligent members of his profession in his knowledge of new advances, discoveries and developments in his field. He should have such awareness as an ordinarily competent practitioner would have of the deficiencies in his knowledge and the limitations on his skill. He should be alert to the hazards and risks in any professional task he undertakes to the extent that other ordinarily competent members of the profession would be alert. He must bring to any professional task he undertakes no less expertise, skill and care than other ordinarily competent members of his profession would bring but need bring no more. The standard is that of the reasonable average. The law does not require of the professional man that he be a paragon combining the qualities of a polymath and a prophet.'
What then is the standard required by the relevant professional organisation? Some commentators such as Kling (1996) have suggested that the contents of a professional ethical code should not be limited by a standard of what is merely lawful but should set higher standards. This view is also supported by Harris (1995) who, in attempting to distinguish bad work which will give rise to disciplinary action by the professional body and that which might be adjudged negligent by a court, comments:
'Whatever the true nature of the test of bad work (and the test will vary according to the rules of the relevant body), it is always different from and is usually higher than the test for tort negligence.'
Whilst striving for higher standards is beyond reproach can this be regarded as a practical objective? Is there not a symbiotic relationship between professional codes and the standards laid down by the court pertaining to professionals and summarised in the extract from the judgment of Bingham L.J. above? When a code of conduct requires a professional to be 'competent' how else can this be judged except by reference to accepted norms within the profession? Conversely, the starting point for a court when required to adjudicate on whether or not a practitioner has fallen short of the standard required by the law will almost certainly be a reference to those same norms. Indeed, courts have already been called upon to adjudicate on the accepted practices of professionals and other specialists.
These reservations aside, one positive role which can be fulfilled by professional codes is the proactive one of setting and enforcing standards, i.e. introducing self-regulation into the profession so that a legal determination is truly one of last resort. Neither should the effectiveness of a code be measured by the number of unethical cases which are detected or reported, but rather by the way in which it facilitates the development of clear principles of good practice (Wagner H A; 1955). One advantage is that the code can represent a considered view and does not need to react on a case by case basis. It may need to be modified to reflect trends and changes in both technology and society but, if correctly drafted and implemented, it should be capable of both defining and enforcing standards ab initio rather than merely providing a remedy after the event. However it is unclear to what extent the provisions of a code might determine or influence the attributes of the individual members of the 'profession' itself or their levels of competence, and how this might feed into whether or not professionals are likely to be held not just in breach of the standards set by the professional organisation but also negligent in law.
The finding of increased negligence claims against professionals by the Likierman report (1989) was not believed to be due to any increase in negligence as such. Indeed, the investigating team from the DTI found evidence of generally improved standards. Rather they identified two causes which were independent of the standard of professional service in the absolute sense: the development of the law of negligence and the increased willingness of those involved to litigate. An analysis of the relationship between the standards encapsulated in professional codes and the legal standards defined in the case law on professional negligence will clearly provide a useful starting point in any assessment of whether the responsibility for any failure to take into account the Y2K problem rests with any portion of the software industry itself.
The software engineer engaged to design software for a specialist application will be in a contractual relationship with the client but may also owe duties in tort both to the client and to others whose safety and health might be compromised by the malfunction of the system. There is thus the propensity for tension between the contractual obligations, which are both defined by the contract and entered into on a voluntary basis, and the tortious duties, which may appear rather more nebulous and wide-ranging. It is worth emphasising, however, if such emphasis were needed, that the law of tort is not supplementary to the law of contract but is part of the general law out of which the parties can contract if they so wish. In other words tort does not exist to fill in the gaps left by contract and liability may be found independently of contractual terms, provided all the necessary prerequisites are satisfied. Clearly the standard imposed by the contract will, for the most part, depend on the terms of the contract, which may include express warranties such as, in this context, a requirement that the system be Y2K compliant (Bradgate R; 1999). The standard in tort, on the other hand, is one of reasonable care and, in the absence of express terms, the law will imply a standard of reasonable care and skill into all contracts for services (Supply of Goods and Services Act 1982 s13). It is usual for specialised software to be created not by one practitioner but to be developed by a team, members of which will have varying degrees of skill and responsibility commensurate with that skill. Given that the law is generally better suited to dealing with individual rather than collective matters, it is perhaps unsurprising that, as noted by Dugdale and Stanton (1997: 312), 'the law does not recognise a team standard.'
The individualist approach to responsibility may be easy to apply where each member of the team has a well-defined and almost independent role, but becomes rather less straightforward where tasks and responsibilities are overlapping, shared and interdependent. Where teams involve specialist consultants, there may also be a complex network of contracts, a fact which the court in Pacific Associates v Baxter (1989) considered militated against imposing duties of care on peripheral parties. On the other hand, it would be unfortunate if this were to encourage complex contractual arrangements as a vehicle for deflecting liability (see also Smith G; 1998, p. 105).
Thus a number of factors will impinge on the question of to whom a duty is owed and what is the nature of that duty. However, the standard of reasonable care required by the courts in relevant cases may assist in the assessment of the competence of computer practitioners as required by the emergent professional codes of practice, whether or not a duty actually arises in a particular case. In relation to the actual legal duty, the main area of concern at the present time is with those who design software for safety-critical applications, failure of which is likely to result in physical injury or environmental damage. Such failures are no respecters of persons and the damage is as likely, if not more likely, to be to a third party than to those who have procured the software. Thus the law of negligence is likely to be of more relevance than contract and, as discussed below, it may be more probable that a duty of care will arise where such tangible damage is a foreseeable eventuality but, in any case, the standards required by courts in negligence trials may provide useful pointers to the standard of competence expected of professionals. The focus of this discussion will therefore be just as much on the duty and standard of care in the abstract as an analysis of the situations which may, in reality, give rise to an actionable case of negligence.
Earlier work (Rowland D and Rowland J J; 1993) has considered the use of standards developed by professional negligence as a measure of the competence of practitioners and the relationship of these findings to codes of professional conduct (Rowland J J and Rowland D; 1995). The difficulty in extracting general rules from the cases on professional negligence is that there has been a tendency for the subject to develop on a case by case basis with due regard being had for the nuances of particular professions. It is thus difficult, notwithstanding the tenor of the judicial comments already discussed, to state with any confidence whether or not courts would treat a newly litigated profession in the same way. Without doubt there are ways of subdividing the types of profession, e.g. those who cannot guarantee their results, such as medicine and law, and those who impliedly warrant to produce a particular result, such as civil engineers (Rowland D and Rowland J J; 1993). Alternatively, the 'scientific' professions such as medicine and engineering, which could be said to be concerned with matters of fact, can be distinguished from the 'normative' professions such as the clergy and the law, which could be said to be concerned with matters of value (Macdonald K A; 1995). This latter distinction is clouded by the fact that the so-called scientific professions are increasingly called upon to make value judgements. The fact that the different professions can be distinguished in these ways perhaps supports the development of separate rules for different professions but such an analysis necessitates great care when extending the ambit of the law to include previously unconsidered professions.
Whether or not a duty will be owed by a computer professional to a non-client will depend on the particular circumstances of the case. The failure of a computer-controlled system may have the capacity to cause damage to a vast number of people but it is clear that the law will not allow the duty of care to be virtually unlimited but will place restrictions on its scope. Relevant factors will be the nature both of those undertaking the work and of the work undertaken, together with the way in which the work is likely to affect those not in a contractual relationship with the particular professional. The type of damage which may ensue may be direct physical harm, as when software failure leads to the malfunction or failure of safety-critical systems such as automatic railway signalling or intensive care equipment. On the other hand, the loss inflicted may be the result of the provision of professional advice to clients which is then relied on by others to their detriment in either physical or economic terms. As summed up by Lord Bridge in Caparo v Dickman (1990; 627), '(i)t is never sufficient to ask simply whether A owes B a duty of care. It is always necessary to determine the scope of the duty by reference to the kind of damage from which A must take care to save B ...' and, quoting Brennan J in Sutherland Shire Council v Heyman (1985; 48), 'The question is always whether the defendant was under a duty to avoid or prevent that damage, but the actual nature of the damage suffered is relevant to the existence and extent of any duty to avoid or prevent it.'
Since the case of Hedley Byrne & Co Ltd v Heller and Partners (1964), a duty of care may be owed by a professional to those who have relied on advice given to others. In these cases the scope of the duty is circumscribed by the requirement of reasonable reliance. A negligent misstatement may be repeated endlessly and the consequences of reliance on it may affect many, but not all of these will be owed a duty. On the other hand, in many cases of physical damage the link between the negligent action and the damage may be more direct and, in general, there is less reason to restrict the scope of the duty. In such cases, the duty of care can be defined by the well-known dictum of Lord Atkin in Donoghue v Stevenson (1932), which can be applied as much to design faults, inadequate testing and lack of appropriate research and development as to defects in manufacture. The House of Lords in IBA v EMI and BICC (1980) found negligence in the design itself, which Lord Fraser regarded as 'a distinct and sufficient reason for imposing liability', and there is no doubt that a duty of care may apply to those professional persons whose duty it is to design, test or manufacture (Nitrigin Eireann Teoranta v Inco Alloys Ltd (1992)). However, the manner in which the damage results cannot provide the test. If intensive care equipment fails or malfunctions as a result of the Y2K bug, it is more obvious who will suffer damage than might be the case when a similar fault arises with automatic signalling equipment.
If, as a consequence of the malfunction of such a signalling system, a train crash occurs, consider the points of distinction between the following situations (neglecting any possible contractual claims): A, who reads in the paper or hears on the news that the designers of the signalling system have declared it to be Y2K compliant and decides to make a journey by train instead of by car; B, who is given a leaflet with this information when he buys his ticket; C, who reads the same information on a poster at the station; and D, a passenger who is unaware of any of this information. Would it make any difference if any of the passengers did not understand the significance of the term 'Y2K compliant' and were intending to travel by train regardless? What of bystanders who might also be injured? Notwithstanding the fact that there is actual reliance on a statement in some of these cases, the general effect of the Y2K bug in such situations is likely to have rather wider repercussions than the 'one to one' cause and effect in the intensive care example.
The nature and foreseeability of harm can never provide a single exclusive test for the existence of a duty of care and the waters have been muddied further by certain developments which have taken place in the tort of negligence in recent years - in the words of May J in Nitrigin, 'it is well known that the law of negligence underwent a considerable upheaval from about 1985.' May J was referring to the onset of 'incrementalism', which served to restrict the existence of a duty of care, confining it to pockets of liability which were set to grow and evolve separately. This can be explained as a backlash against the potentially all-embracing duty originating in Donoghue v Stevenson, possibly as a reaction to an increasingly litigious society as noted by the Likierman report. Clearly such an approach challenges the previously held view supporting a more general theory of negligence. Stanton (1994) suggests that incrementalism is the 'antithesis of a test'. On the other hand, he also points out that embracing incrementalism does not automatically exclude the possibility of similar rules being developed for analogous work by different professionals and concludes that there is little evidence of incrementalism being adopted in this area of liability - 'the general theories of negligence liability are thriving in the area of professional liabilities rather than going away.' However, he has apparently resiled from this view to a degree (Stanton K M; 1996), stating that it would be 'short-sighted to discount the possibility of the law on one profession being informed by the experience of another' but going on to warn that 'grand theories tend to be too blunt to provide an answer to every particular problem that arises'. Could a unified theory of professional duty not be sharpened to allow sufficiently incisive analysis?
Powell (1996) argues that professional negligence is an 'evolving term which might be better replaced with the more neutral and accurate title professional liability', but the retention of the word 'professional' means that difficulties and ambiguities of definition still cannot be avoided.
In some senses the use of the word 'professional' is merely shorthand for the legitimate expectations of the competence of a specialist, and whether or not the person is a 'professional' as defined is merely one factor in establishing the requisite duty of care, which does not depend on the precise nomenclature adopted. Thus in Nitrigin Eireann Teoranta v Inco Alloys Ltd (1992), May J distinguished the earlier case of Pirelli v Oscar Faber (1983), first because it was a case heard before D & F Estates v Church Commissioners (1989) and Murphy v Brentwood DC (1991) (see also later analysis) but, in addition, because the defendant in that case was a firm of professional consulting engineers engaged to advise and design. In Nitrigin, on the other hand, May J pointed out that the defendants were 'specialist manufacturers who knew or ought to have known the purpose for which their specialist pipes were needed. In my judgment, that is neither a professional relationship in the sense in which the law treats professional negligence nor a Hedley Byrne relationship.'
Does this distinction stand up to scrutiny implying as it does that different standards might be applied to each as a result of a difference in definition when in reality it would be more accurate to use all the relevant factors in determining whether or not there was a duty of care?
Whether or not the law treats such designers as professionals, it is clear from the comments of May J that, notwithstanding issues of definition, a material factor was the knowledge of the specialist manufacturer of the use of the components and systems they had designed and manufactured. That this encompasses not only technical facts about the nature of products but also extends to a knowledge of those who should have been in the contemplation of the designer is well illustrated by a comparison of the cases of Clayton v Woodman & Son (Builders) Ltd (1962) and Clay v A J Crump & Sons Ltd (1964). Both of these cases concerned architects, a profession which in some respects can provide an apposite analogy for certain computer professionals. In the former, construction work was under the direction of an architect but builders were responsible for the detailed arrangements. On a site visit an experienced bricklayer suggested modifications to the design because of difficulties incorporating an old gable. The architect rejected this advice and wished to adhere to the original specification. Soon afterwards the gable collapsed, injuring the plaintiff. The architect in this case had no liability as he could reasonably have expected that, whatever work was carried out, the building contractor would arrange for it to be accomplished in a safe manner. The architect had given no direct instruction. In contrast, in Clay v Crump, an architect was found to be liable for the plaintiff worker's injuries caused by the collapse of a wall which the architect had told the demolition contractor was safe to leave standing. This case relied upon the familiar principles of Donoghue v Stevenson: the architect should have had the plaintiff in his contemplation as he knew that the purpose of the project was to redevelop the site in question. The presence of workmen should, therefore, have been foreseen.
A comparison of these two cases shows that the basis for liability depended on a combination of all the circumstances giving rise to the duty of care. The fact of the professional status of the defendants involved provided only one factor in the equation.
In Hedley Byrne v Heller (1964), there was frequent reference to the notion of 'voluntary assumption of responsibility'. Although Lord Oliver in Caparo v Dickman (1990) suggested that this might be a convenient phrase rather than a test for the existence of a duty of care, it nonetheless seems a convenient yardstick for the measurement of professional behaviour. Indeed, it could be said that assumption of responsibility is a requirement of a professional. In this context, an interesting status, that of the 'quasi-professional', was introduced in Henderson v Merrett Syndicates Ltd (1994) (the 'Lloyd's names' case), where it was held that a tortious duty of care could indeed arise in cases of detrimental reliance if responsibility was assumed by a person providing professional or quasi-professional services. This concept was not defined but could presumably include anyone with the necessary specialist knowledge. Lord Browne-Wilkinson commented further (p. 543) that 'Although the historical development of the rules of law and equity have, in the past, caused different labels to be stuck on different manifestations of the duty, in truth the duty of care imposed ... is the same duty: it arises from the circumstances in which the defendants were acting, not from their status or description.'
Disputes about the status of 'computer professionals' may merely serve to obfuscate the issue and, depending on the precise circumstances of the case, a duty of care may arise. A far more pertinent question, which may cause considerable difficulties, is encountered when assessing what constitutes 'reasonable care'. It is well established that an error of judgment is not equivalent to negligence in the legal sense (Whitehouse v Jordan (1980)) and that allegations against 'professionals' should be considered carefully because of the importance of the various interests at stake. Further, there will be room for professional disagreement between various bodies of expert opinion without any issue of professional malpractice or negligence (the so-called 'Maynard' doctrine) and 'the court may reject a body of professional opinion in the very rare case where it can be shown that the opinion in question is not logically supportable at all' (Bolitho v City and Hackney Health Authority (1997; p. 1160)). The situation may be quite different where there is agreement on standard practice. There have been cases where the courts, although generally favourably disposed to general practice, have judged such practice itself to be negligent.
The premise that 'a design which departs substantially from relevant engineering codes is prima facie a faulty design unless it can be demonstrated that it conforms to accepted engineering practice by rational analysis' (Bevan Investments v Blackhall and Struthers (1973)) demonstrates that the formulation of clear standards and codes of practice is crucial to an assessment of the standard of care. Where there are appropriate standards and codes but these are not adhered to, designers will need to demonstrate that their design is at least as good as that which would have been created if the standard had been followed. A difficulty with software is the lack of accepted standards of general application and the difficulty in formulating such standards. The standards which do exist tend to be sector specific. Attempts by the International Electrotechnical Commission (IEC) to create a general standard for the manufacturing and process industries have resulted in a number of draft versions over the last 10 years or so. A new draft, IEC 61508 (see e.g. Hughes G (1999)), was published in 1998 and has been much debated by the professional bodies, but it will be a while before it is known whether it will be accepted and approved. Such analysis suggests that one appropriate role for professional bodies is not only the setting of standards of behaviour but also the establishment of standard practices and the instigation of appropriate review mechanisms. Such ongoing review might have led to earlier and more effective action by the industry in relation to Y2K. Unfortunately there is a clear gap between the proactive capabilities of professional codes and the reactive ability of law to provide a remedy after the event, a remedy which depends on many particular issues, a number of which may be specific to the case under consideration.
A similar situation pertains in relation to common practice in the trade. However, the courts have not fought shy of declaring common practice to be negligent. In general this is the result of the application of objective tests of the 'reasonable man' variety (see e.g. General Cleaning v Christmas (1953)). What should be regarded as reasonable in technical specialisms where knowledge is advancing all the time? Is there a duty to keep up to date? How far does this extend? What effect should new information have on an existing general practice? This has particular relevance in connection with the millennium bug (not only for designers but also for those who use systems containing software). Some guidance can be found in Stokes v GKN (Bolts and Nuts) Ltd (1968):
'... where there is a recognised and general practice which has been followed for a substantial period in similar circumstances without mishap, he is entitled to follow it, unless in the light of common sense or newer knowledge it is clearly bad; but, where there is developing knowledge, he must keep reasonably abreast of it and not be too slow to apply it, and where he has in fact greater than average knowledge of the risks, he may be thereby obliged to take more than the average or standard precautions.'
An example of the application of these principles can be seen in the cases relating to liability for noise-induced hearing loss, where the above dictum of Swanwick J was quoted with approval in Thompson v Smiths Shiprepairers (North Shields) Ltd (1984). In applying the dictum, the court found that negligence might not be found where a practice was well established, even where that practice had not been followed without mishap. The crucial factor was not when the knowledge (in this case, that excessive exposure to noise results in hearing loss) was discovered in the absolute sense but when it should be deemed to be known in the industry. The clear implication is that if the industry is at the forefront of research and development then it will be expected to take technological advances into account very quickly. Clearly the salient issue is not just one of the actual knowledge of the producer or designer but also of his or her constructive knowledge.
In the light of this test, consider again the software written in 1975, 1985 and 1995. It should be clear that those writing software in 1995 ought to have been aware of the need for millennium compliance, but would this have been so apparent in 1985 or 1975? It could be argued that, given the decade-rollover incident referred to earlier, the problem should at least have crossed the mind of a designer, especially one at the forefront of research and development. Certainly the plaintiffs in a number of the suits already filed in the US are arguing that the computer industry knew or should have known of the problems associated with the change of century since at least the mid-1970s. On the other hand, an alternative explanation could be an expectation of limited shelf-life (see also Bradgate R (1999)).
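The technical root of the problem the plaintiffs point to can be illustrated with a minimal sketch (shown here in modern Python purely for clarity; the affected systems were typically written in COBOL or C, and the function name is hypothetical). Storing only the last two digits of the year makes elapsed-time arithmetic fail as soon as the interval straddles the century boundary:

```python
def years_elapsed(start_yy, end_yy):
    """Compute elapsed years from two-digit year fields,
    as many pre-2000 systems implicitly did."""
    return end_yy - start_yy

# A record dated 1985 ('85') compared against 1999 ('99'):
assert years_elapsed(85, 99) == 14   # correct

# The same calculation straddling the century boundary,
# 1998 ('98') to 2000 ('00'), goes badly wrong:
assert years_elapsed(98, 0) == -98   # should be 2
```

The defect is latent in precisely the sense discussed below: the code behaves correctly for every date pair it encounters until the first post-1999 date arrives.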
Where risks are foreseeable, it will be necessary to balance the risk against the measures necessary to eliminate it (Latimer v AEC (1952)). Whilst it may not be negligence to ignore a small risk of slight damage, there have been cases where a very high standard has been required in relation to high risk and/or new or pioneering designs (see especially IBA v EMI and BICC (1980)). There are already examples of the millennium bug 'biting' (see e.g. <http://www.year2000.com/y2kbugbytes.html>) and over 60 lawsuits have already been filed in the US (see <http://www.2000law.com/html/lawsuits.html>). Most of these are based on contractual claims, some of which have already been dismissed because no damage has yet been suffered. An accurate prediction of the likely extent of the damage remains difficult.
At first sight it could appear that, in relation to the time-bomb effect of the millennium bug, the 'latent defect' might provide an apposite analogy. It appears to be well accepted that a quality defect in an item supplied under a contract is deemed to produce pure economic loss, which is irrecoverable in tort unless it later causes physical damage. In relation to this, consider the comments of Lord Keith of Kinkel in Murphy v Brentwood DC (1991; p. 465):
'It is difficult to draw a distinction in principle between an article which is useless or valueless and one which suffers from a defect which would render it dangerous in use but which is discovered by the purchaser in time to avert any possibility of injury. The purchaser may incur expense in putting right the defect, or, more probably, discard the article. In either case the loss is purely economic.'
Lord Keith goes on to discuss buildings where there is no actual damage but an imminent danger to the health and safety of persons ... the resultant fall in value is equivalent to an economic loss. In apparently seeking to minimise the occasions on which liability might be found for latent defects, Lord Bridge in the same case said: '...in equating the damage sustained in repairing the chattel to make it safe with the damage which would have been suffered if the latent defect had never been discovered and the chattel had injured somebody in use, the judgment ignores the circumstance that once a chattel is known to be dangerous it is simply unusable.'
This is something of an oversimplification. The concept of latency of itself must indicate that the defect is not yet operative but may become so in the future. Whereas the wise and prudent owner or operator might decide not to continue with use, the damaging event, itself, cannot be assumed to be inevitable nor can it automatically be a consequence of reckless behaviour. It may well be that a relatively informed decision is taken to continue use for a certain time or under certain conditions without necessarily introducing an excessive element of risk. Even if disaster ensues and some of the blame can be laid at the feet of the operator, should the designer be automatically absolved of all responsibility if it was a design fault which led to the defect?
This view of latent defect has not been without other criticism. Sir Robin Cooke (1994) (President of the New Zealand Court of Appeal), whilst acknowledging some of the policy issues in Murphy v Brentwood, commented that 'the doctrine that a dangerous defect once known becomes merely a defect in quality seems to propound a dogma rather than a lesson of experience ... no inexorable logic compels the conclusion that a defect in quality is not redressible in negligence.' An interesting case in this context is Nitrigin Eireann Teoranta v Inco Alloys Ltd (1992), referred to above. A relevant aspect of the case for the present purposes concerned the lapse of time between discovery of the defect and damage to other property. Certain cracks were discovered in a pipe, allegedly due to negligence in manufacture. The cracks later resulted in severe physical damage and the plaintiffs wished to claim for all the damage and loss of profits whilst the plant was out of action. Although at the time of discovery of the cracks there was only pure economic loss, which could not be recovered in negligence, it was found that despite reasonable investigation (emphasis added) the plaintiff was not aware of the extent of the defect and therefore could not make sufficient repairs to avoid the physical damage which occurred later, damage which could give rise to an action in negligence. May J referred to D & F Estates v Church Commissioners and Murphy v Brentwood DC and in relation to the duty of care he said 'The critical question is whether the scope of the duty of care in the circumstances of the case is such as to embrace damage of the kind which the plaintiff claims to have suffered.' (emphasis added)
This can very appropriately be applied to situations like Y2K or indeed any other failure or malfunction of a computer system which occurs as a result of faulty software design leading to safety-critical consequences. A number of the cases already filed in the US relate to the availability of 'software patches' to make existing systems Y2K compliant. If these were cases in negligence, would this be equated with irrecoverable economic loss? Would it be necessary to wait until the damage had actually occurred? Determining when a cause of action accrues can be crucially important because of the existence of limitation periods which may make actions time barred. This was one of the issues in Nitrigin and would certainly be a factor if a case were to be made out against a designer of software which had been produced in, say, 1985. As discussed above, there are many factors surrounding whether a duty of care arises and the mere fact that harm is reasonably foreseeable, although a necessary condition, is not sufficient to give rise to the duty. In Nitrigin, liability was found for the faulty design and the consequences of the latent defect because of the knowledge of the designers and also the difficulty for the plaintiffs, despite reasonable investigation, in ascertaining the potential effects of the latent defect. Can this be likened to the Y2K problem? It may well be that users are aware that there is a problem but are aware neither of its extent nor of all the foreseeable consequences, and lack the wherewithal to discover them. Interestingly, May J also held that even if the investigation (and it is a moot point what would be deemed a reasonable investigation) were deemed not to be sufficiently thorough, it would not have affected liability as such, but there might then have been an issue of contributory negligence.
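The 'software patches' at issue in the US cases commonly employed so-called 'windowing': a stored two-digit year is interpreted against a pivot value so that existing data and field widths need not change. A minimal sketch (in Python purely for clarity; the pivot value of 70 is an illustrative assumption, and real remediation was applied to COBOL or C systems):

```python
PIVOT = 70  # illustrative cut-off: 70-99 read as 19xx, 00-69 as 20xx

def expand_year(yy):
    """Windowing fix: interpret a stored two-digit year against the pivot,
    so elapsed-time arithmetic can be done on full four-digit years."""
    return 1900 + yy if yy >= PIVOT else 2000 + yy

# A record stamped '85' is read as 1985; one stamped '05' as 2005.
assert expand_year(85) == 1985
assert expand_year(5) == 2005
```

Such a patch postpones rather than eliminates the defect, a point relevant to the latency discussion above: the system fails again once dates outside the chosen window appear.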
The foregoing analysis serves to throw into stark relief the arguments about professional status and professionalism which are currently rife among both software engineers and similar computer 'professionals' and those who represent their interests and the interests of society at large. The literature on professional issues and ethics in computing has also increased dramatically in the last few years and professional codes of practice have proliferated. Such documents make reference to the need for competence (and of course the Bolam test for professional negligence is based on the supposed attributes of the reasonably competent practitioner) but the twin concepts of competence and professionalisation elude definition. Indeed, the Association for Computing Machinery (ACM) code, for instance (see Anderson et al (1993) and <http://info.acm.org/serving/>), has a rather circular definition, requiring a professional to acquire and maintain professional competence whilst also directing that same professional to participate in setting standards for appropriate levels of competence. Nevertheless, at the heart of this debate is the genuine desire to focus on the attributes of a professional and their relationship to the quality of service offered to clients, together with the legitimate expectations of the public and society at large not to be subjected to inordinate risk as a consequence of the activities of a particular specialist group. The focus for this article has been, in the main, on those lapses of judgment which have the propensity to cause physical harm but that is not to say that the general issues relating to competence, skill and care cannot also be applied to cases of economic loss, irrespective of whether, in the particular case, that economic loss could be held to be recoverable in tort.
Because, in some sense, software can be regarded as pure information, some commentators have expressed conceptual difficulties with the idea that software can cause damage (see discussion in Rowland D and Macdonald E (1997; p. 200 et seq)), damage being another necessary ingredient for a finding of negligence. In the mid-1980s, defects in the software controlling the Therac-25 radiotherapy machine resulted in substantial overdoses of radiation being administered to several patients, with resulting injury, illness and, in some cases, death (see e.g. Leveson N G; 1995). There seems to be little difficulty here in making the link between the defective software and the damage. Suppose, instead, that the software error had resulted in an underdose of radiation with the result that patients died, not from the effects of the radiation, but from the cancer for which they were supposedly being treated. Although the competence of the designers might be called into question, an action for negligence would fail for want of causation. The standards of a professional code are not dependent on issues of causation and a judicious blend of the standards encapsulated in the pertinent cases, together with an ability to cover all relevant professional activities, may provide the appropriate guidance for those working in the burgeoning software industry. There may be many lessons which could be learned from the millennium bug, and whether or not this results in higher standards in the 21st century will probably not be directly related to whether or not negligence liability might be found. Law provides guidance but not all the answers.
Bradgate R (1999) 'Beyond the Millennium - The Legal Issues: Sale of Goods Issues and the Millennium Bug' Journal of Information, Law and Technology (JILT). < http://elj.warwick.ac.uk/jilt/99-2/bradgate.html>.
Gerner M (1996) 'Why Has The Year 2000 Problem Happened' <http://www.compinfo.co.uk/y2k/mgerner.htm>.
Howells G (1999) 'The Millennium Bug and Product Liability' Journal of Information, Law and Technology (JILT). <http://elj.warwick.ac.uk/jilt/99-2/howells.html>.
Hughes G (1999) 'Reasonable Design' Journal of Information, Law and Technology (JILT). <http://elj.warwick.ac.uk/jilt/99-2/hughes.html>.
Peysner J (1999) 'Y2K - Will There be a Litigation Explosion?' Journal of Information, Law and Technology (JILT). <http://elj.warwick.ac.uk/jilt/99-2/peysner.html>.
Rowland D and Rowland J J (1993) 'Competence and Legal Liability in the Development of Software for Safety-related Applications' Law, Computers and Artificial Intelligence vol.2 pp. 229-243 also published in Computers and Law Carr IM and Williams K S (eds.) (Oxford: Intellect Books).
2. It could be questionable whether this always attracts positive connotations. Borenstein N (1991), quoted in Neumann P (1995), suggests 'The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in: we're computer professionals, we cause accidents'.
5. The symposium papers are published as Volume 297 of the Annals of the American Academy of Political and Social Science.
7. The case concerned an optician: was he carrying on a profession or was he merely a seller of spectacles?
8. Other organisations include the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM). These are both based in the US but have world-wide influence.
10. Used instead of the rather briefer and more familiar Bolam test because of the manner in which the requirements and detail have been fleshed out.
14. E.g. RTCA/Eurocae for the aircraft industry and a range of defence standards such as DEF-STAN 00-55 and 00-56.
15. In relation to non-research based industries however, (increasingly relevant e.g. for domestic appliances containing software) - there may be a lapse in time in which the relevant information has not been brought to the attention of the producer - that time will then be crucial for the availability of the defence. See also Howells G (1999).
16. See e.g. the summaries of Courtney v Medical Manager Corp. and Faegenburg v Intuit Inc <http://www.2000law.com/html/summaries.htm>. In Miller v State of Alabama the plaintiff alleged that state government had also known of the problem since the 1980s.