Onward! 2012 essay
Realising Software Development as a Lived Experience
Meurig Beynon
The Empirical Modelling Group, Computer Science, University of Warwick
wmb@dcs.warwick.ac.uk

Abstract
This essay discusses software development from the perspective of Empirical Modelling (EM) [4], an approach to computing that draws on the construals of David Gooding [51], Bruno Latour’s vexing notion of construction [69] and William James’s radical empiricism [62]. It argues that effective software development must embrace semantic principles radically different from those endorsed by the traditional perspective on software that is based on computational thinking. Of paramount importance is the immediacy of the developer’s experience of the relationship between software as an artefact on the computer and software as an agency in the world.
This is a version of an essay to be published by the ACM in Onward! '12 : Proceedings of the ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
Onward! '12, October 21 - 25 2012, Tucson, AZ, USA
Copyright 2012 ACM 978-1-4503-1562-3/12/10...$15.00.
Anyone who has developed software has some personal knowledge of what kind of experience it involves. Perhaps you can recall the first time you ever wrote a program. If – like me – you are old enough to remember batch processing of Fortran on punched cards, you will know just how frustrating an experience this can be. Even if you are not, it is important for my purpose in this essay that you should imagine it in all its mundane detail. I shall argue that only by focusing our primary attention on what is involved moment-by-moment in creating software is it possible to do justice to the amazing and unprecedented technical and semantic challenges that software development presents. Pause at this point to register that throughout this essay my primary focus in discussing software development is on the present, of-this-moment, being lived, experience of the developer1 – what they currently have in their mind in so far as it relates to the task in hand. I must apologise for the fact that this necessarily gives such prominence to the subject of this sentence, but I have no access to moment-by-moment experience other than my own. To my mind, of all the advances in software development that have taken place in the forty years or so since I wrote my first Fortran program, the change in quality of the experience of development is the most significant. By this, I do not simply mean that development has become ‘more user-friendly’, but that there is a growing awareness of the importance of helping the developer to experience the relationship between software as an artefact on the computer and software as an agent in the world2. Understanding the nature of this experience more fully and engineering development environments and approaches accordingly is key in realising the goal of ‘software for humanity’.
There were several phases involved in writing my first Fortran program. There was the phase of trying to figure out what steps I would need to take to instruct the computer to read two input numbers, add them together and output the result. I remember the experience of constructing a flowchart, and grappling for the first time with the idea that a box of a particular shape signified such a sophisticated abstract concept as an assignment or a branching statement. I recall how insecure I felt about the correspondence I was trying to make between points in the flowchart and the state of the executing computer – a novel concept of which I had only the haziest notion. As a mathematician, I found it difficult to look beyond the patent contradiction in the statement I = I + 1. Next was the translation of my sketchy code into a stack of punched cards, each containing a line of code. There was something about the absolutely methodical, pedantic and artificial way in which every detail of the layout mattered and the formats for reading and writing from cards had to be precisely specified that exasperated me. Then came the machine execution phase – days as I now imagine it – whilst I awaited the verdict on my efforts in the form of a paper printout. I soon learnt that Fortran programming was for me primarily a lesson in management science – managing my expectations, and anger management.
At first sight, it may seem that this experience of writing a program naturally separates into aspects that – in the spirit of Fred Brooks’s famous essay No Silver Bullet [39] – are essential and accidental. My prejudice towards elegant abstract mathematics, and my impatience with the grubby protocols for interaction with physical devices – so ephemeral and context-specific – strongly endorsed this view. The only essential and interesting aspect of the palaver of programming was that I had supplied an abstract recipe that enabled me to input 2 and 3 and get the number 5 as output. All the rest – the rigmarole of drawing up a flowchart, preparing cards, interpreting the prolific job control data that decorated my output – was accidental, and lay beneath my intellectual radar.
When, shortly afterwards, I took up a lectureship in computer science, I was able to consolidate this perspective on programming by appreciating Alan Turing’s abstract characterisation of programs [82]. Programming a Turing machine was much more congenial to me. In the moment-by-moment experience of developing a Turing machine program, I had now only to contemplate potential states of an abstract device and the consequences of transition rules. Nor did I have to consider petty concerns about how to present input and output. The input string that might or might not be a palindrome was explicit – abstract, unadorned – in the initial state. In this development process, I appreciated the way in which I had to conceive the states in the computation in meaningful ways (“now the last symbol of the input string has been detected”, “now I have checked to see that the last symbol matches the first”) and frame the rules accordingly. Not that I found the activity particularly easy: I still had, in the present moment, to juggle two behaviours in my mind – what my Turing machine actually did and what I intended my matching process to do – and try to check whether they necessarily conformed to each other whatever input was given – and if not, why not. But at least this activity was – as a pure form of computational thinking – of the essence.
I had had the great good fortune to have joined a computer science department with a strong theoretical orientation in its earliest stages of development. My mission as a research mathematician turned theoretical computer scientist was to contribute to bringing mathematical thinking to bear on the problems of computer science, and thereby help to rescue the discipline from the mire of all that was contingent and accidental in the practice of computing. Even the Turing machine itself, with its concrete mechanistic characteristics, was to my taste rather too specific and explicit as a model of abstract computation, notwithstanding the fact that it stood as a representative for an abstract equivalence class of actual computing devices.
In programming the Turing machine, there was that sweet recapitulation of moment-by-moment experience as I successfully traced the execution of my algorithm whilst it realised its intended behaviour. But there was also that unpleasant experience familiar to everyone involved in debugging – contemplating the broken relation between a rogue actual behaviour and the ideal intended one, when the intuitions that had guided the process of construction were suddenly irrelevant. In this predicament, what should the specific focus of my attention be now? What input should I consider? What states of the actual and ideal processing of the input should I have in mind? Was my conception of the ideal process flawed? And if I were to modify a rule, how would this impact on all the possible inputs and processing states in the actual execution?
It is an instructive exercise to consider the way in which different paradigms and approaches to programming have set out to protect the developer from experiences of this nature. In appreciating this, a metaphor may be helpful. In experiential terms, developing a program – in the ways that I encountered it in my early computer science career – has something of the quality of driving3. When the relationship between the actual and the intended behaviour is in clear focus in the developer’s mind, it is only necessary to make small adjustments that have the experiential quality of observing a road sign, steering or braking. But when this relationship breaks unaccountably to the extent that the developer no longer knows where to focus her attention, it is as if the driver has lost their way, or the car has crashed and left the road. The two principal academic developments of the 1970s that brought a mathematical perspective to bear on programming – ‘structured programming’ [45] and ‘declarative programming’ [57] – can be regarded as the metaphorical counterparts of ‘refining the road environment and regulating driving practice’, and ‘introducing a reliable taxi service’. Both, in their different ways, attempt to prevent the development of a program from straying into the realm of experiential breakdown.
It may seem that the quality of the experience associated with structured programming or declarative programming would have appealed to me as a mathematician. In both cases, the developer contemplates the relationship between a formal text and an algorithmic behaviour for computing a well-defined input-output relation. The power of mathematics in this context lies in its capacity to express abstract relationships with elegance and precision. These qualities are epitomised in Dijkstra’s programming examples [48], and in the examples of functional programming applications of aficionados such as John Hughes [58]. But the idea that – with appropriate training in Hoare logic or lambda calculus – we can readily experience the correlation between formal expressions and the closed computations of well-defined input-output relations they represent is not wholly convincing4. To this day, it seems that we still have much to learn about the relationships between mathematics, experience and programming, as I shall now discuss in more detail.
My introduction to the practice and science of programming (cf. [53]) was also my first exposure to two contrasting cultures of engineering [37], respectively hermeneutic and formalist in character in the sense discussed by West [84, Chap. 2]. Understanding and resolving the potential conflict between these two cultures has been a central motivation for my research ever since. Over this period, I have become more circumspect about what is ephemeral in computing, and ever more convinced that what is not ephemeral is more than a formal mathematical treatment can endorse. It is in that context that, despite the subsequent major developments in the study of programming, my first experience of a long obsolete practice of Fortran programming remains topical.
In retrospect, I was naive to think that instructing a computer to evaluate “2+3” was such a trivial matter. As a child, it took me several years to appreciate what the symbols ‘2’ and ‘3’ conventionally denote, and several more years to become as uncertain about whether I truly understand what they denote as I am now. And of course I know that it makes little sense to add the number of days in July to the area of the bath mat in square metres and subtract the number of bottles of shampoo in the bathroom cupboard, but it is not so easy to say why. How large must the font be, and how acute my sight, for me to distinguish ‘2’ from ‘3’? and is “2+3” the appropriate calculation in relation to what I aim to achieve? And am I computing “2+3” as integers or residues modulo 4? Would we expect “2+3” to yield the same result as “(2/3 + 1) * 3”?
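The alternative interpretive frames raised by these questions can be made concrete. The sketch below (variable names are my own, chosen for illustration) evaluates ‘the same’ calculation under three of the frames mentioned: ordinary integer arithmetic, residues modulo 4, and Fortran-style truncating integer division, under which the ‘equivalent’ formula drifts to a different answer entirely.

```javascript
// "2+3" under three different frames of interpretation:
const asIntegers = 2 + 3;            // 5 in ordinary integer arithmetic
const asResiduesMod4 = (2 + 3) % 4;  // 1 as residues modulo 4

// "(2/3 + 1) * 3" under Fortran-style integer division, where 2/3
// truncates to 0 – so the result is 3, not 5:
const withTruncatingDivision = (Math.trunc(2 / 3) + 1) * 3;

console.log(asIntegers, asResiduesMod4, withTruncatingDivision); // 5 1 3
```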
There is an implicit homely frame within which calculating “2+3” is conventionally interpreted. When we invoke the computer as a general-purpose device for such a calculation, much of this frame must somewhere be explicitly taken into account. A convention must be established for displaying the output, making provision for the possibility that the result exceeds the size of the largest representable integer, or the number of digits that can be conveniently presented. Fortran required us to use identifiers such as I and J to record integer values – even today there are contexts where a JavaScript program might inadvertently return the value “23” where 5 is expected. And there is yet more to the computational frame to be considered. Add to the semantic mix the fact that I might have many purposes in instructing the computer to add 2 and 3. Indeed, in the first instance, I most certainly did not do this because I wanted to know the answer. In another context, I might have been doing this to test the arithmetic unit, or the compiler – perhaps even the operating system (there was an infamous occasion on which it seemed that the output from one program on our time-sharing system corrupted that of another).
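The inadvertent “23” arises because a value read from a form field or URL arrives as a string, and JavaScript’s + then concatenates rather than adds (the variable name here is hypothetical, for illustration):

```javascript
// A value from user input arrives as a string, not a number:
const fromInput = "2";

console.log(fromInput + 3);          // "23" – string concatenation, not addition
console.log(Number(fromInput) + 3);  // 5 – after explicit conversion
```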
Had I but realised, the murky details that I found so distasteful in my first programming exercise were to a very large degree a reflection of the extraordinarily rich possible interpretations of a simple computation to which my mathematical sophistication had blinded me. To appreciate this is to better understand the sense of semantic insecurity I felt at that time, acting as I was, innocently, in the presence of so many possible experiences and interpretations in the neighbouring space of sense.
It is no wonder that the first instinct of the theoretician is to insulate the programmer from the messy world of potential meanings that surround a program as enacted. A longstanding aspiration has been to develop mathematical formalisms and techniques that – in keeping with the formalist cultural values of rationality and objectivity (cf. [84, p.51]) – achieve this goal by substantially reducing the level of human engagement in programming. As West [84, p.58] observes5, the effectiveness of formalism depends crucially on how closely the programming context resembles a machine.
Of course, such a caveat has little force for those who have absolute conviction that Turing’s thesis comprehensively frames the capability of the human mind, and in the process renders all activity machine-like. For me, one of the most perplexing applications of mathematics I encountered in computer science was the development of semantic models for programming languages6. In contrast to the mathematical models that have revolutionised practice in science – such as Newtonian mechanics, or Maxwell’s electromagnetic theory – which provide simplifying principles and enabling techniques informed by the most painstaking and mature observation of complex phenomena, semantic models of programming languages are typically much less amenable to human interpretation than the languages themselves. Such models have been applied with some success in the verification of hardware [86]. But, to my mind, to expect their application to deliver significant insights into processes that involve human cognition is to be in thrall to what Naur has referred to as “the mystique of logic and rules” [75].
More pragmatic approaches to developing theory and principles of programming acknowledge the human dimension as beyond formalisation. Most give high priority to abstraction as a means of maintaining a safe distance from the semantic mire of human experience – indeed, Kramer [66] proposes abstraction as “the key to computing”. Many other champions of formal approaches take a similar view. Lamport [67] reviews the prospects for systems development based on logic or biology, and declares his faith in logic. His key message is “we must keep our systems simple enough so we can understand them”. For Lamport, there is a role for metaphor, but it is one that is subservient to logic: “A good program must use good metaphors, and it must behave logically. The metaphors must be applied consistently – and that means logically.” In his “discipline of programming” (cf. [48]), Dijkstra strongly favoured the use of symbolic arguments, deprecating the ‘intuitive’ diagrammatic representations that featured in established software development methods. For such thinkers, visual images and other immediate components of our experience are to be treated with the greatest caution, and are in no way to be regarded as mature products of understanding7.
The suspicion of metaphor and intuition in this context stems from the fact that they are associated with a tradition that is hermeneutic rather than formalist in the sense discussed by West in [84, Chapter 2]8. In a hermeneutic approach to semantics, meaning is regarded as being negotiated through interaction with the world and with other human agents. This establishes an essential link between knowing and personal experience, and means that what a formalist regards as ‘objective’ knowledge is subject to a process of construction (cf. Latour [69]).
The scope for conflict between formalist and hermeneutic stances is highlighted by West’s provocative characterisation of a hermeneutic perspective [84, p.54]:
“The hermeneutic philosopher sees a world that is unpredictable, biological, and emergent rather than mechanical and deterministic. Mathematics and logic do not capture some human-independent truth about the world. Instead they reflect the particularistic worldview of a specific group of human proponents. Software development is neither a scientific nor an engineering task. It is an act of reality construction that is political and artistic.”
Taken at face value, such a characterisation leaves little scope for reconciliation between hermeneutic and formalist philosophical stances – an issue central to my own research objectives to which I shall return later in this essay. But this controversy is much more than a matter for esoteric philosophical debate; it is a fundamental concern for the practical evolution of software development.
The tension between formalist and hermeneutic outlooks is prominent, for instance, in the unwelcome impact that SQL, as the de facto standard relational database query language, has had on software development. There can be no doubt that the world-wide adoption of SQL has the character of “an act of reality construction”. There can likewise be no question that, as has been argued by Date and Darwen [46] over several decades, SQL embodies fundamental logical flaws that have subverted Codd’s visionary conception of a pure relational algebra database model [42]. Whether the mathematical and logical model proposed by Codd can be described as “a human-independent truth about the world” may be controversial. But, as Date and Darwen have made clear in numerous writings, the implications of the deviations from a pure relational model are profound and disturbing. Relational algebra provides a framework within which to represent and manage complex data that – in its appropriate context of application – has exceptional elegance and power. As is commonly the case when a construction is inconsistent with a well-conceived mathematical pattern, even if only in what may appear to be minor details, it cannot diverge ‘just a little’ from the ideal. The logical flaws in SQL have contributed directly to problematic issues in its design and application that are compounded by legacy issues and cannot be satisfactorily addressed by retrospectively revising the standard.
In championing a comprehensive reappraisal of, or alternative to, SQL as the standard relational algebra query language, Date and Darwen are battling against seemingly impossible odds. It is in some sense easy to recognise the notion that SQL will prevail, warts and all, as representative of what might be hermeneutic ‘truth’: the current status of SQL is surely human-dependent, but what kind of human agency could change it is wholly unclear. Contrast the status of SQL with that of the relational model whose role it has usurped. Beyond question, one could give a demonstration of the qualities of a pure relational algebra model that would show decisively why it is so exceptionally well-suited to the role of data representation and management. I do not know in what sense such a model can be construed as the product of human politics or artistry, and – as an algebraist by training who appreciates the inexplicable aesthetic and expressive qualities of mathematical constructions – I have every sympathy with the formalist who declares it to be an instance of human-independent truth9. My purpose in contrasting the status of SQL with the status of the relational model in this way is to highlight the common experiential basis on which they rest. Whether or not SQL is to be deemed a “constructed reality”, or the relational model to be regarded as “a human-independent truth”, is in practice immaterial: the status of both is something that can be made apparent in our immediate experience, and it is hard to imagine an agency that can change either.
Whatever hopes for a new order the computer scientists of the 1970s may have cherished, the compromising of formalist ideals that SQL illustrates is still typical of the contemporary programming scene. In his fascinating series of lectures on JavaScript [43], Crockford exposes the design flaws in a modern programming language that, like SQL, has worldwide influence. From the perspective of this essay, the most significant feature is the way in which Crockford approaches his subject. As an expert exponent of JavaScript programming, he is knowledgeable about the whole spectrum of political and semantic issues that lie behind the language, yet his primary focus is at all times on “what does the programmer experience when looking at this particular piece of JavaScript code, and in what way does this connect with her previous experience, as expressed through expectations, intentions and current understanding?” He points out (for instance) that the interpretation of a with-statement cannot be inferred from its form, and must be deprecated; that the use of a backslash to indicate that a single line is split over two lines is visually indistinguishable from two lines, the first of which ends in backslash followed by a space; that a construction of the form “if (a=b) ...” has a perfectly valid interpretation as a conditional that incorporates an assignment, but more commonly occurs in error in a context where “==” or “===” was intended, and is to be avoided. He counsels the programmer not to read the code with a narrow emphasis on functionality and efficiency, but to consider whether code is maintainable, adaptable, comprehensible, and to what extent it is secure and robust under extension.
The bad features of the design of JavaScript may reflect unfavourably on what the software development community has learned over fifty years. The development in the culture of programming that these lectures illustrate is by comparison immense. Highly significant contributions to this development have been made by studying programming languages from a formalist perspective, assessing their qualities as idealised mathematical objects. Their influence is prominently seen in “the Good Parts of JavaScript” [44]. In approaching programming language design from a formalist perspective, there is a danger of framing the design of a tool and prescribing how it should be used in accordance with some idealised vision of how to support activities of which you have only limited personal experience. Crockford, in contrast, speaks as one practitioner among many who has mastered an instrument that has come to his hand and is encouraging others to share his mastery – and seek their own. His emphasis is upon the personal responsibility of programmers to establish a good culture for the use of the language, with broad reference to such formally ‘irrelevant’ matters as the layout of symbols, the judicious choice of semantically equivalent segments of code, and establishing good habits of thought10.
A parallel may be drawn with the virtuoso instrumentalist, who does not expend effort in speculating on how the violin could have been designed to be easier to play, or whether a violin is a better instrument than a trumpet, but is primarily concerned with how to get the best from the instrument, and gives full attention to the momentary experience that the instrument affords. If programming is like gaining technical mastery over a musical instrument, software development is like making music. A similar kind of focus is appropriate in both activities. To explore this further, it is helpful to consider how mathematics emerges from our experience, rather than how it may be used prescriptively to tame experience.
In computer science circles, the association between mathematics and well-regulated, formal, rule-based activities resembling ‘computations’ is well-established. There is a natural symbiosis: the power of the computer to implement activities based on mathematical rules highlights the role of mathematics as a way of specifying recipes and at the same time disposes us to giving rule-based accounts of our experience that privilege the computational domain. But the qualities of the mathematical objects that we introduce through making an algebraic specification of an everyday system such as a library are quite different in character from a traditional algebraic entity such as a ‘group’. The most fruitful mathematical abstractions are universal generic patterns distilled from experience of many different realms. Appreciating these patterns enriches and supplements the experiences – it does not purport to circumscribe and specify them. As has been observed by Jaron Lanier in You are not a gadget [68], using mathematics as a tool for formalisation and specification can have the opposite effect, tempting us to suppose that the artifices we contrive to describe our experience can also serve as the basis for their computational reconstruction.
Good mathematicians are not symbol processors experimenting with formal axiomatic systems and generating theorems in isolation from their experience11. Though their experience may be of highly abstract domains, such as group theory, if they are to create interesting and novel mathematics, it must nonetheless have the quality of experience that is not fully understood and potentially defies simple explanation by established rules. In its most profound expression, as in the work of Gauss or Turing, the concepts that are introduced relate to a much broader domain of human experience. The most interesting mathematics involves making new connections between one kind of experience and another, and the more unexpected and illuminating these connections the better12.
Advances in the theory of programming seem likewise peripheral to the vision of the creative software developer. Developing software can be a routine task of customising an existing system with no more than cursory attention to the domain within which it is deployed. It may be that a new software requirement can be met simply by implementing a new abstract function. But the core challenge of developing genuinely new software has to do with establishing the connections between abstract computations and experience of a concrete domain. ‘Radical design’ of complex software systems [60] cannot be addressed by presuming that the requirements for a piece of software come gift-wrapped as a set of functions – or indeed that the requirement can necessarily be expressed functionally at all.
As applications of software have broadened in scale and scope, development principles and techniques that make it possible to understand the relationship between its abstract computational interpretation and its interpretation in the application domain have become ever more relevant. Relational databases and the associated design methods that prescribe table structure on the basis of semantic relationships governing the domain data (such as functional dependencies), object-oriented design techniques, David Harel’s statechart as a “visual formalism” [54] – all these are intended to make it easier for the designer to relate structures in the computer to features of the external world. Each of these approaches has been successful within particular domains of application. But each has its expressive limitations – as shown by the controversies surrounding relational databases [79], the failure to devise universally effective methods of translating object-oriented designs into object-oriented programs without obfuscation [63], and the immense unresolved difficulties of managing the state of large-scale software systems.
In complex software system development, comprehension – in both meanings of the word – is the central concern. When Brooks [39] declares conceptual integrity to be the most important characteristic of good software, it is natural that he should argue that this can be best achieved through comprehension of the entire design in the mind of a single architect. Such a putative architect has both to make sense of the relationships between software and diverse human and non-human agents – from myriad perspectives, and to synthesise them into a coherent whole. Historically, mathematics has been the weapon of choice in managing complexity, but even some champions of formal approaches have come to recognise the essential complementary active role for human intelligence, intervention and interpretation in deploying formal methods to help make complex software comprehensible. Harel [54] advocates the statechart not merely as an abstraction to aid the apprehension of complex state, but as a physical artifact that exploits the exceptional qualities of the human visual system13.
The limitations of formal approaches in respect of radical design of complex software systems have motivated softer approaches to software development over the last twenty-five years or so. They have also stimulated critiques and reappraisals of the underlying principles. In the 1980s, Winograd and Flores highlighted the critical importance of the social perspective on software design [85], Brian Cantwell-Smith exposed the scale of the conceptual and philosophical challenges facing computing [41], and Naur emphasised the indispensable role of intuition [74]14. More recently, the eminent software consultant Michael Jackson has stimulated debate amongst the proponents of formal development methods by posing the question What can we expect of formal specification? [60]. Approaches such as open source development, agile methods and distributed participatory design that explicitly acknowledge the essential contribution that individual and social human agency makes to software development now represent mainstream thinking. Such approaches have redressed the inappropriate emphasis placed – at any rate in academic computer science circles – on software as fundamentally based on methods and logics. They have also demonstrated the important complementary contribution that computing technology can make to supporting broader human-centred development. But – from the perspective of this essay – they have conspicuously omitted to address the most crucial legacy of that illusion: the notion that the principles of software development are first and foremost concerned with computational thinking and the study of process-like behaviours.
Without doubt, Turing’s mathematical conception of an algorithm is one of the most significant intellectual achievements of the 20th century. It has had enormous influence over the formalist approach to software development. The implications can be seen at a high-level in the way in which we think about computing in relation to human activity in its many application contexts – and in concrete ways that are not commonly appreciated.
As explained in [7], Turing’s treatment of states of mind was conceived with a view to modelling “a mind following rules”. It has been fashionable in computer science circles to interpret his research as offering yet broader insights into the nature of mind, and to account for all aspects of mind as computational in character. It is not clear to what extent Turing himself would have endorsed this view [20]. Certainly, not all our experience seems to sit comfortably with this disposition of mind. When, in a foreign country, we use a tourist guide to reach specific landmarks we wish to see, our mind may be construed as following rules. But what we know about the familiar places in which we live is qualitatively quite different from what can be summarised in a tourist guide, and from what any tourist can readily experience. In dwelling natively in our present situation, we have immediate implicit access to familiar resources to which we can relate in quite different ways from those to which the tourist guide draws our explicit attention. As elaborated in [20], this is vividly exposed by considering what is involved in following an abstract set of rules – such as might specify ‘making a cup of tea’ – in an unfamiliar environment. We have to identify “the teapot”, find the “teabags”, figure out how to “switch the kettle on”, and possibly realise that we have to fill the kettle through the spout, and that only the left-hand plug socket works. This process of implementing an abstract procedure highlights the significance of the hermeneutic perspective: it may be quite unclear what is signified by our abstract symbols (‘teapots’ and ‘teabags’) and instructions (‘switch on the kettle’) without exploratory interaction in the world.
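The gap between an abstract procedure and its enactment in a particular environment can be sketched in code. The following fragment is purely illustrative – all the names and the 'kitchen' data are invented for this essay, not drawn from any EM tool – but it shows how every symbol in a recipe must be bound to something actually found in the situation before the recipe can be followed at all:

```python
# Hypothetical sketch: an abstract recipe refers to symbols such as
# "kettle" and "teabags", but executing it requires binding each symbol
# to whatever this particular kitchen actually affords.

def bind(symbol, environment):
    """Resolve an abstract symbol against the concrete environment."""
    if symbol not in environment:
        raise LookupError(f"cannot identify '{symbol}' here; exploration needed")
    return environment[symbol]

# In an unfamiliar kitchen the bindings are discovered, not given:
unfamiliar_kitchen = {
    "kettle": "fill through the spout",
    "plug socket": "only the left-hand one works",
    "teabags": "in the unmarked tin",
}

for symbol in ["kettle", "plug socket", "teabags", "teapot"]:
    try:
        print(symbol, "->", bind(symbol, unfamiliar_kitchen))
    except LookupError as problem:
        print(problem)  # "teapot" has no binding: interaction is needed
```

The point of the sketch is that the failure case is not exceptional: in an unfamiliar environment, discovering the bindings is the substance of following the rules.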
Software development is framed around activities that are goal-oriented and rule-based like Turing computation. At some level – on a conventional computer architecture – this appeal to a computational foundation obliges us to express everything in terms of programmed activities that effect changes of state. When we frame our conception of a system in terms of functional specifications of component parts, we have to formulate knowledge of the domain in the same way that a tourist agency describes a country: these are the landmarks, the key destinations, the services that will take you from A to B. To realise software development as a lived experience entails changing the quality of our engagement with computing technology so that it more closely resembles being native in a situation. Implementing spreadsheets, databases or objects are practical ways to exploit – and hide – the underlying executing procedures so as to realise the neutrality with respect to focus of attention and the open-ended potential for action that is characteristic of our native environment. But finding a satisfactory way to conceptualise and realise computing in a way that gives priority to native state is challenging precisely because our native environment does not come with a tourist handbook: it mediates meanings by supplying the stage for our exploratory sense-making interactions rather than words carefully pre-crafted to prescribe our interaction.
Placing computational recipes at the core of software development has obvious merits where implementation on conventional hardware is concerned. For instance, the scope for optimisation has been crucially important in promoting effective computer use, and is an important research field in its own right. Key abstract issues concerning functionality and correctness are seen in clear focus. But there are also many undesirable consequences. Knowing recipes is no substitute for deep knowledge of the domain, as so often becomes apparent when trying to adapt software to a new application. There is no way to express the broader aesthetic agenda, as represented in Crockford’s injunction [43, Section 8] to “make your programs look like what they do”. The semantics of a system to be developed can be framed in terms of its potential functionality, but knowing at what point in the development it is appropriate to regard the emerging system as having this semantics is problematic [70]. The roles of the developer and the user are played out in clearly separated realms of experience. Since a functional specification gives no information about the intermediate states traversed by computational procedures, there is no means to blend computational processes as opposed to composing them to form a more complex computational process. This is in sharp contrast to the overlapping of the roles of human agents and the blending of construction and use in the spreadsheet-based environments for software development studied by Nardi in [73].
Agile development, as represented for instance in extreme programming (XP), may be seen as a way to overcome some of these limitations. An agile approach has the advantage of generating executable code that enhances the scope for evaluating the software under development in broader terms. It may be able to ameliorate the problems of modifying the requirement by adopting appropriate strategies for incrementally framing the requirement. When used in conjunction with test-driven development, it affords simplification and control by allowing the states that are being realised by the underlying computational procedures to be monitored. But as West has observed [84, p.28], the nature of the thinking that accompanies XP is critical to its success. More is expected of XP than the kind of basic reassurance about the overall integrity of a collaborative development that is obtained by deploying test-driven checking against a functional specification.
The real benefits of XP are in the support it offers for a hermeneutic stance. This has special relevance to software development that demands radical design. In this connection, as explained by West in [84, p.26-9], an agile approach enables developers to mould the agency that is needed to support the crafting of meanings through exploratory interaction. West advocates the adoption of ‘object thinking’ for the construction of this agency. With this approach, test-driven development can play an additional supporting role in monitoring the characteristics of the objects that have been created. The relationships that feature in these tests may be different in character from testing that a functional specification has been met. They include relationships of the kind that we apprehend in our lived experience, such as the dependency relations that associate a change in one observable with a change in another. An important characteristic of these relationships is that they are a means of gaining a better understanding of agency within the domain. For instance, they can help to address issues such as ‘what would be involved in adapting to a new requirement?’ and ‘what speed of response is required to ensure a satisfying user experience?’15.
In an XP approach to creating software that involves radical design, the software developer aspires to use the computer in the role of an instrument. Informal confirmation that something radically different from traditional development is appropriate for this is given by the amusing blog on the theme of “What if a Piano behaved like a Computer?” at [76]16. It is extraordinary how this nonsensical fantasy, featuring all manner of absurd relationships between the piano keys, the legs of the piano stool and cups of coffee, so aptly conveys our perception of how computers as we conventionally program them are disposed to behave. In the process – when viewed at a meta-level as an exercise in communication – this blog highlights many of the key features of the human mind that are topical in the hermeneutic account of meaning. These include the astonishing power of the mind to appreciate metaphor, to find meaning in words that refer to a fantasy world, and to generate and retain implicit models of the character of inanimate objects.
The limitations of software development that is framed by using functional specifications are most evident in activities where domain learning on the part of the developer(s) is involved. Early in my academic career, my motivation for creating software was to use the computer to support my mathematical research. This proved to be exceptionally challenging, primarily because there was no clear single requirement that could guide the development. Though I was able to make some use of the computer in developing mathematical results, this was essentially only by using the computer in a conventional way to compute the structure of objects of interest and provide empirical evidence to guide conjectures and proofs. My aspiration was for a much more intimate relationship between my thought processes and the supporting software – one that would allow me to blend thinking about mathematics with constructing environments in which to develop and record these thoughts17. It quickly became apparent that this goal was unrealistic because of the notorious difficulty of revising computational procedures in a timely fashion, given the degree of fluidity with which partial insights and speculative ideas change. Most striking in this proposed area of application was the way in which in some contexts the key challenge was to adapt existing software so as to respect the continuity in the thought process, and in others to transform it so as to reflect thinking in a totally different direction.
Anyone who has ventured to develop software in an exploratory fashion will have experienced the trepidation that accompanies making a small change to the code. We know all too well that our seemingly innocuous action may be about to precipitate an avalanche of unintended consequences. When the outcome seems to be favourable, we experience another kind of anxiety – is it possible that in some circumstances where it used to work, it no longer works as before? Then our mind is crowded with hazy memories of all the carefully thought out patches we made en route to first developing our software, the once-vivid issues we encountered and with some difficulty resolved. At these moments, we are like the burglar who has just left the scene of a crime, and is trying to remember exactly what they did and whether they have inadvertently left their fingerprints.
That the study of behaviours is the core subject of software development is undeniable. But it is equally clear that each decision about whether an actual behaviour matches an intended behaviour has at some point to be taken in the moment, and that knowing how to do this is in general problematic. When developing a piece of software to meet a well-defined specification, test-driven development can to some extent steer us away from the uncomfortable experience of trying to hold behaviours in our mind for comparison. Tests can alert us when something has gone wrong, and help us to ensure that the software remains consistent with our intentions. The principle behind this testing is to correlate actual and intended behaviours by monitoring their impact according to a preconceived plan. In this way, it is possible to avoid the detailed correlation of one behaviour with another that I alluded to in connection with devising a Turing machine18.
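The principle of correlating behaviours at preconceived checkpoints, rather than holding two whole behaviours in mind for comparison, can be made concrete in a minimal sketch. The example below is invented for illustration (a toy ledger, not from any cited source): the tests probe only the states the plan designates, and say nothing about the detail of how the behaviour passes between them:

```python
# A toy behaviour under development: apply a list of deposits and
# withdrawals to an opening balance.
def actual_behaviour(balance, transactions):
    for t in transactions:
        balance += t
    return balance

# Checkpoints drawn from the intended behaviour, fixed in advance.
# Each assertion monitors the impact of the behaviour at one point,
# rather than correlating the behaviours step by step.
def test_behaviour():
    assert actual_behaviour(100, []) == 100          # no transactions: unchanged
    assert actual_behaviour(100, [50, -30]) == 120   # net effect matches intent
    assert actual_behaviour(0, [-10]) == -10         # withdrawals pass through

test_behaviour()
```

As the essay goes on to argue, such checkpoints constrain nothing about the intermediate states, and remain adequate only while the requirement and its context stand still.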
Checking that behaviours at all times satisfy suitably devised tests might be sufficient if developing software to meet a static functional requirement were our sole concern. On the other hand, correlating behaviours at checkpoints does not constrain the way in which behaviours are implemented in point of detail. If there are changes to the requirement or the context for the development, there can be no guarantee that the tests we have devised are still appropriate. More problematic yet, when the degree and nature of domain learning involved in the development is too great, there may be no viable way of conceiving a suitable framework for testing.
I conceive the task of developing a complex software system that requires radical design as having many characteristics in common with the task of developing software to support my personal learning in a research field. The challenges in respect of the number and kinds of participants and the nature and degree of learning involved in the design seem incomparably greater. If I extrapolate from my moment-by-moment experience, it is hard to imagine what is in the mind of the software architect who oversees the development of a complex safety-critical software system, and is charged with ensuring its conceptual integrity. It is intimidating enough to be responsible for deciding whether the behaviour that I once constructed with great care and concentration is in every detail what I intended. How much more taxing to be expected to vouch for the state-changing actions and interactions of the diverse agents – human and non-human – associated with the design, implementation and use of a large system. At some level, making any such guarantee requires comprehensive knowledge of the capacity to respond, the speed of response, and where appropriate the skills and experience of all these agents, and an appreciation of how the system will respond in exceptional circumstances, when for instance an agent fails to honour its protocol. The fact that in practice – despite Brooks’s proposal [39] – the role of the software architect is more than likely to be played by a team of designers with complementary specialist knowledge and skills in some respects only compounds the challenge to the imagination.
No matter how well-conceived and organised a development team may be, questions relating to changes to the requirement must ultimately be answered by someone who can say with confidence that the behaviour of pieces of software will still conform to an intended behaviour. As professional developers, they may have much more sophisticated strategies and techniques to assist them, but like me they also have to contemplate the consequences of change by making connections in current moment-by-moment experience. Version control, object-oriented documentation in UML, invariants, declarative abstractions, descriptive identifiers and commented code, integrated development environments and the like make it easier to grasp and adapt the actual behaviour, and to maintain its relation to a requirement as abstractly conceived and specified. They are of limited value in anticipating the significance of changes to the symbolic code where its contextualised interpretation is concerned. The significance of these changes will depend on relationships that – in a sense discussed in the previous section – pertain to our lived experience, rather than being functional in character. What is more, the nature of these relationships cannot be foreseen when the software is first conceived, and remains volatile until we have empirically developed extensive understanding of the agency operating in the application domain19.
Software development has been misdirected by a conceptual framework preoccupied with developing and optimising algorithms to realise functional goals. As Harel recognised in Biting the Silver Bullet [55], the close integration between software and physical devices represented in ‘reactive systems’ that has latterly become ever more prominent in computing applications has radical implications for software development and demands new principles and tools. The environments in which it is plausible to invoke programmed behaviours in the spirit of classical computer science are those in which reliable mechanisms and protocols for changing state and interpreting state-change have been established. They include the environment of the computing machine itself, and extend to the formal computation of abstract input-output relations. They also embrace domains where science has disclosed processes that follow a reliable and predictable pattern that can be automatically computed. In such environments, the idealisations that have been developed in the theory of programming can be invoked, and the possibility of correlating the states of an automated behaviour generated by the computer with those of an external behaviour can be entertained. Environments suited for computation of this kind are in general the products of an engineering process, not only in the traditional sense but perhaps also in the extended sense in which the term might be applied to stable environments that have evolved without human intervention. In the processes of engineering that lead to the identification and construction of such environments, alternative methods of conceiving behaviours are required. This is most obvious when the perspectives of computer science and traditional engineering come together in an application.
Consider, for instance, the way in which ‘intelligent’ applications to help people negotiate stairs might be designed. Whether or not the computer had any direct agency to effect climbing stairs in such an application, it would be necessary to develop some kind of computer model of stair-climbing behaviour. From a computational perspective, we might regard going up or down stairs as an abstract behaviour. Such an abstraction most closely conforms to the manner in which we traverse a staircase that is so familiar to us that we give no conscious consideration to the real environment but – oblivious to how the light falls upon the stairs, and to the identity and geometry of the particular step we are currently standing on – we execute a series of complex physical movements from memory. If this seems to be too crude an account of behaviour, omitting too much detail concerning the agency that informs climbing stairs more generally, we may abstract at another level. A more procedural recipe for specifying stair climbing might involve registering relevant parameters of each step – such as its height and width – as it is encountered, and invoking suitable movements accordingly. From an object-oriented perspective, such a recipe might be seen as a specific instance of a generic method of traversing stairs. The overall strategy here involves treating the world as machine-like by simplification. Real-world observables are represented by mathematical or program variables.
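The two abstractions just described can be set side by side in code. This sketch is hypothetical throughout – the names and the step dimensions are invented for the essay – but it contrasts the fully abstract behaviour, which consults no real-world observables at all, with the more procedural recipe that registers the parameters of each step as it is encountered:

```python
# 1. Fully abstract behaviour: the staircase is so familiar that the
#    movements are executed from memory, oblivious to the real environment.
def climb_familiar(num_steps):
    return ["step up"] * num_steps

# 2. A more procedural recipe: relevant parameters of each step are
#    registered as it is encountered, and a movement chosen accordingly.
#    From an object-oriented perspective, this is one instance of a
#    generic method of traversing stairs.
class StairTraversal:
    def movement_for(self, height, width):
        return f"raise foot {height}cm, advance {width}cm"

    def climb(self, steps):
        # steps: (height, width) pairs observed on the way up
        return [self.movement_for(h, w) for (h, w) in steps]

moves = StairTraversal().climb([(17, 28), (17, 28), (20, 25)])
```

In both variants the world has been made machine-like by simplification: whatever is observed of the real staircase has already been reduced to a handful of program variables.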
The engineer’s perspective, by contrast, realises behaviours as the products of an empirical process that is rooted in lived experience. The primary emphasis is on understanding the environment in which the application is to be engineered: What are the reliable features that can enable programmable behaviours? What will change state? How will these changes be contrived? Rather than simplifying by abstraction, this activity enriches experience of the world, superimposing machine-like interpretations upon it in a way that can be sustained subject to observing sensible modes of interaction. In this way, engineering practice introduces new observables both specific to the context and derived from general theory that has well-established empirical roots. Whilst the frame in which the system is being developed is uncertain or context-specific, the frame is explored dynamically, eschewing abstractions predicated on prematurely fixing the frame. This is in keeping with Eugene Ferguson’s concern that engineering practice should “do justice to the incalculable complexity of the real-world” [50].
The techniques for validating the products of engineering activity are quite different in character from those used in checking the behaviour of software. The engineer is typically concerned with concrete measurements and checks that even if they cannot be made instantly can be carried out in a systematic manner following a standard protocol. Though sophisticated prior knowledge of procedures and regulations and specialised instruments may be needed, the end result is that the engineer is presented with a situation in which they can exercise judgement through direct experience of things that are easy to observe, such as the reading on a meter, the status of a warning light, the current temperature, or the time of day. The immediacy of this feedback and its direct relevance to the present moment also renders the design subject to corroboration throughout the process of construction. In contrast, for the software developer, the pragmatics of making a correlation between behaviours is in general quite unclear. It is only possible to be confident that an abstract computational behaviour conforms to an intended pattern and is in all contexts appropriate if we appeal to knowledge that is informed – explicitly or implicitly – by a wealth of prior experience. What is more, this problem is compounded when we mix computational paradigms, and adapt recipes for generating behaviours so as to make them more efficient.
Empirical Modelling (EM) [21] is a reconceptualisation of computing that is better oriented towards an engineering perspective on systems development. It was first conceived in an embryonic form by the author early in the 1980s with the idea of combining different programming paradigms in an experientially coherent way. Achieving this objective means stepping back from conventional programming to address the more primitive underlying sense-making activity that precedes the formal specification of behaviour. At the core of this activity is a form of model-building that focuses on identifying observables, dependencies and agency in the application domain – hence ‘Modelling’. The central ingredients in this distinctive form of modelling are observation and experiment – hence ‘Empirical’. Largely through collaboration with my colleague Steve Russ and computer science students at the University of Warwick over the last twenty-five years, EM has now developed a relatively mature body of concepts, principles and tools with wide potential applications, many of which stem directly from its unconventional stance towards software development [4].
The contribution of EM can be understood with reference to practice, principles and philosophy. All three are centrally concerned with a process of construction in which the relationship between a model and its referent is corroborated in the moment-by-moment experience of the modeller. When we look at a traditional computer program, we see a recipe for a behaviour that is only realised when input is provided and execution begins. By contrast, the product of EM is an interactive source of experience for the modeller (an “artefact”) that at any moment stands in a special relation to some other experience that can presently either be accessed directly or imagined. Depending moment-by-moment on what perspective the modeller adopts, this special relation may take many different forms. For instance, it may refer to the current status of the design, the result of a speculative exploratory interaction or the current state in a simulation of transitions which the artefact is undergoing. According to the interpretation that is being applied to an EM artefact, it may be appropriate to call it “a construal”, “a model” or indeed “a program”. By nature, EM products are so fluid that it is generally unhelpful to ascribe any one of these terms except as a way of indicating which perspective on the part of the modeller is at present most topical. Even the sharp distinctions between the current status of the development and the current state of an execution, and between the roles of the developer, the user and automated agents that also act to change the state, can be dissolved in this way and are explicitly and dynamically constructed as the modeller sees fit. The merits of this in relation to the many different perspectives that may pertain at any point in a software development are clear.
EM practice is based on framing networks of observables and dependencies (“OD-nets” [78]) whose role is to support metaphors for state as it is experienced by the model-builder within the application domain. The most obvious precedent for this semantic practice is the development and use of spreadsheets. In that context, whether the spreadsheet is in the process of being developed or used, the modeller holds in mind the current state of the spreadsheet, as presented on the computer screen, and the current state of the situation to which the spreadsheet pertains. The modeller’s interaction with the spreadsheet in this context can readily be interpreted in EM terms: the cells being instances of observables whose values may be defined in terms of other observables by dependency, and the environment affording many different kinds of agency that are associated with different perspectives on the referent. An examinations spreadsheet is a simple illustration: in the process of drawing up the exam grid certain dependencies are introduced to construct the average marks for modules, or the overall marks of students; in the use of the spreadsheet, values are entered into the cells of the grid to correspond to student marks; different human agents are privileged to adapt the values and dependencies according to their status – for instance, as the developer of the examinations spreadsheet, or the responsible examiner for a module.
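The spreadsheet-like character of an OD-net can be suggested by a minimal sketch. What follows is not an EM tool and makes no claim to represent the notations actually used in the project – it is a toy written for this essay – but it captures the two kinds of observable in the examinations example: cells whose values are entered directly, and cells defined by dependency on them, so that a dependency is maintained whenever its sources change:

```python
# A toy observable-dependency network in the spirit of a spreadsheet.
class ODNet:
    def __init__(self):
        self.values = {}        # observables entered directly
        self.definitions = {}   # observables defined by dependency

    def set(self, name, value):
        self.values[name] = value

    def define(self, name, fn):
        self.definitions[name] = fn

    def get(self, name):
        # A defined observable is re-evaluated on demand, so the
        # dependency holds whenever its source observables change.
        if name in self.definitions:
            return self.definitions[name](self)
        return self.values[name]

# The examinations grid: marks entered per student, the module
# average maintained by dependency.
grid = ODNet()
grid.set("alice_cs101", 72)
grid.set("bob_cs101", 58)
grid.define("cs101_average",
            lambda g: (g.get("alice_cs101") + g.get("bob_cs101")) / 2)

grid.get("cs101_average")   # 65.0
grid.set("bob_cs101", 64)   # the examiner revises a mark...
grid.get("cs101_average")   # ...and the average follows: 68.0
```

The different agents the paragraph describes – the developer who introduces the definitions, the examiner who enters and revises marks – correspond here to who is privileged to call `define` and who to call `set`.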
The commonplace use of spreadsheets should not disguise the profound nature of the underlying semantic model, far richer than that which can be expressed in a formal manner. Consider the way in which rows and columns in the array layout metaphorically convey the records of individual students and modules in a way that is well-matched to our visual perception. Consider also the way in which the spreadsheet serves a constructive role – creating a representation that is perceived as having conceptual integrity and objectivity for all the participants, despite the fact that its different components are endorsed in their experience in quite different ways. This characteristic – typically unremarked in normal spreadsheet use – was once vividly exposed at an exam board I attended where one of the entries turned out to refer to a student who had accepted a place on the course, but in fact had never shown up. In our discussions, it took some time for it to become clear that each of the examiners presumed that this candidate was known to someone else. These characteristics of spreadsheets in use illustrate how effectively interaction with the spreadsheet as an artefact is effortlessly experienced as if it pertained directly to the application domain. The qualities of the spreadsheet in respect of What if? experimentation are well-recognised, and play a fundamental role in ensuring that experiential connections of this kind can be established and maintained.
The notion that EM offers principles for making informal semantic relationships has proved particularly difficult to clarify and sustain. It is in conflict with the idea – often promoted in theoretical computer science – that all academically respectable use of computers should be mediated directly or indirectly by formal notations and be accountable in logical terms. EM thinking builds on the capacity – illustrated in the most basic use of a spreadsheet – for exploiting a network of observables and dependencies to express a human agent’s state-as-experienced. Such a technique can be generalised to a concurrent system, where an OD-net can be created to reflect the interaction of each agent with other agents and their environment. Framing an OD-net is a way of declaring what is perceived to be concurrent in the view of an agent. This reflects the fact that concurrency has a subjective aspect, as when one agent perceives only a dependency where another may be able to detect the sequence of individual states associated with maintaining the dependency. The ‘commonsense concurrency’ associated with running together in the experience of an agent, as expressed in an OD-net, can be linked to the more objective notion of concurrency that is favoured in formal treatments of software development [36]. This presupposes an external observer in a role somewhat similar to that of Brooks’s software architect [39] whose perspective can be realised by specifying the perspectives of the individual agents on the observables, dependencies and agents associated with the system and empirically moulding their corporate interaction to derive an objective behaviour20. The extent to which this proposal conflicts with the widely-accepted imperative to give formal accounts of concurrency is illustrated by the response of an anonymous reviewer who asserted “There is no such thing as commonsense concurrency”.
It would be inappropriate to counter the charge of vacuousness by venturing a formal semantics – as others have suggested. The role to which EM artefacts are best adapted – though by no means confined – is that of sense-making and communication in the preliminary stages of design. When suitably developed, an EM artefact can incorporate counterparts of observables characteristic of its referent and reflect the dependencies and agency to which these are subject. Used in this way, such an artefact can serve a similar function to the constructions made by an experimental scientist in order to record their current understanding of a phenomenon21. The term ‘construal’ that we have adopted for EM artefacts that act in a sense-making role is borrowed from the philosopher of science David Gooding who introduced it in his studies of the experimental work of Faraday on electromagnetism [51]. By revisiting Faraday’s experiments and consulting the documentation in his diaries, Gooding was able to retrace the development of the construals that led Faraday to make the first electric motor. This process of construction can be seen as exemplifying the way in which – without recourse to formal semantics, but with reference only to their interactive experiential characteristics, suitably documented – we aspire to exploit EM construals in the development of software22.
The choice of the word ‘construction’ to describe the EM approach to development is deliberate. As Bruno Latour explains so eloquently in The Promises of Constructivism [69], what we understand by construction has critical significance for the potential rehabilitation of the concept of constructivism. In seeking an appropriate notion of construction, Latour takes his inspiration from the work of the traditional architect, and his discussion could equally well apply to the putative software architect who oversees the entire design process in Brooks’s vision [39]. As outlined above, EM for software development – as mediated by construals that may in due course evolve into programs – is centrally about a process of negotiation between the perspectives of many different agents, human and non-human, conducted in an environment that is itself to be engineered subject to the constraints imposed by physical laws and social conventions. The guarantees concerning construction that Latour spells out as necessary for the rehabilitation of the concept are consistent with good practice in EM [33].
A most significant feature of an EM artefact is the dynamic and contingent way in which it acquires meaning. To make sense of an artefact is to interact with it and recognise different ways in which its responses can be experienced as invoking or exploring a referent. Traditional ways of describing the semantics of a computer-based artefact are ill-suited for discussing construals. Even where experience has a prominent role in the description, the focus is typically upon user experience, whereas interaction with a construal is closer in spirit to instrumental performance than to the goal-directed use of a tool. Communicating about EM activity is inhibited by perceptions of semantics that are well-established in computer science: that only what can be expressed in logic and is mathematically consistent can be represented on the computer, that there should be no ambiguity in the interpretation of programs, that all computing activities are of their essence rule-based. These constraints apply to the model of computation conceived by Turing – they have much less relevance for the practices of computing-in-the-wild.
The formalist’s concerns about the semantics of EM artefacts are (in principle!) addressed by acknowledging that the relationship which is established in the modeller’s mind between the artefact and its referent is not amenable to generic explanation. This premise, which in effect contends that such a relationship can be a fact of experience, is the basic tenet of William James’s philosophical stance of ‘radical empiricism’ [62]. James’s elaboration of this idea provides a framework for thinking about the semantics of EM artefacts that can meet the challenge of bringing coherence to the diverse perspectives represented in software development. A key element of this reconciliation of perspectives is the idea that there are no absolute rules – that our accounts of agency are necessarily provisional in character, and that we have only pragmatic grounds for arbitrating between different models of agency. This thesis, which – as James observes in [62] – applies even to radical empiricism itself, commends an approach to semantics that is well-suited to the practice of EM. In particular, it emphasises the significance of construal as a source of lived experience that is always open to re-configuration and re-interpretation.
The semantics of construals makes them well-suited to fields where a hermeneutic perspective is appropriate. Potential applications of EM for software development discussed in theses by doctoral students of EM (cf. [1]) include conceptual design, decision-support, financial modelling and educational technology. To my mind, the application area with which EM has the greatest affinity is Humanities Computing – in the human-centric sense in which this discipline has been conceived by Willard McCarty [28, 71].
Making the transition from my 1970s perspective on programming and software development has been highly rewarding but difficult in many respects. EM principles put too much emphasis on the role of the subjective and social to sit well alongside the perspective of ‘hard’ computer science, and too much emphasis on a specific paradigm for exploiting technology to suit that of ‘soft’ computer science. As a result, few people outside our group have had direct experience of EM, and its principles, practices and tools have been developed with quite limited human and technical resources, almost entirely internal to our project. Though we have been exceptionally fortunate to have had the support of many technically gifted and enthusiastic research students who have prototyped tools for EM with imagination and ambition, much more could be achieved if others were persuaded of the qualities and potential benefits of an EM approach. And whilst James’s radical empiricism [62] is to my mind an excellent philosophical framework within which to set EM thinking, it has never attracted the same attention as his foundational work in psychology, perhaps in part because the priority it gives to experience over language is uncongenial to many philosophers [81]. It is to be hoped that the contribution that EM can make to Lanier’s vision [68] for exploiting computing technology for “post-symbolic communication” will help to redress this neglect.
Revisiting my earliest experience of computer programming, I now realise how much I was misled in drawing a sharp line between the essential and the ephemeral aspects of software development. It has become apparent that the development of software has in general to address the moulding of the machine itself (cf. the job control output), the way in which human input is mediated to the machine (cf. the stack of punched cards), and the construction of metaphors that assist the developer in grasping relevant structure (cf. the flowchart). The extent to which these factors matter in software development can be gauged from a modern programming environment such as Scratch [12] – a language that supports only the most basic programming constructs, but transforms the specification of input through its drag-and-drop interface, supports rich multi-media functions and gives the programmer easy access to a range of powerful peripheral devices for giving stimuli and generating effects. Particularly relevant in this context is the fact that built-in dependency is used to make the connections between these devices and the program code. This illustrates one of several ways in which dependency-maintenance is now making an impact – notably in web environments, where the display is always to be maintained so as to be consistent with the underlying data, and in modern development toolkits such as Flex [8] and the Windows Presentation Foundation [13]. This emerging interest in exploiting dependency in software development is further evidence for the topicality and relevance of EM thinking. But the mere addition of dependency to the repertoire of techniques available to the developer will only create further confusion [80]. Dependency can be used effectively only in conjunction with the broader reappraisal of the place of formal and behavioural representations in the development process discussed in this essay.
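The essence of dependency-maintenance can be conveyed by a minimal sketch in JavaScript, the language in which our current web-enabled tools are implemented. The `Observable` class and its API below are purely illustrative assumptions of mine, not the interface of EDEN, Cadence or any other EM tool: an observable either holds a plain value or is defined by a formula over other observables, and a change to a source re-evaluates its dependents, much as a spreadsheet recalculates cells.

```javascript
// Minimal dependency-maintenance sketch (no cycle detection).
// An observable holds a value directly, or is defined by a formula
// over other observables; changing a source recomputes its dependents.
class Observable {
  constructor(name) {
    this.name = name;
    this.value = undefined;
    this.formula = null;       // function of source values, or null
    this.sources = [];         // observables this one depends on
    this.dependents = new Set();
  }
  set(value) {                 // explicit assignment breaks any definition
    this.sources.forEach(s => s.dependents.delete(this));
    this.sources = [];
    this.formula = null;
    this.value = value;
    this.propagate();
  }
  define(sources, formula) {   // e.g. area.define([w, h], (w, h) => w * h)
    this.sources.forEach(s => s.dependents.delete(this));
    this.sources = sources;
    this.formula = formula;
    sources.forEach(s => s.dependents.add(this));
    this.recompute();
  }
  recompute() {
    this.value = this.formula(...this.sources.map(s => s.value));
    this.propagate();
  }
  propagate() {
    this.dependents.forEach(d => d.recompute());
  }
}

// Usage: the area of a rectangle is maintained by dependency.
const width = new Observable("width");
const height = new Observable("height");
const area = new Observable("area");
width.set(3);
height.set(4);
area.define([width, height], (w, h) => w * h);
console.log(area.value);  // 12
width.set(10);
console.log(area.value);  // 40 – updated without explicit recomputation
```

The point of the sketch is that the definition of `area`, once made, is maintained indefinitely: the developer observes the consequences of a change immediately, rather than orchestrating recomputation procedurally.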
The crucial need is for that fundamental shift in focus that a Jamesian outlook commends – towards knowing that is rooted in relationships between experiences that are themselves given in experience.
A common theme that runs through this essay is the need to find principles for software development that embrace the hermeneutic tradition. The critiques by Naur [74], Cantwell-Smith [41] and Winograd and Flores [85] all point to this conclusion. Such a move is endorsed by Latour’s agenda for constructivism [69] and West’s promotion of ‘object thinking’ [84]. As Latour points out in [69], being able to identify what is meant by construction is critical; as he himself makes clear, the reputation of his research programme on constructivism has been seriously – if inadvertently – damaged by association with sympathetic researchers who have invoked quite inappropriate notions of construction.
A major problem for those who wish to defend a notion of construction is that they have both to exhibit practical techniques for building artefacts and to explain what distinctive features make these techniques characteristic of construction. As West emphasises in relation to practising XP [84, p.29-30], this means that construction is more than learning to apply rules and techniques – it entails adopting a way of thinking.
There are strong points of contact between EM and object thinking. For instance, they both aspire to serve as modes of construction. They have both had to wrestle with the formidable problems of communicating about a mode of thinking inspired by experience of a practice. Both place an emphasis on adopting an anthropomorphic stance, whereby the developer projects herself into the role of the agents within the domain. Both recognise the need to set software development in the first instance within the conceptual frame of distributed cooperation and communication.
There are a number of ways in which EM can complement object thinking to strengthen the case for radical change in the software development culture.
On the practical front, there has been extensive work on developing EM tools [3] and illustrative construals [2]. This has been useful in establishing proof-of-concept for principles and clarifying the philosophical orientation, though practical applications in software development have so far been limited. Over the last twenty years, many different perspectives on software systems development have been represented in the work of EM doctoral students. In this work, there has been a progression from thinking about EM primarily as a broad conceptual framework for conceiving software to be deployed in the early stages of development to regarding it as an approach with the potential to deliver software systems with exceptional expressive qualities. The prospects of marrying EM with software development in the spirit of object thinking can best be appreciated by consulting the work of Nicolas Pope. Pope’s doctoral thesis [78] includes a robust critique of the principal tool that has been used in EM to date (“the EDEN interpreter”). It also introduces Cadence, a new tool [22, 77], potentially much better suited to software development, in which he combines the key EM concept of dependency maintenance with original ideas initially influenced by object-oriented languages in the Smalltalk tradition. Our current research is directed at developing a web-enabled variant of the EDEN interpreter incorporating ideas derived from Cadence [10, 72] that can help us to understand and exploit the connection between EM and object thinking more fully. This is being implemented in JavaScript.
EM can also offer support to the philosophical stance behind object thinking. The term ‘construal’ does not suffer from the acute legacy problem that the term ‘object’ has in relation to software development. The notion of a construal has a well-developed characterisation with foundational support from the writings of Gooding [51] and his account of the experimental work of Faraday. By showing how construals can migrate into formally specified programs in which the invariants can be construed as sophisticated kinds of observable, it has been possible to demonstrate the complementarity of the hermeneutic and formalist stances within the EM framework [24]. This helps both to clarify the distinction between the two philosophical stances, and to illuminate their relationship. It also serves to illustrate the much more nuanced account of the relationship between experience and logic that EM affords.
At the heart of EM is a pivotal issue that first excited my interest in the nature of software development: finding a secure place to stand between logical accounts of programming that are too far removed from experience and concrete accounts that are so richly contextualised that they confound abstract interpretation. Such a place has to be constructed by crafting the technology, honing our skill, and engineering the context so that we can directly experience the connection between an artefact on the computer and a situation in the world. This personal experience is what William James identifies as the root of all knowing [62]. Modern computing technology empowers us to establish such relationships in unprecedented ways, as when – without conscious thought – we relate our physical surroundings to the image on a satnav. This is meaning that cannot be denied, though it defies logic. It is not unconditionally reliable – it may be subverted by roadworks, accidents and earthquakes – and it is particular to the human observer who makes associations as best they can, whether as a native or a tourist. Liberating such meaning is the motivation for realising software development as lived experience.
Notes
- This is in keeping with the way in which John Dewey uses the term ‘experience’ to refer to “an actual focusing of the world at one point in a focus of immediate shining apparency”, as discussed in great depth in the introduction to [47]: Essays in Experimental Logic.
- This essay is informed by a specific proposal for addressing this issue that stems from research on Empirical Modelling (EM). The decision to defer discussion of EM until the end of the essay is deliberate, and is intended to avoid further complicating what is already a highly complex agenda. A reader who finds this oblique approach frustrating may find it helpful to think of EM as resembling the mode of software development using spreadsheets discussed by Bonnie Nardi in [73]. Useful sources for orientation on EM are Karl King’s MSc dissertation Uncovering Empirical Modelling [65] and the Sudoku Experience workshops at [6]. Alternatively, you may wish to read the final sections of the essay first.
- In this metaphor, the destination is the counterpart of the functional specification of the program. Structured programming engineers the environment and educates the programmer in such a way as to make it easier to realise the functional specification. Declarative programming only requires that we frame a suitable functional or logical specification that can then be routinely interpreted.
- Empirical evidence that interpreting formal specifications is cognitively challenging can be found in Vinter et al [83].
- “Formalism works close to the computer, is highly questionable at the level of the application, and fails at the level of complete systems and architectures.”
- Often as quasi-routine exercises in formalisation delegated to novice researchers familiar with logic but with little knowledge of programming.
- Consider how Dijkstra [49] quotes the following extract from an essay on Cayley and Sylvester by the mathematician E. T. Bell [15]: “If there is any mysterious virtue in talking about situations which arise in analysis as if we were back with Archimedes drawing diagrams in the dust, it has yet to be revealed. Pictures after all may be suitable only for very young children; Lagrange dispensed entirely with such infantile aids when he composed his analytical mechanics. Our propensity to “geometrize” our analysis may only be evidence that we have not yet grown up”.
- In keeping with the terminology used by West in his discussion of the philosophical context for software development [84, p.51], I have adopted the term ‘hermeneutic’ where in other EM publications I have used the term ‘constructivist’.
- An EM construal that illustrates the status of SQL as a flawed logical model of relational algebra, and can be used to expose the practical implications for its design and applications is described in [26].
- To these we might add even moral imperatives, such as honesty. As Crockford observes in [43, Section 8]: “We imagine that we spend most of our time power typing” ... when in fact ... “We spend most of our time looking into the abyss, asking ‘My God - what have I done?’, trying to make sense of this puddle of confusion and turn it back into a working program.”
- The online construal at [9], as described in [18], uses EM principles and tools to illustrate the distinction between mathematical objects as they can be automatically generated from an axiomatic specification using the Alloy tool [59] and as they may be conceived by a mathematician.
- Some corroboration for this broader view of mathematical activity can be found in Byers’s account of how mathematicians think [40].
- The EM construal of a digital watch – see digitalwatchFischer1999 in the EM archive [2] – illustrates the expressive power of OD-nets as a way of representing state. This construal incorporates a statechart devised by Harel to represent the display functions of a watch [56] as one aspect of a much more comprehensive representation of the state of the watch.
- Naur’s reflections on the role of intuition in software development are discussed from an EM perspective in [29]. It is particularly interesting to note that Naur came independently to the conclusion that William James’s thinking was particularly relevant in this connection (see for instance Naur’s summary of James’s ideas on knowing in [61]). Whilst Naur took his inspiration from James’s seminal writings on psychology, EM has primarily drawn on James’s later philosophical writings on radical empiricism (cf. [16]).
- The role that modelling plays in understanding agency within the domain in the context of software development is discussed from an EM perspective in [27].
- The merits of EM as a basis for using the computer as an instrument are discussed in [30]. A much more elaborate (but much less amusing) account of how a computer might resemble a musical instrument, also drawing on an EM perspective, can be found in [19].
- I first recognised the potential for using scripts of definitions to express state in connection with using the computer interactively to construct Cayley diagrams – see [31]. There were precedents for using dependency in other notations used for interactive graphics, and in spreadsheets, though I was not aware of this at that time.
- A parallel may be drawn with the principles of constraint-based modelling in intelligent tutoring systems, where constraints can serve as checks on whether the learner is on track. This implicitly presumes a high degree of understanding of the nature of the learning activity. A relevant discussion of the character of learning activities with reference to the distinction made by William James [62] between ‘understanding backwards’ and ‘understanding forwards’, can be found in [17].
- In commenting on authentic software development practices, I am handicapped by my lack of experience. A reviewer who is clearly much more knowledgeable than I am about real software development practice commented on the impact of test-driven development (TDD) on the developer’s experience. Coincidentally, I also received feedback on the same theme from Chris Brown, now a Software Development Manager at i-nexus, a UK software company, who worked closely with the EM group on projects in his final undergraduate year. It may be helpful to the reader to contrast his feedback with my attempt to describe the role of TDD from an EM perspective: “TDD ... strikes me as resonating quite strongly with some of the ideas surrounding EM. For example, in EM you start with the simplest possible model, and iterate towards your goal in a series of experiment-observe-refine cycles. In TDD, you similarly start with the simplest possible requirement, and follow a cycle of writing a unit test to assert the requirement, running all your tests, then making the smallest possible code change such that the tests pass. You continue to repeat, making the smallest possible change to the specification at each step. The interesting part is that it all but eliminates the anxiety about whether your latest change has adversely affected the software you are working on. Instead of trying to build a mental model of the program and analysing it to see whether it is consistent, you simply run the tests. If they pass, you move on to the next increment without giving it a second thought. This for me resulted in a significant change in the experience of software development.”
- This was first developed as an approach to modelling and simulating concurrent systems [34] with sponsorship from British Telecom Research Labs. The views of the individual agents were described using the LSD notation [32] and animated using the Abstract Definitive Machine [35]. Adzhiev and Rikhlinsky, at the Moscow Engineering Physics Institute, later made an ambitious attempt to extend this strategy to build a practical environment for software development (“The LSD Engine”) [14].
- This theme is discussed in more detail from a philosophical perspective in [23] and with reference to the development of a construal of cataglyphis ant navigation in [64].
- Gooding’s perspective on this proposal is described in [52].
- Practical software applications of EM include timetabling [37], ant navigation [64], elevator design [25] and Sudoku puzzle solving [6]. EM techniques had an influential role in work by Richard Cartwright at the BBC R&D Labs on porting interactive TV applications to different digital platforms that led to the establishment of a new international standard [11].
- Relevant theses include: Yung (1993), Ness (1997), Cartwright (1999), Sun (1999), Ch’en (2001), Wong (2003), Ward (2004), Chan (2009) and Pope (2011). These are available online from [1].
- This draws on well-developed ideas of William James, as illustrated in the poster that can be accessed online at [5].
Acknowledgements
I am much indebted to my colleague Steve Russ and the many students of Empirical Modelling who have contributed to the work described in this essay. Particular credit is due to Nick Pope, without whose inspiration this essay would not have been written. I am most grateful to David West, Megan Beynon and two anonymous reviewers for encouragement and constructive criticism. I also thank Russell Boyatt for technical help in preparing this essay.
References
[1] http://www2.warwick.ac.uk/fac/sci/dcs/research/em/publications/phd/.
[2] empublic.dcs.warwick.ac.uk/projects.
[3] http://www2.warwick.ac.uk/fac/sci/dcs/research/em/software/.
[4] www.dcs.warwick.ac.uk/modelling.
[5] http://www2.warwick.ac.uk/fac/sci/dcs/research/em/wj_re_em/posterfinal.pdf.
[6] http://www.dcs.warwick.ac.uk/~wmb/sudokuExperience/workshops/.
[7] http://plato.stanford.edu/entries/turing/.
[8] http://www.adobe.com/products/flex.html.
[9] http://www.dcs.warwick.ac.uk/~wmb/webeden/Group8OpenDayNov2008.html.
[10] http://jseden.dcs.warwick.ac.uk.
[11] http://www.bbc.co.uk/rd/publications/whitepaper134.shtml.
[12] http://scratch.mit.edu.
[13] http://msdn.microsoft.com/en-us/library/ms754130.aspx.
[14] V. Adzhiev and A. Rikhlinsky. The LSD engine. Technical report, Moscow Engineering Physics Institute, 1997.
[15] E. T. Bell. Men of Mathematics (First published 1937). Touchstone Books, 1986.
[16] M. Beynon. Radical Empiricism, Empirical Modelling and the nature of knowing. Cognitive Technologies and the Pragmatics of Cognition: Special Issue of Pragmatics and Cognition, 13:615–646, Dec. 2005.
[17] M. Beynon. Towards technology for learning in a developing world. In Proc. IEEE 4th International Workshop on Technology for Education in Developing Countries, pages 88–92, Iringa, Tanzania, July 2006.
[18] M. Beynon. Constructivist Computer Science Education Reconstructed. HEA-ICS ITALICS e-Journal, 8:73–90, 2009.
[19] M. Beynon. From formalism to experience: a Jamesian perspective on music, computing and consciousness. In D. Clarke and E. Clarke, editors, Music and Consciousness: Philosophical, Psychological, and Cultural Perspectives, pages 157–178. OUP, 2011.
[20] M. Beynon. Turing’s approach to modelling states of mind. In S. B. Cooper and J. van Leeuwen, editors, Alan Turing - His Work and Impact, pages 70–76. Elsevier, 2012.
[21] M. Beynon. Modelling with experience: construal and construction for software. In C. Bissell and C. Dillon, editors, Ways of Thinking, Ways of Seeing, pages 197–228. Springer-Verlag, Jan. 2012.
[22] M. Beynon and N. Pope. Cadence and the Empirical Modelling conceptual framework: a new perspective on modelling state-as-experienced. Research Report 447, Department of Computer Science, University of Warwick, 2011. http://www.dcs.warwick.ac.uk/report/pdfs/cs-rr-447.pdf.
[23] M. Beynon and S. Russ. Experimenting with Computing. Journal of Applied Logic, 6:476–489, 2008.
[24] M. Beynon, J. Rungrattanaubol, and J. Sinclair. Formal Specification from an Observation-Oriented Perspective. Journal of Universal Computer Science, 6:407–421, 2000.
[25] M. Beynon, S. Rasmequan, and S. Russ. A New Paradigm for Computer-Based Decision Support. Decision Support Systems, 33: 127–142, 2002.
[26] M. Beynon, A. Bhalerao, C. Roe, and A. Ward. A computer-based environment for the study of relational query languages. In LTSN-ICS Workshop on Teaching Learning and Assessment in Databases (TLAD), Coventry, UK, 2003.
[27] M. Beynon, R. Boyatt, and S. Russ. Rethinking Programming. In Proceedings IEEE ITNG 2006, pages 149–154, Las Vegas, Nevada, USA, 2006.
[28] M. Beynon, S. Russ, and W. McCarty. Human Computing: Modelling with Meaning. Literary and Linguistic Computing, 21:141–157, 2006.
[29] M. Beynon, R. Boyatt, and Z. Chan. Intuition in software development revisited. In Proceedings of 20th Annual Psychology of Programming Interest Group Conference, Lancaster University, UK, 2008.
[30] M. Beynon et al. The Computer as Instrument. In Proc. 4th International Conference on Cognitive Technology, volume 2117 of LNCS, pages 476–489. Springer-Verlag, 2001.
[31] W. M. Beynon. A definition of the ARCA notation. Research Report 87, Department of Computer Science, University of Warwick, 1983.
[32] W. M. Beynon. The LSD notation for communicating systems. Research Report 87, Department of Computer Science, University of Warwick, 1986. Presented at 3rd BCTCS, Leicester 1987.
[33] W. M. Beynon and A. J. Harfield. Lifelong Learning, Empirical Modelling and the Promises of Constructivism. J of Computers, 2 (3):43–55, 2007.
[34] W. M. Beynon, M. T. Norris, and M. D. Slade. Definitions for modelling and simulating concurrent systems. In Proc. IASTED conference ASM 1988, pages 94–98. Acta Press, 1988.
[35] W. M. Beynon, M. D. Slade, and Y. W. Yung. Parallel computation in definitive models. In Proc. CONPAR 1988, pages 359–367, Manchester, UK, June 1988.
[36] W. M. Beynon, M. T. Norris, R. A. Orr, and M. D. Slade. Definitive specification of concurrent systems. In Proc. UK IT 1990, IEE Conference Publications 316, pages 52–57, Southampton, UK, 1990.
[37] W. M. Beynon, A. Ward, S. Maad, A. Wong, S. Rasmequan, and S. Russ. The Temposcope: a Computer Instrument for the Idealist Timetabler. In Proc. 3rd international conference on the practice and Theory of Automated Timetabling, pages 153–175. Konstanz, Germany, August 16-18, 2000.
[38] P. Brödner. The Two Cultures in Engineering. In Skill, Technology and Enlightenment, pages 249–260. Springer-Verlag, 1995.
[39] F. P. Brooks. No Silver Bullet: Essence and Accidents of Software Engineering. IEEE Computer, 20(4):10–19, 1987.
[40] W. Byers. How Mathematicians Think: Using Ambiguity, Contradiction, and Paradox to Create Mathematics. Princeton University Press, 2007.
[41] B. Cantwell-Smith. The Foundations of Computing. In M. Scheutz, editor, Computationalism: New Directions, pages 23–58. Cambridge, MA: MIT Press, 2002.
[42] E. F. Codd. A Relational Model of Data for Large Shared Data Banks. Communications of the ACM, 13(6):377–387, 1970.
[43] D. Crockford. yuiblog.com/crockford/.
[44] D. Crockford. JavaScript: The Good Parts. O’Reilly, 2008.
[45] O. Dahl, E. Dijkstra, and C. Hoare. Structured Programming. Academic Press, 1972.
[46] C. J. Date and H. Darwen. The Third Database Manifesto. Database Programming and Design, 8(1), 1995.
[47] J. Dewey. Essays in Experimental Logic. Chicago: University of Chicago, 1916.
[48] E. Dijkstra. A Discipline of Programming. Prentice Hall, 1976.
[49] E. W. Dijkstra. http://www.cs.utexas.edu/users/EWD/transcriptions/EWD07xx/EWD772.html.
[50] E. S. Ferguson. Engineering and the Mind’s Eye. The MIT Press, 1992. ISBN 0-262-06147-3.
[51] D. Gooding. Experiment and the Making of Meaning. Kluwer, 1990.
[52] D. Gooding. Some Historical Encouragement for TTC: Alchemy, the Calculus and Electromagnetism. In Proc. Workshop ‘Thinking Through Computing’. Computer Science, University of Warwick, 2007. http://www2.warwick.ac.uk/fac/sci/dcs/research/em/thinkcomp07/gooding2.pdf.
[53] D. Gries. The Science of Programming. Springer-Verlag, New York, 1981.
[54] D. Harel. On Visual Formalisms. Communications of the ACM, pages 514–530, May 1988.
[55] D. Harel. Biting the Silver Bullet: Towards a Brighter Future for Software Development. IEEE Computer, Jan. 1992.
[56] D. Harel. Algorithmics. Addison-Wesley, Reading, MA, 1992.
[57] P. Henderson. Functional Programming - Application and Implementation. Prentice-Hall International, 1980.
[58] J. Hughes. Why Functional Programming Matters. In D. Turner, editor, Research Topics in Functional Programming, pages 17–42. Addison-Wesley, 1990.
[59] D. Jackson. Software Abstractions: Logic, Language and Analysis. MIT Press, 2006.
[60] M. Jackson. What Can We Expect From Program Verification? IEEE Computer, 39(10):53–59, Oct. 2006.
[61] W. James. The psychology of knowing. In Knowing and the Mystique of Logic and Rules. Kluwer Academic Publishers, 1995. Abridged by P. Naur.
[62] W. James. Essays in Radical Empiricism. Bison Books, 1996.
[63] H. Kaindl. Difficulties in the Transition from OO Analysis to Design. IEEE Software, 16:94–102, 1999.
[64] D. Keer, S. Russ, and M. Beynon. Computing for construal: an exploratory study of desert ant navigation. Procedia Computer Science, 1(1):2207–2216, May 2010.
[65] K. King. Uncovering Empirical Modelling. MSc thesis, Department of Computer Science, University of Warwick, 2007. http://www2.warwick.ac.uk/fac/sci/dcs/research/em/publications/mscbyresearch/kking/.
[66] J. Kramer. Is abstraction the key to computing? Communications of the ACM, 50:36–42, 2007.
[67] L. Lamport. The Future of Computing: Logic or Biology. Talk at Christian Albrechts University, Kiel, 2003. http://research.microsoft.com/users/lamport/pubs/future-of-computing.pdf.
[68] J. Lanier. You are not a gadget. Penguin Books, 2010.
[69] B. Latour. The Promises of Constructivism. In D. Ihde, editor, Chasing Technoscience: Matrix of Materiality. Indiana University Press, 2003.
[70] M. Loomes and C. Nehaniv. Fact and Artifact: Reification and Drift in the History and Growth of Interactive Software Systems. In Proc. 4th International Conference on Cognitive Technology, volume 2117 of LNCS, pages 25–39. Springer-Verlag, 2001.
[71] W. McCarty. Humanities Computing. Palgrave Macmillan, 2005.
[72] T. R. Monks. A Definitive System for the Browser. MSc Dissertation Report, Computer Science, University of Warwick, 2011.
[73] B. Nardi. A Small Matter of Programming: Perspectives on End User Computing. MIT Press, 1993.
[74] P. Naur. Intuition in Software Development. In TAPSOFT, volume 2, pages 60–79, 1985.
[75] P. Naur. Knowing and the Mystique of Logic and Rules. Kluwer Academic Publishers, 1995.
[76] @OzzyDweller, 2011. http://dwellertunes.blogspot.co.uk/2011/06/what-if-piano-behaved-like-computer.html.
[77] N. Pope and M. Beynon. Empirical Modelling as an unconventional approach to software development. In Proc. SPLASH 2010 Workshop on Flexible Modeling Tools, Reno/Tahoe Nevada, USA, 2010.
[78] N. W. Pope. Supporting the Migration from Construal to Program: Rethinking Software Development. PhD thesis, Department of Computer Science, University of Warwick, Dec. 2011.
[79] M. Ridley. Database Systems or Database Theory – or ‘Why Don’t You Teach Oracle’. In LTSN-ICS Workshop on Teaching Learning and Assessment in Databases (TLAD), Coventry, UK, 2003.
[80] C. Roe and M. Beynon. Dependency by definition in Imagine-d Logo: applications and implications. In Ivan Kala (ed.) Proc. of the 11th European Logo Conference, Bratislava, Slovakia, 2007.
[81] E. Taylor and R. Wozniak. Pure experience, the response to William James: An introduction. In E. Taylor and R. Wozniak, editors, Pure experience, the response to William James, pages ix–xxxii. Bristol: Thoemmes Press, 1996.
[82] A. Turing. On Computable Numbers with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(42), 1936.
[83] R. Vinter, M. Loomes, and D. Kornbrot. Applying Software Metrics to Formal Specifications: A Cognitive Approach. IEEE Metrics, pages 216–223, 1998.
[84] D. West. Object Thinking. Microsoft Professional, 2004.
[85] T. Winograd and F. Flores. Understanding Computers and Cognition: A New Foundation for Design. New York: Addison-Wesley, 1986.
[86] W. Wong. Formal Verification of VIPER’s ALU, UCAM-CL-TR-300. Technical report, Computer Laboratory, University of Cambridge, UK, 1993.