Experimenting with Computing (extended abstract)
The phrase "experimenting with computing" admits many interpretations. Two interpretations are topical in this paper. One refers to using computers in support of experimental activity, the other to innovative thinking associated with experiments in practical computing. Under both interpretations, the words 'experimenting' and 'computing' are being used in a broad sense - in contrast with the strict definitions of these activities that might be proposed in experimental science or computer science. Specifically, the term 'experimenting' is being used to refer quite generally to activities that involve taking an action whose effect is unknown, and the term 'computing' to the wide range of activities that exploit computer-related technology.
Computing in support of experiment
In thinking of experiment as 'taking an action whose effect is unknown', it is apparent that some degree of human involvement is essential. An activity is experimental by virtue of how it is interpreted. Without this interpretative aspect, the idea of 'performing an experiment' is obscure. Computer science has been much preoccupied with action that is anything but experimental in nature: its major contribution has been to our understanding of how to specify reliable procedures with precisely determined outcomes and interpretations. The common tendency for novice programmers to develop computer programs through informal experiment is naturally deprecated. And whilst an activity such as "using the computer to search for the next Mersenne prime" can be viewed as experimental in so far as the outcome is unknown, it is in essence little different from a routine numerical calculation that requires computational support.
The affinity between experiment and speculative calculation informs what can be informally characterised as a 'post-theory' view of experiment. In post-theory experiment, there is a reliable basis for prediction, or criterion for successful outcome, drawn from theory or previous experience. It is then often the case that the consequences of the theory can only be determined with the assistance of the computer. Though human intelligence typically plays a highly significant role in setting up post-theory experiments (as in 'devising a criterion for a Mersenne number to be prime' and 'developing an efficient algorithm to check this criterion'), human interpretation is a marginal element in the activity itself. The search process that is implemented is not traced in detail by any direct human experience. The trust in the outcome of the computation that underpins 'knowing that the effect of an action has been accurately determined' is based on the premise that what the computer has done could in principle have been done by a human computer.
It is easy to see how this notion of post-theory experiment as a form of calculation can be extrapolated to encompass traditional forms of scientific experiment. To connect action in the laboratory with calculation requires only that some process of abstraction has taken place, whereby certain key observables have been identified and the procedures for measuring them have been suitably formalised. It is then possible to envisage a computer that is coupled to its environment in such a way that it can generate the experimental data required as input through making observations and performing physical actions to configure instruments. The way in which human interaction is displaced in this computer-controlled experimental activity has led some commentators to believe that automata rather than humans will be at the forefront of scientific experimentation in the future.
Whilst the significant unrealised potential for using computers for scientific experiment in this fashion must be acknowledged, this paper argues from the premise that much richer conceptions of experimenting and computing are needed to do justice to the human practices of science. The key idea is that experimental activity also has what must necessarily be viewed as pre-theory aspects, and that this requires computing support of a conceptually different kind.
If experiment is deemed to be 'taking an action whose effect is unknown', the scope of this concept is surely universal - potentially to the point of being vacuous. When all that can be experienced is taken into account, the effect of an action cannot possibly be exactly predicted - indeed, it cannot even be uncontroversially identified or comprehensively registered. A primitive concern, universally relevant to all sense-making, and to experimental science in particular, is establishing contexts for an action that can be identifiably revisited and where there is a recognisable correlation between the action and significant selected ingredients of its effect. The provisional cause-and-effect relationships that have to be posited for this purpose are quite different in character from those that underpin the predictions of scientific theory and computer calculation. Though every relationship in principle has the potential to acquire an uncontroversial objective status and to become associated with well-defined protocols and instruments for identification and exploitation, it is first and foremost a relationship that is appreciated in personal and pragmatic terms. And whilst it has this nature, it is not amenable to abstract representation independent of concrete action in context, and is much more likely to be refuted than confirmed by experiment.
It is activity of this exploratory sense-making nature that we describe as 'pre-theory experiment'. Where post-theory experiment is associated with a stable objective context of observation in which parameters can be changed and the outcomes observed, pre-theory experiment is concerned with identifying appropriate contexts for reliable observation, distinguishing between essential and accidental features of interaction, deciding what is deemed to be an outcome and what is deemed to have significant implications for this outcome.
If it seems that human involvement in post-theory experiment can be marginalised, the role for human engagement in pre-theory experiment is by contrast crucial. Only from an extreme reductionist perspective is it plausible that an automaton could identify the possibilities for imaginative observation and replicate the capacity for inventive intervention open to the human experimenter. It is nevertheless appropriate to ask whether the computer can play a useful supporting role in pre-theory experiment. Empirical Modelling (see [12]) has been developed as an approach to computer-based modelling directed at just such a role. It assists sense-making by allowing the pre-theory experimenter to construct artefacts that reflect the way in which action within a context is provisionally being construed by the experimenter. Because of the manner in which this artefact evolves, there is an intimate relationship between such an artefact and the personal understanding of the experimenter. The most important characteristic of the artefact is that it can be readily adapted to shifting perceptions about the nature of the agency, dependency and key observables presently at work. The status of Empirical Modelling as "an experiment in computing" is the subject of the next section of the paper.
Experimental computing
In the conclusion to his paper 'Absolutely unsolvable problems and relatively undecidable propositions' [8] - submitted to a mathematics periodical in 1941 - Emil Post writes:
... perhaps the greatest service the present account could render would stem from its stressing of its final conclusion that mathematical thinking is, and must be, essentially creative. It is to the writer's continuing amazement that ten years after Gödel's remarkable achievement current views on the nature of mathematics are thereby affected only to the point of seeing the need of many formal systems, instead of a universal one. Rather has it seemed to us to be inevitable that these developments will result in a reversal of the entire axiomatic trend of the late nineteenth and early twentieth centuries, with a return to meaning and truth. Postulational thinking will then remain as but one phase of mathematical thinking.
If Post were alive today, he would find much in academic computing to reinforce his amazement at the proliferation of formal systems. Theoretical computer science has mathematical models and formal systems at its core: within its domain, computer programs are viewed and developed as mathematical entities. Computing in the wild, by contrast, is by and large an activity that defies formalisation, in which all manner of informal and heuristic techniques are used to construct programs with primary reference to their intended meaning and function. Just as Post's observation raises the question 'what is the characteristic nature of mathematics, if not the study of formal systems?', so modern computing practice challenges us to reposition the boundaries of computer science, and to recognise these as broader than the confines of the classical theory of computation.
The major natural sciences have emerged from centuries of practice preceding their theory. Think, for example, of the pre-theory experimentation of Galileo, Hooke and Faraday. Computing, if it is indeed a science, is unusual in that it has evolved with an inheritance of theory that predates most of its significant practice. The pervasive use of computer technology has moreover served to reinforce attitudes that privilege theory over practice. If we presume that all our experience is mediated by symbols and logic, and can be cast in a digital form, we cannot appreciate the full import of Post's conclusion: "... that mathematical thinking is, and must be, essentially creative". The inadequacy of a view of science that favours abstract symbolic representations at the expense of practical interaction in the laboratory is exposed by the philosopher of science David Gooding in his study of Faraday's researches in electromagnetism [3]. Gooding’s analysis is helpful in identifying a perspective on computing that can embrace both its formal and informal aspects, and help in understanding its potential for genuine experimentation.
In broad terms, Gooding's concern is to show that Faraday's knowledge of electromagnetic phenomena, as it evolved through practical experiment and communication with other experimental scientists, was embodied in the physical artefacts and procedures for interaction, observation and interpretation that he developed, and that ‘construals’ [3] of this nature have an indispensable role in our appreciation of the science of electromagnetism. Though Faraday's experiments did eventually underpin Maxwell's mathematical theory, they initially had a far more primitive role. For instance, they served to distinguish transient effects from significant observables, and to relate Faraday's personal construals of a phenomenon to those of others who had typically employed different modes of observation and identified different concepts and terminology. Such experiments were not conducted post-theory to ‘explain some aspect of reality’, but rather to establish pre-theory what should be deemed to be an aspect of reality.
According to Karl Popper, proper empirical method consists of continually exposing a theory, through experiment, to the possibility of being falsified. Such post-theory experiment requires stable experience that can be coherently interpreted. It therefore calls for a kind of experience that has to be:
- interpreted with respect to a preconceived context;
- circumscribed in respect of relevance;
- amenable to consistent interpretation.
Such experience arises from parts of the world with which we are thoroughly familiar. Scientists make mathematical models of such domains by means of conventional programs, but it is unclear to what extent they can conduct genuine experiments with such programs.
In contrast, the construals with which an experimenter records and refines her current provisional understanding through pre-theory interaction are:
- influenced by factors in the situation that are as yet unidentified;
- subject to interpretation and interaction in ways that are as yet unknown;
- capable of exposing inconsistencies and ambiguities that are as yet unresolved.
In the pre-theory context, these characteristics are qualities of construals that relate to situation, ignorance and nonsense respectively, and are beyond the expressive scope of a formal system. But it is with just such characteristics that our approach to computing is chiefly concerned. We call it Empirical Modelling [11] because of its close relation to observation and experiment. The artefacts we build using our tools are well-described as construals in Gooding’s sense. They do not ‘represent’ in a fixed sense, but rather their meaning arises for the modeller in the correspondence experienced between interactions with the artefact and interactions with the referent in the world (or the imagination). It is therefore initially a personal meaning, created in the moment by the user’s interactions. The meaning of the construal cannot be appreciated in isolation from these interactions, which in many cases require the skilful application of experimental techniques and may serve no more than a private and ephemeral role in helping to record and advance understanding. The somewhat polemical writings of the computer scientist Peter Naur are particularly relevant in this connection [6,7], as they highlight the need to study interpretation and meaning with reference to what William James characterises as ‘the personal stream of thought’ [5].
Unlike the construction and execution of algorithms in conventional computing, Empirical Modelling is centrally concerned with the much softer, and more personal, activity of sense-making through interaction. And every interaction with an emerging artefact is itself an experiment in sense-making. So we claim this approach to computing both allows for genuine experiment and broadens the boundaries of computing to a surprising degree. Brian Cantwell Smith, whose profound analysis of computing has spanned over twenty-five years, concludes in [9] that the study of computers 'is not an autonomous subject matter' - rather that 'the considerable and impressive body of practice associated with them amounts to … neither more nor less than the full-fledged social construction and development of intentional artefacts'. This conception is well-matched to Gooding's notions of the development of construals in physical science, but also accommodates the generalisation that is appropriate to our approach to computing, in which model-building is routinely concerned with representing phenomena that - unlike those in the natural world - are potentially artificial, imaginary and subjective in character.
The entire approach of Empirical Modelling is itself a kind of experiment in computing. The principles, notations and tools that have been developed over the last twenty years have provided the core of a final-year undergraduate module and are illustrated by a web archive containing about 120 models [12]. Our research originated in practical model-building, and its philosophy, principles and tools continue to develop in response to innovative project work at undergraduate and postgraduate level. The primary emphasis has been on proof-of-concept and on making connections with other approaches to software development, especially those that exploit dependency maintenance. In the spirit of Post's 'return to meaning and truth', our treatment of observables in model-building is reminiscent of mathematics prior to its formalisation in the 19th century [1, 10]. Like spreadsheets, our models can be viewed as capturing states as we experience them 'in the stream of thought', and their semantics is rooted in a fundamental tenet of James's radical empiricism - that primitive knowledge rests on the way in which 'one experience knows another' [2, 4].
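The spreadsheet analogy can be made concrete with a small sketch of dependency maintenance. The code below is purely illustrative - the class and method names are our own invention, not the notations or tools of Empirical Modelling - but it shows the essential behaviour: some observables take values directly, others are defined in terms of them, and a change to one observable is automatically propagated to its dependents, just as editing a spreadsheet cell recalculates the cells that refer to it.

```python
# Illustrative sketch of spreadsheet-style dependency maintenance.
# Observables hold values; definitions relate an observable to others,
# and changing an observable immediately recomputes its dependents.
class Model:
    def __init__(self):
        self.values = {}  # observable name -> current value
        self.defs = {}    # observable name -> (function, source names)

    def set(self, name, value):
        """Assign a direct value to an observable, then propagate."""
        self.values[name] = value
        self.defs.pop(name, None)  # a direct value overrides any definition
        self._propagate()

    def define(self, name, fn, sources):
        """Define an observable as a dependency on other observables."""
        self.defs[name] = (fn, sources)
        self._propagate()

    def _propagate(self):
        # Naive fixed-point recomputation: iterate until values settle.
        # A real engine would order updates using the dependency graph.
        for _ in range(len(self.defs) + 1):
            for name, (fn, sources) in self.defs.items():
                if all(s in self.values for s in sources):
                    self.values[name] = fn(*(self.values[s] for s in sources))

m = Model()
m.set("width", 3)
m.set("height", 4)
m.define("area", lambda w, h: w * h, ["width", "height"])
m.set("width", 5)          # 'area' is maintained automatically
print(m.values["area"])    # -> 20
```

The point of the sketch is the mode of interaction rather than the mechanism: each redefinition or assignment is an experimental act whose consequences the modeller observes in the resulting state, which is what makes such artefacts apt for the sense-making role described above.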
References
1. W M Beynon and S B Russ, The Development and Use of Variables in Mathematics and Computer Science, in The Mathematical Revolution Inspired by Computing, IMA Conference Series 30, 285-95, 1991.
2. W M Beynon, Radical Empiricism, Empirical Modelling and the Nature of Knowing, in Proc. WM2003 Workshop on Knowledge Management and Philosophy, Luzern, April 2003, http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-85, ISSN 1613-0073.
3. David Gooding, Experiment and the Making of Meaning, Kluwer Academic, 1990.
4. William James, Essays in Radical Empiricism, Bison Books, 1996.
5. William James, The Principles of Psychology, Henry Holt, 1890; reprinted Dover, 1950.
6. Peter Naur, Knowing and the Mystique of Logic and Rules, Kluwer Academic, 1995.
7. Peter Naur, The Anti-Philosophical Dictionary, naur.com, 2001.
8. Emil Post, Absolutely unsolvable problems and relatively undecidable propositions, in M Davis (ed.), The Undecidable, Raven Press, 1965.
9. Brian Cantwell Smith, The Foundations of Computing, in Matthias Scheutz (ed.), Computationalism: New Directions, MIT Press, 2002.
10. Steve Russ, The Mathematical Works of Bernard Bolzano, OUP, 2004 (to appear).
11. http://www.dcs.warwick.ac.uk/modelling
12. http://empublic.dcs.warwick.ac.uk/projects
Link
The full paper developed from this extended abstract can be found in the Journal of Applied Logic at http://dx.doi.org/10.1016/j.jal.2008.09.00