
101 - Computing with Experience

Short abstract

We introduce an approach to computing (Empirical Modelling, EM) that gives priority to direct experience of phenomena rather than to the abstractions used in conventional programming. This is made possible by regarding the computer itself as a source of experience that is of a piece with our experience of the world. The central activity of EM is building artefacts (models) which embody the modeller's personal construal of a phenomenon and consist of collections of observables, dependencies and agents. EM has developed its own (extensible) tools and has strong philosophical connections. Particularly noteworthy are the grounds for claiming an explanatory role for the models and the special role of interaction in the way that EM artefacts represent their referents.

Extended abstract

The conventional view of computing – deriving from the central position of mathematical models of computation – takes for granted that in order to produce a computed, or programmed, solution, a given problem must first be replaced by an abstract version. (For example, the physical symbols and their locations in a game, or the liquid levels in an experiment, are replaced by values of a type in a mathematical or programming language.) It is then this abstract version that is actually solved by the program, or investigated by a model or simulation. Simplistic as this formulation undoubtedly is, the fundamental assumption is pervasive and deep: a computer-based solution primarily engages with an abstract version of a problem or task, in contrast to the problem or task as we experience it – in which, for physical symbols or measurements for example, there are borderline, unclear, partial and mistaken cases arising from human psychology and context. The purpose of this paper is to introduce an approach to computing, and in particular to modelling and simulation, developed over many years at Warwick, which reverses this fundamental assumption of conventional computing. Our experience of a problem instance is taken as primary, while abstractions, wherever they are useful, take an auxiliary role. The approach, known as Empirical Modelling (EM) [1] for its emphasis on observation and experiment, promotes an alternative way of using computers: one that gives priority to our direct experience of phenomena. This is only possible by regarding the computer itself (with its associated devices) as a source of new experience that is continuous (in a sense to be defined) with experience of the real-world problem or phenomenon.

The basic concepts of EM are observable, dependency and agency, where these are interpreted as elements of a particular agent’s view (or construal) of a situation. The term construal is adopted in the sense used by David Gooding in his Experiment and the Making of Meaning (Kluwer, 1990) for the construals of electromagnetism made by Faraday in the early nineteenth century – ‘interpretative combinations of words and images’. The central process of EM is the interactive construction of an artefact that maintains a symbiotic relationship with the modeller’s construal of a phenomenon. It is particularly appropriate for phenomena that are new, or as yet little understood. Such a construction is better described as modelling than as programming, but it is modelling in a sense that is more elementary, and initially less committed, than programming. It is a kind of ‘directed thinking’ where the direction comes from maintaining a correspondence between two interactive experiences: that of the emerging model and that of the real-world, or imagined, referent. The resulting models have an unusually open, extensible quality because a construal always remains open to re-interpretation: there is no preconceived functionality in its development (other than a better understanding of the phenomenon). These properties of openness extend also to what might be called ‘primitive simulations’, in which fragments of behaviour are displayed in a selectively automated fashion. We can demonstrate examples such as an early planimeter, a vehicle cruise control and the scenarios around a railway accident that occurred at the Clayton Tunnel when a telegraph system was first introduced. The models and simulations developed using EM so far are small-scale, but qualitatively different from those developed using conventional programming. They are cognitively rich because they result from extensive experience of the model and the referent through interaction by a human modeller in the course of identifying relevant observables and experimenting with dependencies (which can express physical properties or theories). There is consequently a very close relationship between human processes and computing processes throughout EM, in striking contrast to the tendency identified by Humphreys in Extending Ourselves (OUP, 2004) as ‘a significant shift of emphasis in the scientific enterprise away from humans…’. The usual mode of development in EM, using ‘pre-theoretical’ experimentation [2] to ensure the reliability of components before using them to build larger components, gives grounds for claiming a genuinely explanatory role for the resulting models of phenomena.
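The flavour of observables and dependencies can be conveyed by a spreadsheet-like analogy. The sketch below is a minimal illustration in Python, not EM’s own notation or tooling (EM has developed its own extensible tools); the `Model` class and its observable names are purely hypothetical. An agent acts by setting an observable directly; a dependency defines one observable in terms of others and is automatically re-maintained whenever any of them changes.

```python
# Hypothetical sketch of observables and dependencies, in the spirit of a
# spreadsheet: 'width' and 'height' are observables an agent can change;
# 'area' is defined by a dependency and is re-maintained automatically.

class Model:
    def __init__(self):
        self.values = {}    # observable -> current value
        self.formulas = {}  # observable -> (function, list of dependencies)

    def observe(self, name, value):
        """Set an observable directly (an agent's action)."""
        self.values[name] = value
        self._propagate()

    def depend(self, name, func, deps):
        """Define an observable by a dependency on other observables."""
        self.formulas[name] = (func, deps)
        self._propagate()

    def _propagate(self):
        # Naive fixpoint: re-evaluate every dependency until nothing changes.
        changed = True
        while changed:
            changed = False
            for name, (func, deps) in self.formulas.items():
                if all(d in self.values for d in deps):
                    new = func(*(self.values[d] for d in deps))
                    if self.values.get(name) != new:
                        self.values[name] = new
                        changed = True

m = Model()
m.observe("width", 3)
m.observe("height", 4)
m.depend("area", lambda w, h: w * h, ["width", "height"])
print(m.values["area"])   # 12
m.observe("width", 5)     # changing an observable re-maintains the dependency
print(m.values["area"])   # 20
```

The point of the sketch is the absence of preconceived functionality: the modeller can introduce new observables and redefine dependencies at any time during interaction, which is what gives EM artefacts their open, extensible quality.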

It is no doubt too crude simply to associate conventional computing with an analytic style of philosophical commitment that gives priority to logic and language, and to suggest that the alternative EM approach, with its experiential emphasis, reflects a more phenomenological philosophy. But this is a starting point. The fundamental significance of experience for EM has entailed extended engagement with philosophical issues in our work. Beynon [3] has, for example, given detailed commentary on how some prominent themes in EM are represented in the work of William James. Another philosophical consequence of this approach to computing, worth further elaboration and illustration, is a better understanding of how interaction can play an essential role in the way an artefact represents its referent.


[1] See
[2] W.M. Beynon, S.B. Russ, ‘Redressing the past: liberating computing as an experimental science’, University of Warwick, CS-RR-421 (also downloadable from [1]).
[3] Meurig Beynon, ‘Empirical Modelling and the Foundations of Artificial Intelligence’, in Computation for Metaphors, Analogy and Agents, Lecture Notes in Artificial Intelligence 1562, Springer, 1999, 322-364 (also downloadable from [1]).