
Poster prepared for Warwick University 25th anniversary

Computers and Commonsense or Why I am not a Computer - yet

As I write this, I am seated in a train at Shrewsbury station. A talkative young lady is gathering up her luggage in the opposite corner of the carriage. I look out at a deserted platform on which there is a bench. A station clock shows 9.15 a.m.

Time passes. An older woman now sits where the young lady was. There is a child sitting on the bench on the platform. The hands of the clock have moved round through several degrees.

What I have described are two sets of observations. Each set of observations is made in a moment. At 9.15, there is a young lady in the compartment and no-one is sitting on the bench. At 9.20, there is an older woman in the compartment and a child sitting on the bench. Simultaneous observations of this kind define a state of the world as I - the observer - see it.

* * * * *

Computer programming is fundamentally concerned with state. The computer itself has a vast collection of internal memory cells each of which can store a different numerical value. What is stored in the memory at any moment determines the current state of the computer. A recipe for changing the state of the computer memory is a computer program. The computer can store a program in its memory and perform it on command.
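In a modern notation, the idea of memory cells and a program as a recipe for changing their state can be sketched like this (the cell contents and steps are invented for illustration):

```python
# Memory as a small array of cells; a program as a recipe of state changes.
memory = [0, 0, 0]

# A tiny 'program': each step changes the state of one memory cell.
program = [
    lambda mem: mem.__setitem__(0, 7),                # store 7 in cell 0
    lambda mem: mem.__setitem__(1, 5),                # store 5 in cell 1
    lambda mem: mem.__setitem__(2, mem[0] + mem[1]),  # cell 2 := cell 0 + cell 1
]

# Performing the program on command changes the state of the memory.
for step in program:
    step(memory)
print(memory)  # [7, 5, 12]
```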

We can use computers to represent state. What is stored in computer memory can reflect the state of the external world, as when recording the contents of a library and to whom books are currently on loan. We can use computers to change state. A computer program might calculate how many books are overdue and display this number on a screen.
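The library example might be sketched as follows (the titles, borrowers and dates are invented for illustration): the stored state reflects who has which book, and a program changes state by computing how many loans are overdue.

```python
from datetime import date

# Computer state reflecting world state: which books are on loan, to whom, and when due.
loans = {
    "A Child's Christmas in Wales": {"borrower": "Alice", "due": date(1991, 5, 1)},
    "Alice in Wonderland": {"borrower": "Bob", "due": date(1991, 7, 1)},
}

def count_overdue(loans, today):
    """A state-changing program: compute how many loans are past their due date."""
    return sum(1 for loan in loans.values() if loan["due"] < today)

print(count_overdue(loans, date(1991, 6, 1)))  # one book overdue
```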

Practical use of computers relies upon a relationship between the world state and the computer state. Conventional computers are largely insensitive to their environment. The user can affect the state of the computer by typing at a keyboard, perhaps by moving a mouse or a joystick, but not - so far as we can tell - by grimacing at the screen or by resorting to abusive language or mild physical assault. For its part, a conventional computer can't directly change the state of its environment in very many ways: it can change the screen display, or generate printed output, but it can't shout at you, lock you out of the office, or run away.

Modern applications of computers involve ever closer interaction between world state and computer state. In control systems, computers receive signals from sensory devices and send signals to mechanical devices. Optical readers have replaced keyboards at the supermarket checkout. Chequecards are used to access computer bank accounts automatically. Where once up-to-date information was stored in indexes and files, the computer often plays an essential role in keeping a record of the current state.

Close interaction between computers and their environment can save work for the user. Typing in prices at the checkout is a tedious business, prone to error. On the other hand, automation means that the user has less chance to exercise judgement. This may be disastrous in some circumstances, as when an automatic light switch operates in a gas filled room.

People and computers should interact to enrich and not to endanger our lives. To make this possible, we need a better grasp of how people and computers can communicate through changes of state.

* * * * *

There is more to the state of Shrewsbury station than meets the eye. When I look at the station clock at 9.15 and 9.20, I assume it is the same clock. It is of course in exactly the same place at 9.15 as it is at 9.20. But then again, I assume that the hands of the clock are the same hands, though they have moved. As for the old woman, I'm confident that she is not to be confused with the talkative young lady, though this would not be quite as absurd as it sounds. Perhaps my observations are separated in time by 5 minutes and 60 years. British Rail is ever the same ... and I'm also a very slow writer. "We regret the delay caused to the 9.10 train for London Euston now standing at Platform 4. This is due to the late arrival of the Cambrian Coast Express from 1931." The train is late.

How could a computer be programmed to make its own representation of the states I am observing? Give it a camera and a microphone and - not without difficulty - it might be able to gather lots of the sensory input I use when describing the state. To recognise this data as clocks, hands, platforms, people and benches is no trivial research problem. But to distinguish between the child, the lady who is young and talkative and the woman who is old; to recognise that the announcer spoke English - albeit with a thick Welsh accent - and that what he said was part information, part jest; to translate the position of the hands of the clock into a time and infer that the train is late - these are probably not problems for solution in my lifetime.

All this should convince you that cleverly programmed computers can't compete even with quite unexceptional people, like me - and possibly you. I know lots of things about the state of Shrewsbury station that it would be very hard for a computer to represent in its memory. This helps to remind us that our insight into the state of our environment has qualities no computer can be programmed to acquire. This should make us cautious about introducing computers to carry out complex tasks where human judgement is at present essential. But my comparison is not entirely fair ...

* * * * *

In describing the state of Shrewsbury station as I observe it, I set out to represent myself, a human observer, in a favourable light. Talkative young ladies and deserted platforms have their place in poetry books; not everybody can tell the time from the Shrewsbury station clock; you have to be clever to understand British Rail announcements - and you do need a sense of humour. But what would you most like to know about the state of Shrewsbury station? I think it was raining, but I can't remember what the child on the bench looked like, and I really don't know whether the station clock was fast or slow. Had I had a camera and an electronic watch I could tell you much more about these things. (My drawings of children, young ladies and old women are quite indistinguishable and have been confused for chickens in the past.)

When all is said and done, there is no state of Shrewsbury station. There is what you see, what I see, what a camera or a microphone records, what the transistor radio picks up ... if we must make comparisons between one image of state and another, it must be on the basis of fitness for purpose. If you hope to identify the child who sat on the bench, better the photograph than my sketch or description. If you are an ardent collector of bizarre British Rail train announcements, better my written account than a tape-recording - naturally I made that up to suit my purpose.

A fairer comparison between person and computer should be made on the basis of a task to be performed. The reason for my journey is not to make random observations of everyday life with British Rail, but to achieve a particular change of state. I set off from Aberystwyth, I hope to arrive at Coventry. What I need to know about the state of my environment to complete my journey is relatively simple: as it happens, it wouldn't matter if I were fast asleep here at Shrewsbury station. Philosophically minded computers tinged with poetic sentiment and a sense of humour, computers that can transform sensory data into state representations to suit their purpose ... these are not for the present. A more realistic problem for today is deciding how I - knowing what I need to know about the state of the world to get from Aberystwyth to Coventry - can set up a representation of that state in the computer.

* * * * *

We know a great deal about programming computers to solve specific tasks. For example, it's a simple exercise to invent a program that reads two numbers chosen by the user and prints out the difference between them:

		program difference;
		var m, n, diff: integer;
		begin
1			print ("Please input two numbers");
2			read (m);					{ read in the numbers }
3			read (n);
4			if m>n then diff := m-n			{ find their difference }
				 else diff := n-m;
5			print (m, " & ", n, " differ by ", diff);	{ print out the answer }
		end.

In this program, there are two kinds of statement. The 'print' and 'read' statements 1, 2, 3 and 5 describe interactions between the computer and the user. By printing "Please input two numbers" the computer changes the state of the screen. The user is expected to see this change and type in two numbers. The statement 'read (n)' programs the computer to interpret the changes of state the user transmits via the keyboard. As a result, a number is stored in the memory cell called 'n'. Statement 4 programs the computer to find the difference between the input numbers and store the result in the memory cell called 'diff'. This statement changes the state of the computer, but the change is hidden from the user.

The 'difference' program works in a world where changes of state follow relatively simple patterns. There is only one way in which the computer can change the state of the external world: by displaying a message on the screen. There is only one way its state can be changed by the user: when it's programmed to receive input it can interpret what the user types. The computer can change its own state by storing the results of calculation in memory cells. It is programmed to carry out one sequence of printing, reading and state changing steps rather than another according to its current state. Perhaps most important of all, the computer and user cooperate in the simplest possible way, taking it in turns to make state changes. First the user waits until the computer requests input, then the computer waits until the user supplies input; finally, the user waits until the computer produces the required answer.

The 'difference' program describes a solution to a state-changing problem. Before the program is carried out, the user knows two numbers but does not know the difference between them. When the program stops, the difference is displayed on the screen. The program achieves its goal because the computer is instructed to follow a sequence of simple state-changing steps from a standard repertoire.

The program doesn't tell us everything about how the solution works though. We have to read between the lines. If the user makes a mistake, and types something apart from a pair of numbers, the program will fail. If the user doesn't supply input, the computer will wait for ever. There is a hidden program that the user must perform; for successful completion of the task, both user and computer must behave properly. Though we assume that the computer will carry out all the instructions in the program in order, there is nothing in the program to say how long it should take at each step.
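The 'hidden program' the user must perform can be sketched in a modern notation (Python here; the function names are my own invention): a version of the 'difference' program that re-prompts on bad input rather than failing.

```python
def difference(m, n):
    """Statement 4 of the 'difference' program: the hidden state change."""
    return m - n if m > n else n - m

def read_number(prompt, read_line=input):
    """Read a number, re-prompting on bad input instead of failing -
    shifting part of the user's 'hidden program' into the computer's."""
    while True:
        text = read_line(prompt)
        try:
            return int(text)
        except ValueError:
            print("Not a number - please try again")

def difference_program(read_line=input):
    print("Please input two numbers")
    m = read_number("", read_line)
    n = read_number("", read_line)
    print(m, "&", n, "differ by", difference(m, n))
```

Passing `read_line` in as a parameter lets the user's half of the interaction be simulated in testing; it does not, of course, help when the user supplies no input at all.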

* * * * *

Programs are like journeys. I can tell you how to get from the platform on Shrewsbury station to the city centre. My program, like the difference program, will instruct you to do things that I presume you can do: walking, looking, using a bus or a taxi. I will want to offer choices for action: "if it's raining and you miss the number 11 bus, get a taxi". My program, like the difference program, will work if things go according to plan. Maybe you misunderstand my instructions, or I forget to tell you to turn right at the ticket office. Maybe the entrance to the station is sealed off because of a bomb scare. Most likely there isn't a number 11 bus - I just made that up too.

It would be nice if I could be absolutely sure that my program would get you to the city centre, but this isn't reasonable. I have to imagine all the possible state-changing activities that might intervene to prevent you carrying out my instructions. You might be arrested by Scotland Yard at the ticket barrier and taken off to Wormwood Scrubs, you might be attacked by a man- or even woman-eating tiger, or be struck by a brick falling off the ceiling of the ticket office.

Quite apart from these commonplace problems, there are other possibilities so incredible that you and I might never give them a moment's thought. Perhaps the stairs from the platform now go on and on and on and on forever downwards, and you never get a chance to turn left. Perhaps Shrewsbury city centre isn't there any more. Perhaps by the time you get there, you're somebody else.

Of course, we regard this as fantasy. Isambard Kingdom Brunel, the great British engineer who built the Clifton suspension bridge, doubtless failed to consider what would have happened if the cables holding up his bridge had turned into knicker elastic overnight. But if we want to program computers to take sensible action in the everyday world, we shall need to represent the difference between fact and fantasy. If, for example, the computer holds a picture of the view from my carriage window in memory, there had better be some difference between moving the image of the hands of the clock forwards and backwards. And if it stores a map of Shrewsbury railway station, moving trains should be more common than moving platforms. Just.

* * * * *

When we consider whether you will get to Shrewsbury city centre following my directions, there are two different kinds of uncertainty. There are snags that fit in with our knowledge of how the world operates, however unlikely they are to arise. It is more likely that the station entrance is closed for building work than that you will be intercepted by the This is Your Life team. (Not that you aren't famous, of course, but they don't get up this early.) If these events happened, they might take us by surprise, but they wouldn't change our view of the world. The other problems we have considered seem to be different: like things that happen only in Alice in Wonderland or Monty Python, they violate natural laws.

To program a robot to make the journey from Aberystwyth to Coventry, we need to model what we expect to happen according to natural laws and what might happen because of more or less unexpected interactions. The code we use for our program itself expresses some of our basic assumptions. In the difference program, we expect the program statement print ("Please input two numbers") to cause the computer to act in a certain way. We trust that the computer can and will do what we expect, just as it has always done before. My programming language couldn't even express the possibility that instead of printing a message, the computer said "No".

We can relate natural laws to changes we expect to follow an entirely predictable pattern. How could a computer possibly say "No"? - after all, it might not even have a loudspeaker. We tend to call this logic, but faith might be a better word. Past experience suggests that a computer without a loudspeaker has never said "No", but this does not rule out the possibility that some day one will. If it did, we might call it a miracle, but a new observation might be more apt. The programming language for our robot will be used to express state changes in whose predictability we have the greatest faith.

Relatively few of the state changes that might affect a robot on a railway journey are predictable from natural laws. When we can't predict events using a theory, we typically attribute them to agents. Robots, computers, users, police officers, tigers and clocks are all examples of agents. Agents are able to change the state of the world at odd times, sometimes getting in the way of our plans for action. As far as possible, our robot must be programmed to deal with interference from other agents - the fellow passenger who hangs his coat over its optical sensor, the mischievous schoolboys who carry it off the train, the British Rail committee that changes Wellington to Telford West.

In the 'difference' program there are just two agents at work: the user and the computer. They interact in a simple way, avoiding interference by acting at different times. Describing the behaviour of several agents that act at the same time - concurrent systems - is hard. Describing concurrent systems in which there are many different kinds of agents, such as people, computers, electronic and mechanical devices - reactive systems - is harder. Describing reactive systems that are realisable with existing hardware and operate reliably within realistic time-constraints is harder still. The problems of a travelling robot fall into the hardest category: they are without doubt insoluble at present - on British Rail, at any rate. Several research groups here at Warwick are tackling these problems. The ideas developed below are based on the work of one of these research groups, concerned with an agent-oriented approach to programming in which state changes are represented using the principles underlying the spreadsheet.

* * * * *

Anyone fortunate enough to have seen a British Rail train in motion will have noticed that when the locomotive moves, the coaches follow. This is a simple illustration of a fact about changing state that we take for granted, but is not immediately obvious from inspection of a stationary train. (Perhaps this accounts for the fact that solitary locomotives in motion are almost as common as trains.) A computer model of a stationary train that recorded the precise locations of the locomotive and the coaches at Shrewsbury station would be an excellent basis on which to generate a picture of a British Rail train in its characteristic state, but it wouldn't help to explain why, when the locomotive moves, it takes the coaches, but leaves the station behind.

The locomotive and its coaches is one example of a general type of relationship that can be observed when the state of a system changes. When the minute hand of the clock moves, so does the hour hand. When the signalman pulls the lever, the signal moves.

If we express this idea in terms of observations, we may say that certain combinations of observed values change as if they were one. Mechanical linkages provide the simplest examples, but there are many other more subtle ways in which values may be coupled in change. When one truck is shunted into another, the two at first move as one, but separate again when the second is held back. When the time of departure passes, the train is - as of that moment - late. When a cloud passes across the sun the shadow of the train vanishes. Sometimes a state change affects the coupling between values, as in changing gear.

Systems of interrelated values are illustrated in spreadsheets. Their power in analysing business data stems from the way in which they faithfully model such things as how changes in cost affect profit. Such dependencies between data are fundamental to our intuitions about change. We put great faith in their predictability. The ghost train that runs over us yet leaves us unscathed is a nightmare vision.
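The spreadsheet principle can be sketched in a few lines (the cell names and figures here are invented for illustration): some cells hold values, others hold definitions, and a dependent value follows whenever a value it depends on changes.

```python
class Sheet:
    """A minimal spreadsheet: cells hold either plain values or formulas
    (functions of the sheet); dependent values are recomputed on demand."""
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

s = Sheet()
s.set("cost", 100)
s.set("price", 140)
s.set("profit", lambda sheet: sheet.get("price") - sheet.get("cost"))
print(s.get("profit"))  # 40
s.set("cost", 120)      # one change of observed value ...
print(s.get("profit"))  # ... and profit follows: 20
```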

Exploiting these principles has been the focus for much of our recent research. It has proved helpful to apply them both in modelling relationships between observed values external to the computer and as a way of describing the state of the computer itself.

* * * * *

The study of agents has emerged from our research as a natural complement to our methods of modelling state and change. Looking about me at Shrewsbury station, I see many relationships between observed values, but I only have control over some of these. The hands of the clock move without my help. I am no more able to move the station than the locomotive. All my wishing will not move the train.

I can change some things, but gone are the days when trains were adventure playgrounds on wheels. There is no sash to slide the window up and down and bruise your fingers. No little round reading lights to switch on and off until they seem to be particularly off. No sliding door to slide open and closed until somebody says that that's the last time they're going to get up and shut it. No communication cord.

Knowing just exactly how we can change things is an important part of our perception of state. The things whose values an agent can change determine its privileges. Privileges depend heavily upon context. As a small boy, I was briefly privileged to slide the windows, switch the lights on and off, open and close the door, but other agents always intervened before I had savoured these delights fully. Most fascinating was the way that the lights came on in the Severn tunnel and could not be put off. Of all the privileges I never exercised, pulling the communication cord was the most exciting.

Every agent needs to be able to respond to its environment. Without access to its environment, an agent will never know when it is proper to exercise its privileges. Small boys are a good case study in this respect: oblivious to all external influence, they will exercise their privileges at random whatever the circumstances. The observations to which an agent responds are its oracles. Until an agent's oracles are properly developed, there is no way to protect it from delinquency other than to remove its privileges.
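In the spirit of the account above (the class and method names are my own illustration, not the LSD notation itself), an agent's privileges can be sketched as the set of values it is permitted to change, its oracles as the observations it may respond to:

```python
class Agent:
    """An agent observes some values (its oracles) and may change others
    (its privileges); attempts outside its privileges have no effect."""
    def __init__(self, name, oracles, privileges):
        self.name = name
        self.oracles = set(oracles)        # observations it can respond to
        self.privileges = set(privileges)  # values it is allowed to change

    def can_observe(self, value):
        return value in self.oracles

    def act(self, state, value, new):
        """Exercise a privilege: only permitted changes take effect."""
        if value in self.privileges:
            state[value] = new
        return state

carriage = {"window": "closed", "light": "off", "cord": "unpulled"}
# A small boy with no oracles: he exercises his privileges at random.
small_boy = Agent("small boy", oracles=[], privileges=["window", "light"])
small_boy.act(carriage, "light", "on")     # a privilege he may exercise
small_boy.act(carriage, "cord", "pulled")  # no privilege: nothing happens
print(carriage)
```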

* * * * *

A modern railway carriage is more suitable for robots than small boys. In the future, we may expect it to be even better adapted, in the same way that motorways are better adapted for cars than country lanes. On present-day trains, our robot would certainly need powerful oracles and privileges. Like the 'difference' program, a robot will manage fine when things go according to plan, but have difficulty coping with a change in circumstances. On my outward journey to Aberystwyth, the brakes failed at Wolverhampton - we had to change trains and platforms. When the train finally arrived, the correct route to Aberystwyth was displayed but for the substitution of Basingstoke for Shrewsbury. Alas! poor Robot.

The privileges we might wish to give our robot would have problematic social and legal implications. There are many ways in which it could be protected from the actions of malicious or clumsy fellow travellers, but few could be safely exercised in all situations. And, recalling 'the book that told me everything about the wasp, except why' - from A Child's Christmas in Wales, we shall need to consider what useful social purpose travelling robots could serve. Despite their disadvantages, small boys add more to our lives than rows of mobile milk churns meekly trundling up and down and on and off trains. Perhaps robots could be travelling companions for people with disabilities, but they could never be a substitute for well-trained talkative young ladies. We have to guard against the dangers of creating a society that is better adapted for robots than for people. We must beware of developing robots to do inadequately what people can do well and with fulfilment. Perhaps robots that could join or leave - perhaps even detect - a train in motion ...

* * * * *

To illustrate the technical work that has been done in connection with the ideas discussed above, a specification of the agents participating in the arrival and departure of a train is attached. The specification is in a notation called LSD that has been designed and developed in collaboration with British Telecom Research Laboratories. Unlike the 'difference' program, the LSD specification does not in itself describe a particular pattern of interaction between agents. Its function is to identify the role that each agent could in principle play in the interaction as far as its privileges and oracles are concerned. Such a specification can then be used as a basis for animation.

At this stage, our research is still experimental in nature. The approach to specification we're developing has given us new ways to think about how complex systems work. The process of analysis that leads to such specification can be both enlightening and entertaining. As I have suggested, there are many serious implications to consider wherever people and computers interact. Our methods, though promising, need to be more fully analysed and tested before they can be used responsibly in engineering. Perhaps they can be recommended as a useful source of stimulation for creative writing. As I hope I have illustrated, modern computer science has a special fascination as a subject that helps us to understand how the worlds of people and machines can combine in harmony. And who knows, one day we may be sponsored by British Rail. I was only joking, honest - but it did say Basingstoke.

* * * * *

Acknowledgements (as of 1991)

The research to which this article relates has been developed at Warwick over several years. It is aimed at finding new methods of programming that combine intelligibility with expressive power. This work has been directed by the author Dr Meurig Beynon, but there have been major contributions from colleagues and undergraduate and research students. Acknowledgements are due to Dr Mark Norris, of British Telecom Research Laboratories, Dr Steve Russ, Department of Computer Science and Mr Alan Cartwright, Department of Engineering. The main programming support for this project has been supplied by three research students: Yun Wai (Edward) Yung and his younger brother Yun Pui (Simon) Yung from Hong Kong and Mike Slade. All three are recent Warwick Computer Science graduates. The LSD specification of the train is the work of Simon Yung, who is presently training for the ministry. The LSD notation was designed and developed by Meurig Beynon, Mark Norris and Mike Slade, now a trainee psychiatric nurse.