
The Risks and Rewards of Automated Buildings

One of the key functions of science fiction is to explore both the potentialities and the risks of technologies, particularly those which are new or to which we aspire. Perhaps unsurprisingly, then, automation has been a preoccupation of SF almost since it began. Often this has focussed on the automation of work or construction, but another recurring theme has been the automation of buildings, and its impact both on their function and on those who use them.

[Sidenote: I’ll focus here on the automation of buildings occupied by humans, omitting automated factories (Mockingbird by Walter Tevis has a brilliant example here) and entirely contained environments such as spaceships. To keep this manageable, I’ll also omit examples where household service is provided by stand-alone robots (as, for example, in Isaac Asimov’s The Naked Sun) or by living entities (e.g. William Tenn’s The House Dutiful).]

The austere automated accommodation in The Machine Stops, dramatised for Out of the Unknown (BBC 1966)

This theme can be traced back a surprisingly long way. As early as 1909, novelist E M Forster wrote The Machine Stops, in which a society is utterly dependent on a single mechanism which controls their living environment, modifying lighting and furniture and providing food and other resources over the course of a normal day. The novella is a touching exploration of what it means to retain humanity and self-will, and of the risks of surrendering control of those things. Of course, the Machine in Forster’s novella is an extrapolation from industrial machinery and predates the computer age. The level of technology it describes had to wait for the dawn of modern computing before it reemerged as a major theme.

From the industrial fairs of the 1930s through to the 1950s, as the world explored the wonders of the future, the automated home was generally represented as an inevitable and aspirational development, and a symbol of modernism. Just as every home would be heated, powered and supplied with modern appliances in the prosperous world of the near future, so every house would cater to the needs of its occupiers with a range of automated devices. The concept was popular enough to be satirised in the (astonishingly sexist) Tex Avery cartoon The House of Tomorrow (1949). This optimistic reading of automation is also used to great effect by Ray Bradbury in the fantastic short story There Will Come Soft Rains (1950). Here, an automated house is seen falling into slow decay as it attempts to follow its programming amidst the ruins of a nuclear wasteland. The house’s sing-song vocalisations evoke the routines of a normal, prosperous, modern family life, underlining the potential of the future that has been destroyed. [There’s a haunting dramatised reading of this story from 1962 which the BBC repeats from time to time.]

Jane Jetson using her push-button automated food machine.

The same spirit of optimism for home automation carried forward into the 1960s, with the American television cartoon The Jetsons (1962-1963) famously portraying a utopian future in which virtually every function of home and work life was automated. As late as 1965, on UK television, Thunderbirds was showcasing the (admittedly mixed) joys of automated cooking (e.g. “Give or Take a Million”) and an office building that served up chilled drinks and lit cigars at the touch of a button (“The Duchess Assignment”).

By this point, however, there is already evidence of a reaction against the possibility of thoughtless automation, echoing the concerns of The Machine Stops. The Avengers episode “The House that Jack Built” (1966) places Mrs Peel in a computer-run house which acts to disorient and torment her. In this case, the house is a deliberately designed trap, but there is an effective use of camera-eye views to conjure up a sense of voyeurism: to anticipate its occupants, the building must constantly watch them and assess their behaviour. The same underlying premise recurs in The New Avengers episode “Complex” (1977), in which the supposedly impregnable computer-run security system in a building proves to be passing on sensitive information, while also disposing of anyone who shows signs of suspecting it. Again, scenes shot from a security-camera perspective remind us that the attentive servant-in-waiting is also a potential voyeur, aware of the characters’ every move and able to use that information against them. In both these cases the buildings were constructed with malign intent, and so came equipped with lethal traps to deploy against their occupants. While hazards such as booby-trapped lift floors are unlikely to feature in most automated homes, the basic premise - that technology in our environments can be a threat - is clear and more widely applicable.

Wallace (from Wallace and Gromit: The Wrong Trousers, 1993) being dressed by automated mechanisms in his house.

By the 1980s, examples such as the voice-activated devices in the McFly home in Back to the Future Part II were used as visual shorthand for the future, but also for comic effect, underscoring the dysfunction in a potential future household and the degree to which technology could intrude into life. And by the 1990s, Aardman Animations’ Wallace and Gromit were moving home automation back into the realm of old-school mechanisms, summoning a simultaneous nostalgia for the past and interest in the potential of the new devices dreamed up by the eponymous inventor.

However, all of the examples mentioned so far have been illustrations of automation rather than of a parallel preoccupation of science fiction that has also found its way into our homes: artificial intelligence. If having a home or office that responds to commands (preprogrammed or push-button) began to be seen as frightening, then how much more frightening is a home or office that can decide for itself what to do?

The artificial-intelligence-controlled smarthouse S.A.R.A.H. from the television series Eureka.

Artificial intelligence in science fiction depicts homes which have a sentience and emotional identity of their own, whether due to a central computer controlling the house’s functions or to distributed computing that permeates the building. Examples include S.A.R.A.H. (the lethally unstable bunker-house in the US television series Eureka, 2006-2012), the Dreamhouse in the Doctor Who novel “Sick Building” (Magrs, 2007) and the house computer in the audio drama Bernice Summerfield and the Stone’s Lament (Big Finish 2001). Although I’ve avoided spaceships in this discussion of automated buildings, an honourable mention here also has to go to the granddaddy of uncooperative AIs, H.A.L. (2001: A Space Odyssey, Kubrick 1968), who in many ways set the blueprint for those who followed. Where these full-blown artificial intelligences appear in science fiction, stories often contrast the supposed rationality of computers with the emotional behaviour of the AI itself. They ask valid questions which boil down to issues of responsibility and freedom: if an AI has rational and emotional sentience, is it fair either to imprison it in servitude or to deny it the emotional support it needs by treating it as a tool rather than as a friend or family member? If it is treated as one of the family, other problems may follow. In many cases, stories illustrate the dangers of an AI becoming too emotionally dependent on a single person and turning irrational if that person is endangered or leaves for any reason.

At the same time as asking about human responsibility for these creations, though, such stories also ask about the extent to which humans are willing to surrender control of their lives to other individuals. In some fiction, the occupants of either a traditional or an AI-driven automated building have little volition concerning their choice of clothing, food or even entertainment. Such things are served up to them without being requested, and the first symptom of trouble in an automated home is often the moment when the personal preferences of the occupant deviate from those of the AI. Such a conflict can be read as mirroring the real-world dangers of an over-controlling spouse at a personal level, or as an analogue for the surrender of volition to authority at a political or societal level. Although Orwell’s Nineteen Eighty-Four (1949) does not include a fully automated (let alone AI) house, its state-controlled viewing screens and compulsory watching of propaganda also anticipate this intrusion of control into the home environment, and perhaps provide a model for later examples.

The limited success of the quest for true artificial intelligence, together with the questions articulated in the science fiction above, means that full-blown AIs are not running our homes in the 2020s, despite earlier predictions. However, it is impossible to ignore the development of digital assistants, including Alexa, Siri and Cortana. These act primarily as a user interface between the occupier and the technology in a building, and follow relatively simple rules or machine-learning processes to respond to requests, rather than acting as a full-blown AI. Such devices can now control heating, activate appliances including robot vacuum cleaners, open windows and even order shopping. In many ways they fulfil the original 1940s and 1950s aspirations for an automated house, without risking the loss of volition that was foreseen in AI houses. An interesting example in science fiction is perhaps JARVIS, Tony Stark’s assistant in the Iron Man franchise, which began as a simple verbal user interface (although it later evolved into something more like a fully-developed AI).

A key feature of digital assistants, however, is that they lack volition and emotion. Their actions are usually limited to those specified by their owners, and a modern smart-home runs on a combination of pre-scheduled sequences and specific voice commands, rather than improvising behaviour in response to human actions in the way anticipated (although perhaps for comic effect) by The Jetsons and other early science fiction. Nonetheless, their machine-learning algorithms, together with the ubiquity of internet connectivity, have raised issues of control more akin to the AI scenario. The threat of hackers accessing smart-home devices and viewing or modifying their data is sometimes overstated, and is unlikely to prove lethal, but it is nonetheless real. Further, if data is uploaded to the digital assistants’ parent organisation to improve learning algorithms or predictions, as is usually the case, then is there an intrusion into the privacy of the building (whether home or office), and is that data secure?
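To make that contrast concrete, here is a deliberately simplified sketch of the rule-based model described above - fixed schedules plus a fixed table of voice commands, with no capacity to improvise. It is written in Python with entirely hypothetical device functions and command phrases, not the API of any real assistant or smart-home platform.

```python
from datetime import time

# Hypothetical device actions standing in for real smart-home integrations.
def set_heating(target_c: float) -> None:
    print(f"Heating set to {target_c} degrees C")

def start_vacuum() -> None:
    print("Robot vacuum started")

# A pre-scheduled sequence: fixed trigger times, fixed actions, no improvisation.
SCHEDULE = [
    (time(7, 0), lambda: set_heating(20.0)),
    (time(21, 30), start_vacuum),
]

# Specific voice commands map one-to-one onto single actions; anything
# outside this table is simply not understood.
VOICE_COMMANDS = {
    "warm the house": lambda: set_heating(22.0),
    "clean the floor": start_vacuum,
}

def run_schedule(now: time) -> None:
    """Fire any scheduled action whose trigger time matches the current time."""
    for trigger, action in SCHEDULE:
        if trigger.hour == now.hour and trigger.minute == now.minute:
            action()

def handle_voice_command(utterance: str) -> None:
    """Dispatch a recognised command, or admit defeat."""
    action = VOICE_COMMANDS.get(utterance.lower().strip())
    if action is not None:
        action()
    else:
        print("Sorry, I don't know that command.")

if __name__ == "__main__":
    run_schedule(time(7, 0))                 # morning routine fires on schedule
    handle_voice_command("Warm the house")   # matches a rule, so it runs
    handle_voice_command("Surprise me")      # no rule, so nothing is improvised
```

Anything outside the schedule or the command table simply goes unhandled - which is precisely the gap between today’s digital assistants and the volitional AI houses of fiction.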

While these questions are similar to those of control and intrusion discussed above, they also illustrate the limitations of science fiction as prediction. Most automated buildings in earlier fiction were imagined as stand-alone entities. The level of everyday global connectivity now prevalent was predicted to some extent by Forster in The Machine Stops, but has exceeded the expectations of most twentieth, or even early twenty-first, century SF. Modern smart-homes require less surrender of volition than envisaged, but often greater sacrifices in the areas of privacy and security. The value of data itself, and the extent to which a smart-home can compromise or surrender that value, is a new challenge with which both science fiction and reality still grapple.

"The Risks and Rewards of Automated Buildings", Elizabeth Stanway, Cosmic Stories blog, 25th July 2021


Note added 30/7/21:
I've just come across a radio adaptation of "There Will Come Soft Rains" in the 1950s US science fiction radio anthology series X Minus One which some folks might find interesting. There's also an adaptation of "The Veldt" - another memorable Bradbury short story featuring an automated house.