Where the Action Is: The Foundations of Embodied Interaction
By Paul Dourish
MIT Press, 2001
In this book Paul Dourish outlines his philosophical perspective on interaction design and HCI. Dourish argues that designers need to consider the embodied nature of human action (and interaction) when creating systems. His concept of embodied interaction is strongly rooted in the phenomenological approaches of Heidegger and Merleau-Ponty.
I enjoyed reading this book as it marries my current interests in Interaction Design with Phenomenological philosophy. The book provides an academic and theoretical perspective that was interesting but sometimes hard to follow. Tangible and social computing, two areas of interaction design in which I am very interested, are closely examined through an embodied interaction framework. The broad concepts and principles outlined here are abstract enough to remain relevant for a long time. However, designers looking for practical advice and how-to tips on using current technologies should avoid this title.
What is Embodied Interaction?
Embodied interaction refers not to technology but to the nature of our interaction with the world. “Our actions cannot be separated from the meanings that we and others ascribe to them.” Actions carry meaning that is derived from being embedded in social and physical environments that are themselves laden with meaning. In turn, actions also create meaning that transforms the environments whose meaning originally gave rise to the actions. “Action both produces and draws upon meaning; meaning both gives rise to and arises from action.” This perspective on interaction has important implications for the design of human-computer interfaces.
Chapter 1 – A History of Interaction
In the first chapter, Dourish provides a brief overview of the evolution of computer technology and the history of its adoption by society. He illustrates how the style of human-computer interaction has evolved, and how this change has been crucial for enabling computation to become embedded in so many facets of modern life.
Towards the end of the chapter, Dourish turns his attention to tangible and social computing. These new types of interface enable an “expansion of the range of human skills and abilities that can be incorporated into interaction with computers.” Here Dourish lays out the main thesis of his book and provides a brief overview of its implications. His thesis is that “[tangible and social computing] draw on the same sets of skills and abilities… [and] are arguably aspects of one and the same research program.” This argument has four parts:
- Social and tangible interactions are based on the same underlying principles.
- Embodiment is central to these alternative perspectives on interaction.
- Other schools of thought can provide a foundation for understanding embodiment.
- We can build on existing schools of thought to create a foundational approach to embodied interaction that informs and supports design and unites social and tangible interactions into a single model of human-computer interaction.
Here is a brief overview of the stages in the development of human-computer interaction as outlined by Dourish. These stages have been defined based on the types of human skills that are required by the user interface:
- Electrical: Analog computers were essentially an “apparatus for laboratory simulations that took place not in the physical world, but an analogous electronic reality.” During this era, setting up a new experiment (roughly analogous to running a new application) required completely reconfiguring the computer, including wiring in new circuits – hence the label “electrical” interface. A user needed a deep understanding of the construction of any given machine in order to operate it. Even when the initial transition was made from “hardware configuration to digitally stored programs [from analog to digital computers] the dominant paradigm for interaction with the computer was electronic (i.e. machine language). The boundary that we now take for granted between hardware and software was a lot fuzzier”.
- Symbolic: The arrival of symbolic forms of interaction was characterized by the emergence of conventions and well-understood capacities that became available across a wide range of machines – “register files, index registers, accumulators, and so forth”. A detailed understanding of the construction of individual computers was no longer necessary for computer programming. This was the era when computers began to be produced industrially. During this period, programs shifted from being primarily number-based to more symbolic forms that are easier for humans to learn and apply (i.e. assembly languages). Programming systems arose that specified two sets of rules: the first determines the instruction set for a programming language; the second describes how the human-written program can be converted into a set of instructions that the computer can execute (i.e. machine language).
- Textual: Symbolic interaction evolved into textual interaction when the primary means of interaction with the computer shifted from punch cards and other symbolic media to keyboards via teletype and video terminals. Textual interactions are structured by a grammar that defines “commands, parameters, arguments, and options.” Human-computer interaction became a loop of “endless back-and-forth… instructions and responses between user and system.” This dialogue was enabled by the new way that interactions were mediated.
- Graphical: Most modern-day computer interfaces are based on graphical interactions. The evolution from textual to graphical interactions “did not only replace words with icons, but instead opened up whole new dimensions for interaction – quite literally, in fact, by turning interaction into something that happened in a two-dimensional space rather than a one-dimensional stream of characters.” This evolution enabled users to bring several additional human abilities to bear when interacting with computers: peripheral attention; pattern recognition and spatial reasoning; the ability to absorb dense visual information; and the use of visual metaphors.
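To make the symbolic and textual stages above more concrete, here is a toy sketch of my own (not from the book): a miniature assembler standing in for the “two sets of rules” of symbolic programming systems, and a tiny parser for the “commands, parameters, arguments, and options” grammar of textual interaction. All mnemonics, opcodes, and names are invented purely for illustration.

```python
# Toy illustrations (my own, not from the book) of two interaction stages.

# Symbolic stage -- the "two sets of rules":
# Rule set 1: the instruction set, mapping mnemonics to numeric opcodes.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(program):
    """Rule set 2: convert a human-written symbolic program to machine words."""
    machine_code = []
    for line in program:
        parts = line.split()
        machine_code.append([OPCODES[parts[0]]] + [int(p) for p in parts[1:]])
    return machine_code

# Textual stage -- a grammar of "commands, parameters, arguments, and options".
def parse_command(line):
    """Split one turn of the user-system dialogue into its grammatical parts."""
    tokens = line.split()
    options = [t for t in tokens[1:] if t.startswith("-")]
    arguments = [t for t in tokens[1:] if not t.startswith("-")]
    return {"command": tokens[0], "options": options, "arguments": arguments}

print(assemble(["LOAD 1", "ADD 2", "HALT"]))   # [[1, 1], [2, 2], [255]]
print(parse_command("ls -l /home"))
# {'command': 'ls', 'options': ['-l'], 'arguments': ['/home']}
```

The point of the sketch is the shift in skills each stage demands: the assembler asks the user to think in machine terms, while the command parser supports a back-and-forth dialogue in a human-readable grammar.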
The graphical interface paradigm continues to be the most common style of human-computer interaction. However, as I mentioned above, it is two other emerging fields of study that are of special interest to Dourish – tangible and social computing. Here is a quick overview of key trends in these areas:
- Tangible: Tangible computing encompasses a wide variety of physical interactions. Dourish highlights three general trends in research related to tangible computing. The first trend is the distribution of computation “across a variety of devices, which are spread throughout the physical environment and are sensitive to their location and their proximity to other devices.” The second is the augmentation of the everyday world with computational power, making common physical objects “active entities that respond to their environment and people’s activities.” The last trend is an investigation into how these two approaches can be combined to create environments where people interact with computation through physical artifacts.
- Social: Social computing also encompasses a varied range of activities. Dourish addresses three main areas of activity. The first area of investigation is the incorporation of social understanding into the design of interactions. Next is the concern with the application of anthropological and sociological approaches to uncover the “mechanisms through which people organize activity, and the role that social and organizational settings play in this process.” The final area of investigation is how the traditional “single-user” interaction paradigm can be enhanced by incorporating information about others and their activities.
Chapter 2 – Getting in Touch
In the second chapter we delve deeper into the world of tangible computing. Here is one of my favorite passages from this chapter, where Dourish explains his perspective on tangible computing: “The essence of tangible computing lies in the way in which it allows computation to be manifest for us in the everyday world; a world that is available for our interpretation, and one which is meaningful for us in the ways in which we can understand and act in it.”
Since tangible computing has only recently become established, Dourish focuses on providing an overview of the important studies from the past decade that have provided the foundation for this field. The main strands of research that are discussed include: ubiquitous computing, which attempts to make computing invisible by embedding it in everyday objects and places; and tangible bits, which attempts to make computing more accessible by enabling humans to interact with digital information via physical media.
I will dive deeper into these two schools of research shortly; first, however, I want to highlight some common features and issues related to tangible computing systems:
- Multiple centers of interaction: unlike traditional computing systems, which have a single or a few centers of interaction, tangible computing has multiple centers of interaction. In traditional systems “Only one window has the ‘focus’ at any given moment; the cursor is always exactly in one place, and that place defines where my action is carried out.” In tangible computing systems, the interaction takes place in the environment, distributed across several objects. The coordinated use of various objects is required for the user to accomplish tasks.
- Non-sequential organization of interactions: In traditional computing, the sequential nature of interactions is a consequence of their singular focus. This helps to simplify both the user interface and the development of systems. In tangible computing systems, interactions are non-sequential – similar to the way in which we interact with the physical world – and there is no way to know what a user might do next.
- Physical properties are suggestive of use: Like other physical objects, tangible computing artifacts have physical properties that are suggestive of their use. This feature enables designers to create artifacts that can guide users through the process of use – “with each stage leading naturally to the next through the way in which the physical configuration at each moment suggests the appropriate action to take.”
Ubiquitous Computing: the term “Ubiquitous Computing” was coined by Mark Weiser while working on a research project at Xerox PARC. The main idea behind this discipline is that “instead of taking work to the computer, why not put computation wherever it is needed.” Ubiquitous computing attempts to seamlessly integrate computation into activities of our everyday life by enhancing objects and locations with processing power. “Computers would disappear into the woodwork; computers would be nowhere to be seen, but computation would be everywhere.”
Examples of ubiquitous computing research projects include: active badges that enable applications to adapt a computationally-embedded environment to specific user needs (computing by the inch); digitally enhanced notepads that enable humans to interact with computers the way we interact with paper (computing by the foot); and computer-enhanced desks that enable users to interact seamlessly and interchangeably with paper and digital documents and artifacts (digital desk).
From Ubiquitous Computing to Tangible Bits: Here is the explanation of this evolution in Dourish’s own words – below I’ve included my own interpretation: “[Tangible bits] sees computation within a wider context. Ubiquitous Computing pioneers saw that, in order to support human activity, computation needs to move into the environment in which that activity unfolds… [Tangible bits takes] the next step of considering how computation is to be manifest when it moves into the physical environment, and recognizing that this move makes the physicality of computation central.”
As promised, here is my understanding of the differences between these two schools of thought:
- The ubiquitous computing view of the world states that computing will become progressively more invisible once it is embedded into everyday objects. This view has a technical/scientific perspective and is based on analytical thinking.
- The tangible bits school of thought acknowledges that computation is being embedded into physical objects, but rather than assuming it will become invisible, it focuses on how to manifest computation in this new realm – the physical environment. This view has a design perspective based on lateral thinking.
Here are three important distinctions that differentiate the perspective of tangible bits from that of ubiquitous computing:
- The design of artifacts in the world of tangible bits reflects a concern with communication. Artifacts are designed to convey information that is important, and are often readable “at-a-glance”. This is in contrast to the “invisibility” of computation under the ubiquitous paradigm.
- The physicality of artifacts based on tangible bits must be designed intentionally; it is not simply a consequence of the design. This is based on “recognition that technology is the world, and so its physicality and its presence is a deeply important part of its nature.”
- In the realm of tangible bits, computation is embedded more directly within physical objects, whereas in the realm of ubiquitous computing there is still a seam between physical objects and computation.
Tangible Bits: The term “Tangible Bits” comes from the Tangible Media Group at the MIT Media Lab. Their research is founded on the belief that “while digital and physical media might be informationally equivalent, they are not interactionally equivalent.” Based on this premise, much of their investigation focuses on creating artifacts that support physical manipulation of digital information. By leveraging physical objects to represent information and/or actions, we are able to create more natural interactions with digital information.