How do we communicate with(in) intelligent spaces?
Charalampos Rizopoulos, Dimitris Charitos

Introduction
Intelligent user interfaces (IUIs) seem to be among the most important paradigms for future research in Human-Computer Interaction. "Intelligent" or quasi-intelligent behaviour may be applied to both virtual and real space. Although the technologies that lead to the realisation of such spaces have been extensively documented, this has not been the case with the impact of these technologies on the user. This paper reviews relevant literature with the aim of examining intelligent spaces as a type of spatial interface and documenting their importance for the formulation of the user's spatial experience. It attempts to investigate relevant theoretical approaches that may be applied to studying human behaviour and communication within the context of intelligent spaces. Ultimately, this paper aims to outline the influence of intelligent spaces on the users' activity within the environment and their environmental experience in general.
"Intelligence" may be bestowed on real (Ambient Intelligence) or virtual (Intelligent Virtual Reality) spaces. Although both cases refer to space, they do so from a different perspective. Ambient Intelligence (AmI) may be seen as the convergence of ubiquitous computing, multimodality, and Artificial Intelligence. Its goal is to support and augment human-computer interaction by positioning this activity within an everyday spatial context. In contrast to Virtual Reality (VR), the user is not placed inside a synthetic environment; rather, devices are placed in the real environment the user inhabits. Both approaches utilise networked intelligent artefacts that are embedded in the environment, whether in real or virtual space (essentially an additional layer of information), and negotiate multimodal content which dynamically changes as a result of user interaction. These artefacts are added to static spatial elements, forming a coherent whole that offers an enhanced environmental experience.
For the reasons mentioned above, some authors (e.g. Weiser, 1991; Riva, 2005) consider AmI the opposite of VR. However, it is suggested that the differences between these two cases of intelligent space do not prevent the theoretical approaches discussed in this paper from being applicable to both.
Characteristics of intelligent spaces
The preceding discussion reveals that intelligent spaces share three important characteristics: ubiquity, multimodality, and adaptation. Ubiquity refers to the "omnipresence" of the system through the implementation of numerous networked devices scattered throughout the environment (e.g. ubiquitous computing), taking advantage of the fact that humans are physical beings, "unavoidably enmeshed in a world of physical facts" (Dourish, 2001: 99) and thus familiar with physical artefacts (or with virtual artefacts that resemble real ones). An essential aspect of ubiquity is the physical or mental "disappearance" of the devices. As McCullough (2004: 3) noted, "the most significant technologies tend to disappear into daily life". Physical disappearance may be achieved by "hiding" the devices from plain sight (e.g. using small-size devices). Mental disappearance refers to the user perceiving computing devices as everyday objects with augmented functionality due to their integration with the general ecology of the environment (Alcañiz and Rey, 2005: 4). The user is presented with an environment which offers a wider selection of possible actions while retaining a familiar structure. Devices are moved from the centre of our attention to the periphery, the area just outside focal attention (McCullough, 2004: 49). The environment acts as the interface and, consequently, the interface is spatialised.
Multimodality refers to the ability of the system to utilise multisensory content as input and output. It is seen as an efficient HCI practice and an indicator of a system's usability (Negroponte, 1996). Since humans utilise various modalities during direct human-human communication, the implementation of such modalities in HCI results in interfaces with reduced cognitive load (Russell et al., 2005). Furthermore, the use of other senses besides vision may accelerate user adaptation (Kuivakari and Kangas, 2005) and hide some technical deficiencies (Negroponte, 1996). Multimodality is a key ingredient of spatial experience: apart from visual sensory input, the experience of space involves the perception of auditory, olfactory, thermal, and tactile input, and the sense of proprioception (Gibson, 1986: 111). All this sensory input contributes to the establishment of a sense of space. This approach is also in agreement with Hall's conception of space (1966: 41-63). Therefore, it may be suggested that the development and use of multimodal interfaces results in spatial interfaces affording a more complete spatial experience.
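One practical benefit of multimodality described above is disambiguation: a second modality can resolve what one modality alone leaves unclear. The following minimal sketch illustrates this with a simple late-fusion scheme; the modality names, candidate commands, and confidence scores are hypothetical illustrations, not drawn from any cited system.

```python
# Hypothetical sketch of late fusion of two input modalities. Each
# recogniser returns candidate interpretations with confidence scores;
# combining them can disambiguate what neither modality resolves alone.

def fuse(speech, gesture, floor=0.1):
    """Score each candidate command by multiplying per-modality
    confidences (using a small floor for absent candidates) and
    return the highest-scoring interpretation."""
    candidates = set(speech) | set(gesture)
    scored = {c: speech.get(c, floor) * gesture.get(c, floor)
              for c in candidates}
    return max(scored, key=scored.get)

# "open" is ambiguous in speech alone; a pointing gesture resolves it.
speech = {"open door": 0.5, "open window": 0.5}
gesture = {"open window": 0.9}
print(fuse(speech, gesture))  # the window interpretation wins
```

The multiplication of confidences here is only one possible fusion rule; the point is that the combined interpretation exploits redundancy across modalities, mirroring the reduced cognitive load claimed for multimodal interfaces.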
Lastly, adaptive systems adapt to the user's characteristics and to environmental conditions via appropriate AI methods. Advances in adaptive technology and affective computing (e.g. Picard, 1997) will allow for the design and production of more personalised interfaces.
The aforementioned tendencies and technological advances contribute towards the replacement of the dominant, dialogue-like human-computer interaction mode and the adoption of a different, more implicit form of human computer interaction, which Schmidt (2005: 164) appropriately names "Implicit Human-Computer Interaction" (iHCI). In the context of iHCI, the user offers implicit input and receives implicit output. Implicit input refers to actions and behaviours of the user, which are not considered primarily as interaction-initiating, but are perceived as such by the system. Implicit output, similarly, refers to output, which occurs as a result of the reception and processing of implicit input. Implicit output is seamlessly integrated with the environment and supports the user's task. Essentially, the system detects subtle communicational cues inherent in the behaviour of a human through the use of appropriate devices. After processing these data, the system reaches some conclusions about the user's state and the task to be accomplished and may subtly act on the environment towards increasing the possibility of the user successfully completing the task.
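Schmidt's iHCI cycle described above (implicit input, inference about the user's state, subtle environmental output) can be sketched as a simple sense-infer-act loop. The sensors, thresholds, states, and actions in the sketch below are invented for illustration; they are assumptions, not part of Schmidt's or any cited system.

```python
# Hypothetical sketch of an implicit HCI loop: ordinary user behaviour is
# treated as input, a coarse user state is inferred, and the system acts
# subtly on the environment. All sensor names, thresholds, and actions
# are illustrative assumptions.

def infer_user_state(readings):
    """Map implicit input (sensor readings) to a coarse user state."""
    if readings["motion"] < 0.1 and readings["ambient_light"] < 50:
        return "resting"
    if readings["keyboard_activity"] > 0.8:
        return "focused_work"
    return "idle"

def implicit_output(state):
    """Choose an unobtrusive environmental adjustment for the state."""
    actions = {
        "resting": "dim lights further, mute notifications",
        "focused_work": "raise desk lamp, hold non-urgent messages",
        "idle": "no change",
    }
    return actions[state]

readings = {"motion": 0.05, "ambient_light": 30, "keyboard_activity": 0.0}
state = infer_user_state(readings)
print(state, "->", implicit_output(state))
```

Note that the user never issues an explicit command: the "input" is behaviour not primarily intended as interaction, and the "output" is an environmental change supporting the inferred task, which is the defining feature of iHCI as characterised above.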
User experience and communication in intelligent environments
Intelligent spaces may substantially alter the relationship between user and computer. From a communicational perspective, a person's experience of reality is altered by an additional layer of mediation that is placed between the user and the environment. This layer may have an impact on the users' conception of the computer and on their behaviour within such an enhanced environment. This section will focus on theories that have attempted to describe the relationship between the user and the context wherein interaction takes place, in order to develop a better understanding of intelligent spaces at a communicational level.
Paradigm shift: the computer as a potential interlocutor
The cognitive approaches are the earliest paradigms that describe the relationship between human and computer. According to these approaches, the human brain is seen as a specific type of information-processing unit consisting of a sensory input subsystem, a (higher-level) central information-processing subsystem, and a motor output subsystem (Kaptelinin, 1996). Memory provides a comparison with the representations of past experiences. An action may take place in the cognitive conscious or the cognitive unconscious, depending on whether it can be performed automatically, without conscious effort (Raskin, 2000).
Human-computer interaction may be seen as a constant information-processing loop involving two systems, the human and the computer: one's output serves as the other's input. This approach has the advantage of simplicity, but it fails to take into account the wider context in which interaction takes place. The advent of IUIs solidifies a shift in our view of computers, which was initiated by the transition from mechanistic to linguistic interaction methods (Suchman, 1987). Computers are no longer seen as simple tools; they have been viewed as a social medium (McCullough, 2004: 3) and even as potential interlocutors or "social actors" (Nijholt, 2004). Humans tend to attribute to them abilities and traits they do not have (e.g. intelligence) and are willing to interact with them in the same way as they do with other humans, especially in the case of anthropomorphic artificial entities; to such users, the experience of interacting with computers is similar to that of interacting with other humans (Bickmore, 2004).
Furthermore, it has been suggested that users who are aware of the possibility that their activity in an intelligent space is being monitored behave differently than if they were convinced to the contrary. Thus, intelligent spaces may have some of the characteristics of public spaces (Nijholt, 2004).
Plans and Situated Actions
Planning is often considered the driving force behind actions. Cognitive approaches view plans as the script for an activity, to be decided beforehand and followed. This model has been favoured by Artificial Intelligence research and models human activity as the formulation and execution of plans (Dourish, 2001). Plans are formulated with a goal in mind. The goal is divided into sub-goals, which are in turn divided into further sub-goals and so on until a level is reached in which no further division is possible.
The planning model, however, has been challenged by the Situated Action Theory (SAT) (Suchman, 1987; Mantovani, 1996). Proponents of SAT argue that plans do not determine the outcome of an activity; instead, they are fully formed after the activity has taken place in an attempt to describe or explain it. In short, the structuring of an activity may occur only as a result of the immediacy of the situation (Suchman, 1987; Nardi, 1996a). As Suchman (1987: 50) observes, "every course of action depends in essential ways upon its material and social circumstances". Humans often act on impulse and adapt to these circumstances, achieving intelligent action. Contexts in communication are not preset; rather, they are co-constructed by the participants. Communication is not viewed as the process of information exchange, but as the process of the exchange of meanings and interpretations of the situations the actors are involved in (Riva and Galimberti, 2001). In a sense, humans make sense of environments through activity in habitual contexts (McCullough, 2004: 21).
Intelligent spaces that adhere to the practices of multimodality, adaptation, and personalisation are expected to facilitate the creation of computers able to deviate from the planning model and assume a more human-like behaviour.
Activity theory
According to Activity Theory, artefacts such as tools and sign systems assume a crucial role in the shaping of all human experience. As such, human activity cannot be understood without an understanding of the role of artefacts in everyday social practice (Nardi, 1996b).
The interplay between actions and goals is highlighted by the hierarchical structure of activities (Kaptelinin, 1996). This structure consists of three distinct but interconnected levels. An activity sits at the top level and corresponds directly to a specific objective of a subject. This objective characterises the activity as a whole and is formed in order to satisfy a need. Activities consist of a sequence of actions, which correspond to goals. The fulfilment of actions and goals brings the subject closer to the accomplishment of the objective and the activity closer to its completion. Actions are undertaken through the completion of operations, which reside at the lowest level of the activity hierarchy and refer to acts performed automatically, without conscious effort. Operations are regulated by conditions, which encompass any characteristics of a given artefact that may influence the outcome of the operation. Certain actions may, due to habituation, become operations, their successful completion no longer requiring conscious effort. Conversely, operations may become actions if the conditions that regulate them change, as for instance in the case of equipment malfunction. In this case, the subject is forced to expend more cognitive resources in order to complete these actions successfully.
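The three-level structure just described, including the demotion of habituated actions to operations and their promotion back to actions when conditions change, can be illustrated with a minimal data model. The class and attribute names below are our own illustrative choices, not terminology from Kaptelinin or the Activity Theory literature.

```python
# Minimal illustration of Activity Theory's hierarchy: an Activity is
# driven by an objective and decomposes into tasks that are either
# conscious, goal-directed actions or automatic, condition-regulated
# operations. Names are illustrative, not from the cited literature.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    conscious: bool  # True: action (needs attention); False: operation

    def habituate(self):
        """Repeated successful execution turns an action into an operation."""
        self.conscious = False

    def conditions_changed(self):
        """Changed conditions (e.g. equipment malfunction) turn an
        operation back into a conscious action."""
        self.conscious = True

@dataclass
class Activity:
    objective: str
    tasks: list = field(default_factory=list)

driving = Activity("commute to work", [Task("shift gears", conscious=True)])
gear = driving.tasks[0]
gear.habituate()           # with practice, gear-shifting becomes automatic
gear.conditions_changed()  # a stiff clutch demands attention again
print(gear)
```

The two methods capture the dynamic boundary between the lower two levels: what counts as an action or an operation is not fixed but depends on habituation and on the stability of the regulating conditions.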
An intelligent space is dependent on the environmental setting, elements of which are the intelligent artefacts embedded throughout the environment. The arrangement of space and the artefacts in it may facilitate or impede habituation (and, consequently, influence the characterisation of an act as either action or operation), depending on whether it effectively supports the user's spatial reasoning skills. As Suchman (1987) notes, self-explanatory artefacts, whose purpose and affordances become apparent upon examination, assist in the resolution of the paradoxical nature of modern technology, namely the requirement that increasingly complex technology be usable with decreasing amounts of user training. The "intelligence" and intelligibility of artefacts might contribute even more towards successful support of user activity in the context of real space.
The notion of planning in Activity Theory is not as rigid as the planning model described earlier. Activity Theory does not consider humans and artefacts equally important, thus demonstrating a more "human-centred" outlook on the relationship between artefacts and human activity: artefacts are seen as external components that augment human activity. Thus, functional organs are formed (Kaptelinin, 1996) which give humans the ability to perform a previously impossible function or to perform an existing function more efficiently. Therefore, in the context of this view, the user's interaction with the environment is mediated by the computer, seen as a special type of functional organ. This view has led Kaptelinin (1996: 111) to suggest that two interfaces should be considered, one between the user and the computer and another between the computer and the environment. In the case of an intelligent space, however, the computer is an essential part of the environment that should facilitate the decomposition of the user's objectives into actions and operations by presenting the user with appropriate affordances (essentially, the environment is the system, as far as the user is concerned). It should also ensure that the conditions of an operation remain largely unchanged, so that the task may retain its operation status and be accomplished with a minimum of cognitive effort. To that end, the system should limit the number of user interruptions (breakdowns) to a minimum, and only for the correction of significant problems or the overall improvement of the flow of an activity (Riva, 2005). Support of the user's modalities in the user's physical environment helps preserve intuitiveness, thus facilitating habituation and reducing cognitive load.
Discussing the spatial experience afforded by intelligent environments
Merleau-Ponty (1962: 252) has suggested that "being is synonymous with being situated" and that "… space is existential; we might just as well have said that existence is spatial" (1962: 293-4). Thiel (1961: 35) defines spatial experience as "a biological function, necessary for the continual adaptation of any organism to its environment, for the purposes of survival". Accordingly, Norberg-Schulz (1971: 9) explains the spatial experience of humans: "Most of man's actions comprise a spatial character, in a sense that objects of orientation are distributed according to spatial relations…Man orients to objects; he adapts physiologically and technologically to physical things… his cognitive or affective orientation to different objects aims at establishing a dynamic equilibrium between him and his environment". These physical objects, which are distributed in space, actually allow for space and the spatial experience as such, by virtue of their formal characteristics and position in space. Active intent is vital for the formation of context. According to McCullough (2004: 52), a context for an action is created due to "a coupling of perceived resources to active intent", and the sum of all present contexts forms the environment. When affordances are perceived in a similar fashion by different people, the identity of the environment is reinforced.
Human-computer interaction acquires such characteristics as disappearance, implicitness, and transparency. Consequently, information technology has become ambient social infrastructure (McCullough, 2004: 21). It may be suggested that in the case of intelligent spaces, the interface is spatialised and user interaction now takes place in the context of an environment. This results in a transformed type of spatial experience, where interaction capability and "intelligence" are bestowed upon physical objects that partly determine spaces as well as on spatial entities themselves. Thus, the spatial experience in this case is determined by both static physical objects and dynamically evolving entities, which function as input/output devices mediating the communication of information between the user and the system. Some of these mediating entities, however, may also be experienced as physical entities in their own right and thus contribute to determining the spatial experience as such.
The interaction between the user and the system takes place within a social and a physical (or virtual, but spatial nevertheless) context. Intelligent spaces may influence both of these types of context, which, in turn, may have an effect on the users' behaviour. Space is transformed into a quasi-intelligent entity that keeps track of the users' activities via multimodal techniques. As suggested earlier, enhancing the multimodal character of an environment may result in a more complete spatial experience. Also, embedding "intelligence" in real environments, combined with the implicitness of interaction afforded by such systems, may result in an environmental experience resembling the experience of a public space. Indeed, the fact that the user is aware of being monitored makes intelligent spaces similar to public spaces in terms of the way the user's behaviour is influenced. The system's adaptation to the user may facilitate the user's habituation to the system's presence, which may ultimately lead users to consider such spaces as private once again.
References
- Alcañiz, M. and Rey, B. (2005), New Technologies for Ambient Intelligence, in G. Riva, F. Vatalaro, F. Davide and M. Alcañiz (eds), Ambient Intelligence: The Evolution of Technology, Communication and Cognition Towards the Future of Human-Computer Interaction, IOS Press, Amsterdam, pp. 3-15.
- Bickmore, T.W. (2004), Unspoken Rules of Spoken Interaction, Communications of the ACM, 47(4), pp. 38-44.
- Dourish, P. (2001), Where the Action Is: The Foundations of Embodied Interaction, The MIT Press, Cambridge, MA.
- Gibson, J.J. (1986), The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates, London (first published in 1979).
- Hall, E.T. (1966), The Hidden Dimension, Anchor Books, New York.
- Kaptelinin, V. (1996), Activity Theory: Implications for Human-Computer Interaction, in B. Nardi (ed), Context and Consciousness: Activity Theory and Human-Computer Interaction, The MIT Press, Cambridge, MA, pp. 103-116.
- Kuivakari, S. and Kangas, S. (2005), Pleasure Platforms and Sensomotoric Interfaces: Notes from a Preliminary Survey of Adaptive User Interface Design, in M. Ylä-Kotola, S. Inkinen and H. Isomäki (eds), The Integrated Media Machine: Aspects of Future Interfaces and Cross-Media Culture, University of Lapland, Rovaniemi, pp. 77-99.
- Mantovani, G. (1996), Social Context in HCI: A New Framework for Mental Models, Cooperation, and Communication, Cognitive Science, 20(2), pp. 237-269.
- McCullough, M. (2004), Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing, The MIT Press, Cambridge, MA.
- Merleau-Ponty, M. (1962), The Phenomenology of Perception, Routledge, London.
- Nardi, B. (1996a), Studying Context: A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition, in B. Nardi (ed), Context and Consciousness: Activity Theory and Human-Computer Interaction, The MIT Press, Cambridge, MA, pp. 69-102.
- Nardi, B. (1996b), Activity Theory and Human-Computer Interaction, in B. Nardi (ed), Context and Consciousness: Activity Theory and Human-Computer Interaction, The MIT Press, Cambridge, MA, pp. 7-16.
- Negroponte, N. (1996), Being Digital, Hodder & Stoughton, London.
- Nijholt, A. (2004), Where Computers Disappear, Virtual Humans Appear, Computers and Graphics, 28(4), pp. 467-476.
- Norberg-Schulz, C. (1971), Existence, Space and Architecture, Praeger Ltd., New York.
- Picard, R.W. (1997), Affective Computing, The MIT Press, Cambridge, MA.
- Raskin, J. (2000), The Humane Interface: New Directions for Designing Interactive Systems, Addison Wesley, Boston.
- Riva, G. (2005), The Psychology of Ambient Intelligence: Activity, Situation and Presence, in G. Riva, F. Vatalaro, F. Davide and M. Alcañiz (eds), Ambient Intelligence: The Evolution of Technology, Communication and Cognition Towards the Future of Human-Computer Interaction, IOS Press, Amsterdam, pp. 17-33.
- Riva, G. and Galimberti, C. (2001), Virtual Communication: Social Interaction and Identity in an Electronic Environment, in G. Riva and F. Davide (eds), Communication Through Virtual Technology: Identity, Community and Technology in the Internet Age, IOS Press, Amsterdam, pp. 23-46.
- Russell, D.M., Streitz, N.A. and Winograd, T. (2005), Building Disappearing Computers, Communications of the ACM, 48(3), pp. 42-48.
- Schmidt, A. (2005), Interactive Context-Aware Systems Interacting with Ambient Intelligence, in G. Riva, F. Vatalaro, F. Davide and M. Alcañiz (eds), Ambient Intelligence: The Evolution of Technology, Communication and Cognition Towards the Future of Human-Computer Interaction, IOS Press, Amsterdam, pp. 159-178.
- Streitz, N. and Nixon, P. (2005), The Disappearing Computer, Communications of the ACM, 48(3), pp. 33-35.
- Suchman, L.A. (1987), Plans and Situated Actions: The Problem of Human Machine Communication, Cambridge University Press, Cambridge.
- Thiel, P. (1961), A Sequence-Experience Notation, Town Planning Review, 32, April 1961.
- Weiser, M. (1991), The Computer for the Twenty-First Century, Scientific American, 265(3), pp. 94-104.