Heads-Up Computing Homepage

Overview

Heads-Up Computing is an emerging field in Human-Computer Interaction (HCI) that focuses on creating computing systems that integrate smoothly into a user's natural environment and daily activities. The objective is to provide information and computing capabilities in a way that is unobtrusive and complementary to the user's current tasks, rather than distracting them or pulling dedicated attention away from their immediate real-world context.

To clarify further, here's a breakdown of the components that contribute to Heads-Up Computing:

Body-Compatible Hardware Components

Current devices are not designed to support seamless use during daily activities.

Mobile phones allow us to interact with digital information on the go, but they often distract us, turning us into so-called smartphone zombies. This is because their input and output components are centralized on the device, requiring us to adapt our own input and output channels, our eyes and hands, to the device's design.

Heads-up computing, on the other hand, is designed to align with human input/output capabilities. Rather than using a centralized design, it employs a distributed design and separates the computer into two pieces: the head-piece and the hand-piece.

    Responsibilities of the head-piece:

  • Provide real-time visual and aural feedback
  • Understand what the user sees and hears
  • Understand facial gestures & emotions
  • Understand attention
  • Capture speech input

    Responsibilities of the hand-piece:

  • Provide real-time haptic feedback
  • Track hand position, posture, and movements
  • Provide additional interaction commands

This is merely a functional division, and its implementation can vary. For instance, Apple's Vision Pro integrates the hand-piece within the head-piece, eliminating the need for a separate hand-based device.
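To make this division concrete, here is a minimal sketch of the two pieces as abstract interfaces, directly mirroring the responsibility lists above. All class and method names here are hypothetical illustrations, not an actual platform API:

```python
from abc import ABC, abstractmethod


class HeadPiece(ABC):
    """Worn on the head: senses what the user sees and hears, renders feedback."""

    @abstractmethod
    def render_visual(self, content: str) -> None: ...  # real-time visual feedback

    @abstractmethod
    def play_audio(self, clip: bytes) -> None: ...      # real-time aural feedback

    @abstractmethod
    def capture_speech(self) -> str: ...                # speech input

    @abstractmethod
    def estimate_attention(self) -> float: ...          # 0.0 (free) to 1.0 (occupied)


class HandPiece(ABC):
    """Worn or held on the hand: tracks the hand and renders haptic feedback."""

    @abstractmethod
    def vibrate(self, intensity: float) -> None: ...    # real-time haptic feedback

    @abstractmethod
    def hand_pose(self) -> dict: ...                    # position, posture, movement

    @abstractmethod
    def poll_commands(self) -> list[str]: ...           # additional interaction commands
```

A concrete platform would implement both interfaces on whatever hardware is available; a Vision-Pro-style device, for example, could realize HandPiece entirely through the head-piece's cameras.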

    A basic setup of the heads-up platform can be seen in the following papers:

  • EYEditor, GlassMessaging, and PandaLens use a wearable ring mouse as their hand-piece. Note that this device cannot track hand position, posture, or movements, and provides no active haptic feedback.


Multimodal Voice, Gaze, and Gesture Interaction

For seamless interaction during daily activities, it's essential to utilize complementary channels. Most daily tasks already engage our visual attention and manual movements; if digital interaction also depends heavily on the same channels, it creates significant conflict and disrupts seamless interaction.

Heads-Up Computing makes use of natural, complementary human communication modalities for device interaction. Instead of relying solely on traditional input methods like keyboards, mice, or touchscreens, which occupy the hands and eyes, it incorporates voice, gaze, and subtle gestures.
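As a toy illustration of how these modalities complement one another, the sketch below pairs a spoken command with the user's recent gaze, so that selection needs neither a free hand nor sustained visual attention. The function and data structures are our own invented examples, not an existing API:

```python
from dataclasses import dataclass


@dataclass
class GazeSample:
    target_id: str    # object currently fixated, reported by the head-piece's eye tracker
    timestamp: float  # seconds


def resolve_command(speech: str, gaze_history: list[GazeSample],
                    now: float, window: float = 1.5) -> str | None:
    """Pair a spoken verb such as 'select' with the most recently fixated object."""
    if "select" not in speech.lower():
        return None  # not a command this toy resolver understands
    recent = [g for g in gaze_history if now - g.timestamp <= window]
    return recent[-1].target_id if recent else None


# Example: while cooking, the user glances at a recipe card and says "select this".
history = [GazeSample("recipe_card_3", timestamp=10.2)]
print(resolve_command("select this", history, now=11.0))  # -> recipe_card_3
```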

Environment-Aware and Fragmented-Attention-Friendly Interaction Design

Fragmented-Attention-Friendly Interaction

Of course, converting existing media into fragmented-attention-friendly interfaces can be a tedious process, so we have developed a tool that makes this easier:

VidAdapter: Adapting Blackboard-Style Videos for Ubiquitous Viewing | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 7, No. 3

A similar concept, Glanceable AR, comes from the work of Doug Bowman.

In social situations, additional considerations are necessary: maintaining visual attention on the conversation partner while managing other interaction tasks is crucial. This leads us to the concept of attention-maintaining interfaces. Examples of such designs include:

Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions | Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems

ParaGlassMenu: Towards Social-Friendly Subtle Interactions in Conversations | Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
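As a back-of-the-envelope sketch of what paracentral placement involves, the snippet below converts a target visual eccentricity into a pixel offset on an OHMD, so that a secondary widget sits a few degrees off the fovea, where the conversation partner's face remains. The display parameters are made-up placeholders, not any particular headset's specifications:

```python
FOV_DEG = 30.0   # assumed horizontal field of view of the OHMD
WIDTH_PX = 1280  # assumed horizontal resolution


def eccentricity_to_px(ecc_deg: float) -> int:
    """Pixels from screen centre for a given visual eccentricity,
    assuming pixels are spread uniformly across the field of view."""
    return round(ecc_deg / FOV_DEG * WIDTH_PX)


# Place a secondary widget about 8 degrees off-centre (paracentral region),
# keeping the fovea free for the conversation partner.
print(eccentricity_to_px(8.0))  # -> 341 px from centre
```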

Resource-Aware Interaction

Designing interfaces that are friendly to fragmented attention is one approach to creating versatile interfaces for heads-up computing. Another is to design interfaces that are aware of, and responsive to, the user's current state, including their cognitive load and physical abilities. The system adapts its interactions to the user's capacity, ensuring that information is delivered and tasks are managed in a way that matches the user's available resources; we call this resource-aware interaction.

For example, if a user is deeply focused on a complex task, the computing system would minimize interruptions and provide information subtly. On the other hand, if the user is in a more relaxed state, the system might present information that requires more attention.
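A minimal sketch of such a delivery policy, with illustrative thresholds rather than values taken from our cognitive models:

```python
def deliver(notification: str, urgency: float, attention_load: float) -> str:
    """Choose a presentation style; urgency and attention_load are both in [0, 1]."""
    if attention_load > 0.8 and urgency < 0.9:
        return "defer"                          # deeply focused: hold non-critical items
    if attention_load > 0.5:
        return f"subtle glyph: {notification}"  # fragmented attention: glanceable cue
    return f"full card: {notification}"         # relaxed: richer presentation


# A message arriving while the user is deeply focused is quietly deferred.
print(deliver("New message from Alex", urgency=0.3, attention_load=0.9))  # -> defer
```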

To this end, we have studied the cognitive modeling of a heads-up multitasker and produced a computational-rationality cognitive model for understanding users' mental states.


Based on these models, we can design optimal interfaces to cater to various situations a user may encounter.

Summary

Heads-Up Computing aims to reshape the way we interact with technology by creating a computing experience that feels like a natural extension of the user's abilities and their surrounding environment. This approach marks a significant shift from a device-centric to a human-centric model of computing, where the technology serves the user in a subtle and unobtrusive manner, enhancing their interaction with the real world rather than obstructing it.

The text above provides a succinct summary of the key features and objectives of heads-up computing. As a lab, we have made some progress in this exciting field, pushing the frontiers of what is possible with this technology. However, we've only just begun to uncover the vast potential that this field holds. There's still a significant amount of exploration and experimentation required to fully understand and leverage the capabilities of heads-up computing.

This exploration includes, but is not limited to, designing suitable input and output methods that cater to a variety of user needs and contexts. We need to accommodate not only simple interactions such as selection and confirmation, but also more complex tasks like generating a comprehensive report after carrying out an inspection in a factory environment. Moreover, understanding the cognitive and behavioral models that underpin these interactions presents a vast area for research. This knowledge will allow us to create more intuitive and effective solutions, shaping the future of human-computer interaction.