Immersive Mixed-Reality Configuration of Hybrid User Interfaces

 

 


Abstract

In user interfaces in ubiquitous augmented reality, information can be distributed over a variety of different, but complementary, displays. For example, these can include stationary, opaque displays and see-through, head-worn displays. Users can also interact through a wide range of interaction devices. In the unplanned, everyday interactions that we would like to support, we would not know in advance the exact displays and devices to be used, or even the users who would be involved. All of these might even change during the course of interaction. Therefore, a flexible infrastructure for user interfaces in ubiquitous augmented reality should automatically accommodate a changing set of input devices and the interaction techniques with which they are used. This project embodies the first steps toward building a mixed-reality system that allows users to configure a user interface in ubiquitous augmented reality.
This project has been conducted at Columbia University by a visiting scientist from our research group.

 

 

Project description

In hybrid user interfaces, information can be distributed over a variety of different, but complementary, displays. For example, these can include stationary, opaque displays and see-through, head-worn displays. Users can also interact through a wide range of interaction devices. In the unplanned, everyday interactions that we would like to support, we would not know in advance the exact displays and devices to be used, or even the users who would be involved. All of these might even change during the course of interaction. Therefore, a flexible infrastructure for hybrid user interfaces should automatically accommodate a changing set of input devices and the interaction techniques with which they are used. This project embodies the first steps toward building a mixed-reality system that allows users to configure a hybrid user interface.
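To make this concrete, the following minimal sketch (in Python, using hypothetical names rather than our system's actual API) illustrates one way such an infrastructure could accommodate input devices that join and leave at runtime, rebinding interaction techniques on the fly:

```python
# Minimal sketch (hypothetical names, not the project's actual API) of an
# infrastructure that tolerates a changing set of input devices at runtime.

from typing import Callable, Dict

class DeviceRegistry:
    """Tracks currently connected input devices and the interaction
    technique bound to each one; devices may join or leave at any time."""

    def __init__(self) -> None:
        self._techniques: Dict[str, Callable[[dict], None]] = {}

    def device_connected(self, device_id: str,
                         technique: Callable[[dict], None]) -> None:
        # Bind an interaction technique when a device appears.
        self._techniques[device_id] = technique

    def device_disconnected(self, device_id: str) -> None:
        # Drop the binding when the device disappears mid-interaction.
        self._techniques.pop(device_id, None)

    def dispatch(self, device_id: str, event: dict) -> None:
        # Route an input event to whatever technique is currently bound.
        technique = self._techniques.get(device_id)
        if technique is not None:
            technique(event)

registry = DeviceRegistry()
registry.device_connected("dial-1", lambda e: print("rotate by", e["delta"]))
registry.dispatch("dial-1", {"delta": 15.0})   # -> rotate by 15.0
registry.device_disconnected("dial-1")
registry.dispatch("dial-1", {"delta": 5.0})    # silently ignored
```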

A key idea underlying our work is to immerse the user within the authoring environment. Immersive authoring has been explored by Lee and colleagues in a system that supports a wider range of parameters than ours currently does. However, while their system is restricted to a single view, and interaction with real objects is limited to ARToolKit markers, our system supports multiple coordinated views with different visualizations and interaction with a variety of physical controllers.

In our scenario, a user interacts with physical input devices and 3D objects drawn on several desktop displays. The input devices can be configured to perform simple 3D transformations (currently scale, translation, and rotation) on the objects. The user's see-through head-worn display overlays lines that visualize data flows in the system, connecting the input devices and objects, and annotates each line with an iconic representation of its associated transformation. The user wields a tracked wand with which they can reconfigure these relationships, picking in arbitrary order the three elements that comprise each relationship: an input device, a 3D object, and an operation chosen from a desktop menu.
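The sketch below suggests how this arbitrary-order selection could be handled (again with illustrative names; our system's internals are not shown here): each wand pick fills in one slot of a pending (device, operation, object) triple, and once all three slots are filled, the corresponding data-flow connection is established.

```python
# Hedged sketch of the wand-based reconfiguration step: the user picks an
# input device, a 3D object, and an operation in any order; once all three
# are known, a data-flow connection is (re)established. Names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PendingBinding:
    device: Optional[str] = None
    obj: Optional[str] = None          # a 3D object on a desktop display
    operation: Optional[str] = None    # "scale", "translate", or "rotate"

    def pick(self, kind: str, name: str) -> Optional[tuple]:
        # Record whichever element the wand selected; order is irrelevant.
        attr = {"device": "device", "object": "obj", "operation": "operation"}
        setattr(self, attr[kind], name)
        if self.device and self.obj and self.operation:
            binding = (self.device, self.operation, self.obj)
            self.device = self.obj = self.operation = None  # reset for next
            return binding   # caller would now draw the overlaid flow line
        return None

pending = PendingBinding()
pending.pick("operation", "rotate")        # menu pick on the desktop
pending.pick("object", "teapot")           # wand pick of a 3D object
print(pending.pick("device", "dial-1"))    # -> ('dial-1', 'rotate', 'teapot')
```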

While we have designed our initial scenario to be simple, we would ultimately like to support users in their daily tasks, such as specifying how a pocket-sized device can control the volume of a living-room audio system or the brightness of the lights in a bedroom. Thus, the objects used in our scenario are placeholders for aspects of an application a user might want to manipulate, while the 3D transformations are placeholders for more general operations a user could perform on them.


Videos

Download video here.

Publications

Sandor, C., Bell, B., Olwal, A., Temiyabutr, S., and Feiner, S. "Visual end user configuration of hybrid user interfaces (demo description)." In Proc. ACM SIGMM 2004 Workshop on Effective Telepresence, New York, NY, October 15, 2004.


Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any organization supporting this work.