Home

[Photo: a man wearing a head-mounted display]

About the Lab.

The Virtual Environment and Multimodal Interaction (VEMI) Laboratory, directed by Dr. Nicholas Giudice, is part of the Spatial Informatics program in the School of Computing and Information Science at the University of Maine. The VEMI Lab houses the university’s first, and Maine’s only, research facility combining a fully immersive virtual reality (VR) installation with augmented reality (AR) technologies in an integrated research and development environment. Our facility incorporates head-mounted displays (HMDs) that present computer-simulated virtual information, augmented reality goggles that superimpose virtual information on the physical environment, and inertial and optical sensors that provide three-dimensional tracking of a person as they move freely about the lab while immersed in virtual, augmented, or mixed reality worlds. As described on our Virtual and Augmented Reality page, these technologies offer many advantages as a research tool, as a development and usability platform, and as an informatic framework for visualization and multimodal imaging.


Research Interests in the VEMI Lab.

Research in the lab uses behavioral experiments to study human spatial cognition, to determine the optimal information requirements for the design of multimodal interfaces, and to provide a testbed for evaluation and usability research on navigational technologies. We adopt an interdisciplinary approach that combines basic and applied research. However, the overarching theme connecting our projects is an interest in multimodal spatial information: comparing 3-D spatialized audio, touch, vision, and spatial language across both input (encoding, processing, and representation of information) and output (behavioral expression and information transmission, e.g., interface technologies). For a good starting point describing the motivation for what we do here in the lab, check out our Philosophy page.

Our work can be categorized into three related programmatic themes:

• Functional equivalence of spatial representations. The basic question underlying this work is: how well does non-visual information support spatial tasks that are generally carried out with vision? For instance, how is seeing a map different from feeling it? Will exposure to each of these input modes lead to the same level of learning and support the same performance on subsequent spatial behaviors? Several lines of research provide clear evidence that information learned from different encoding modalities can lead to highly similar performance on a range of spatial tasks, irrespective of the input source, an outcome referred to as functional equivalence. We argue that this similarity is possible because separate inputs contribute to an amodal spatial representation in memory that is equally accessible for supporting action. More on this research, the underlying theory, and our collaborators Jack Loomis and Roberta Klatzky can be found on our Research Interests page.

• Development of multimodal interfaces for real-time navigation of outdoor and indoor spaces. The goal of this research is to design interactive spatial displays that support environmental learning, cognitive mapping, and wayfinding in both outdoor (O space) and indoor (I space) environments, as well as seamless transitions between O/I spaces. Our primary interest is determining the minimal information requirements that support the highest level of environmental learning and navigation performance. These results guide ongoing work in the lab on the design of the most effective visual, auditory, haptic, language-based, and multimodal navigation interfaces. Our main research focus is spatial cognition of indoor environments and the development of spatial displays for use in buildings, as learning and navigating indoor spaces pose significant perceptual, cognitive, and technological challenges compared with the same tasks performed outdoors. See our Research Interests and Indoor Navigation pages to read more about the work we are doing to study these issues and to learn about our collaborators on several funded projects.

[Photo: the VEMI Lab windmill demo]

• Spatial cognition and navigation without vision. Most people have never considered the information they use to get from place to place. When asked to introspect on the matter, however, their answers generally refer to visual cues from the environment that guide their behavior. Despite this intuition, vision is not necessary for accurate spatial cognition. Good evidence comes from results showing that vision and non-visual inputs build up functionally equivalent spatial representations, as well as from work demonstrating similar navigation performance by sighted and blind users alike when using visual and non-visual spatial displays. Nevertheless, the literature suggests that in many instances, navigation by blind and low-vision people is less accurate and more cognitively effortful than it is for their sighted peers. Our philosophy starts from the premise that the main challenge of blind spatial cognition is lack of information access rather than vision loss. Thus, we argue that the solution to blind navigation lies in providing the requisite spatial cues through non-visual channels. To investigate this hypothesis, our research aims to determine what spatial information is needed to support accurate spatial learning and navigation, and the best methods for conveying this information through non-visual and multimodal spatial displays.

This research is not solely relevant to blind and low-vision individuals; indeed, our work has significant relevance for all manner of purposes and people navigating indoor settings. For instance, we are interested in how non-visual and multimodal spatial technologies, coupled with building information models (BIM), could be implemented in an indoor navigation system analogous to the GPS-based systems used outdoors for vehicle navigation. A seamless O/I space navigator could complement what is already available outdoors by providing real-time indoor location-based services or indoor route guidance for shoppers, tourists, or visitors to a new building. Access to non-visual environmental information and real-time route guidance through speech or haptic output is also relevant to emergency responders, firefighters, and anybody else operating in buildings under low-light or smoky conditions.


[Photo: Nick, Kit, and Tim in the VEMI Lab]

For more detail on any of these research areas, check out our Research Interests, Philosophy, and Current Projects pages, download our Publications, find out about our lab Personnel and Collaborators, or send us an email.

Contact Us.

We are always looking for folks to participate in our studies, motivated students to join the lab, and collaborators for new research, so please get in touch if you are interested in our work. We believe that the best way to learn about what we do, and to fully appreciate the capabilities of immersive virtual and augmented reality technology, is to experience it for yourself. We are happy to give lab tours and provide demonstrations to anybody who wants first-hand experience being immersed in a variety of virtual and augmented reality worlds. If you have any questions or comments, or want to schedule a visit, feel free to contact the lab by email or by phone at 207-581-2151.