A Multimodal Interaction Design Concept for In-Car Navigation Systems
Graduate project for an IxD course at the University of Washington with Brandon Caruso, Branden Keller and Terrie Chan
Research, Sketching, Interaction Design, Visual Design, Motion Design
Nov - Dec 2016 (3 Weeks)
We set out to design a less distracting in-car navigation experience for drivers. To accomplish this goal, we moved through research, ideation, and concept refinement to arrive at a solution.
This phase of our design process was all about trying to understand the current state of car navigation and empathizing with drivers who make use of these systems. Through secondary research, contextual inquiry, storytelling, and competitive analysis, we were able to identify key problems to inform our design.
To quickly get up to speed on the latest academic research and studies regarding car navigation, we conducted extensive secondary research.
Contextual Inquiry + Storytelling
Our team shared car navigation stories with each other. We also conducted a contextual inquiry of family members using their car navigation systems. These activities helped us build empathy.
To understand the competitive landscape of car navigation, we created a 2x2 competitive analysis matrix to look for potential opportunity spaces to explore. We looked at Google Maps, Waze, Apple Maps, Android Auto, CarPlay, Hudway, and a standard Lexus navigation system.
Lack of Empathy: Navigation systems today lack the ability to empathize with the driver and their current emotional state and context, which can cause further driver distraction.
Cognitive Load: Smartphones, today's primary means of car navigation, are constantly bombarded with texts, emails, and notifications that can distract a driver. Additionally, large integrated center-console touchscreens (as seen in the Tesla Model S) can take a driver's attention off the road.
Poor Car Integration: Today's built-in car voice control and navigation systems perform so poorly and are so outdated that they actually increase driver distraction.
How might we increase the empathy of navigation systems, decrease their cognitive load, and increase their integration to decrease driver distraction?
In this stage of the design process, our team translated our research findings into ideas through brainstorming, sketching, and storyboarding.
Our team brainstormed how to create a less distracting interface given the research we found.
Branden and I conducted a UI exploration of how car dashboards are designed today and shared our findings with the team.
I completed a series of sketches to demonstrate how the interface would look and function and shared them with the team.
Because we needed to integrate both motion graphics and audio together, I created a storyboard to outline how we'd showcase key features of the interface.
Now that we understood what we needed to design, we began to refine our sketches and storyboards into high fidelity interface concepts, motion designs, and eventually a video prototype to showcase the features of Oslo.
Oslo is an in-car navigation assistant that uses directional voice controls, a navigation-ready dashboard, a heads-up windshield display, and haptic feedback to keep the driver’s eyes and mind focused on the road while navigating. Oslo empathizes with the driver and their current context, and therefore provides contextually relevant feedback in the form of meaningful visual and auditory cues to enhance the navigation experience.
On the windshield, Oslo displays turn-by-turn navigational directions and a visual display of its current state to help the driver maintain the best route while their eyes remain fixed on the road.
Integration, Cognitive Load
A digital navigation-ready dashboard replaces the tachometer when the driver asks for directions. It provides the driver with a list of upcoming turns, a route map, and other pertinent navigational information, such as time and distance to destination, directly in front of them.
Integration, Cognitive Load
In the event of a dangerous situation, Oslo gives the driver haptic feedback in the steering wheel along with visual and auditory cues to avoid an accident.
Driver Empathy, Integration, Cognitive Load
To tie it all together, we combined our high-fidelity motion designs, voice recordings, and research into a four-minute video prototype. This prototype showcases how Oslo's features work together as a well-orchestrated multimodal interface that helps a driver focus their attention on the road.
Given that our group had only 3 weeks to complete this high-fidelity video prototype, with limited experience in motion graphics, I'd say this was a solid execution of the design process in a real-world context. Had we had more time, we would have liked to pursue the following:
Driver Feedback: It would have been helpful to show our design concept to more drivers who use navigation systems to get their feedback on the final concept. This would have allowed us to further refine our interface.
Interactive Prototype: Although a video prototype is an effective means of communicating a concept, it would have been great to create an interactive prototype simulation of the experience to evaluate our concept. This would have allowed us to measure how well our system empathizes with the driver, integrates with current car interfaces, and reduces cognitive load while driving.
More Research: It would have been helpful to conduct additional primary research such as interviews and contextual inquiries to better inform our problem space.