Voice in a Vacuum

Voice technology is often discussed in isolation, yet users interact with systems in many other ways. What gestures are common to users? How do people want to engage their voice systems? How can the windshield of a car support voice technology for an improved user experience? Our UX team conducts several global studies each year that put the voice user first. In this talk we give a deep dive into our user experience research on HMI, AI, and automotive — looking 3, 10, and 20 years down the road. Key takeaways will go beyond understanding the challenges of voice and automotive. Instead, we will pull back the curtain and give attendees an inside look at our unique methodologies and analyses. We have users hunt for taxi cabs while driving in our simulator. We have users draw their virtual assistants. We have users describe how smell could be used to control a car. These unusual, creative research techniques help our team see that voice does not exist in a vacuum but can be enhanced with multimodal interaction.
I am responsible for qualitative and quantitative testing, research, and analysis at Nuance's DRIVE Lab, using focus groups, in-depth interviews, eye tracking, surveys, and experiments. My research focuses on user perceptions and driving performance while users are engaged with their infotainment system and voice-enabled technology. I am interested in attention and cognitive processing in media communication and interaction. I am a former television news producer and assistant professor.