
IVAs Are Popular in New Cars, but Concerns, Limits Remain.

Are you old enough to remember Knight Rider? The hit TV show from the 1980s featured David Hasselhoff as a crime fighter whose only partner was an artificially intelligent supercar named KITT. As a kid I was, frankly, obsessed. I wanted my own KITT. While cars aren't exactly fighting crime yet, they are becoming smarter and smarter, and speech technologies are at the heart of this transformation. Intelligent virtual assistants (IVAs) are changing the way we do a lot of things, in and out of the car. We ask Siri to set alarms, tell Alexa to give us reminders, and ask Google to play our favorite song. In fact, in January 2019, a research report from Global Market Insights found that the global IVA market is set to grow from its current market value of more than $1 billion to over $11.5 billion by 2024. One of the big factors behind that growth is the automotive industry.

According to the report, the IVA market in automotive applications is expected to grow rapidly over the forecast time span. IVAs have the power to provide personalized assistance to drivers and can be integrated with automatic parking systems, adaptive cruise control, lane change assist, and other ADAS controls. And of course the infotainment systems in today's connected cars are ripe to be transformed by speech-enabled tools.

In fact, speech recognition systems are already a popular feature in new cars. But they aren't quite standard yet. As MarketWatch reports, "A total of 55 percent of all new motor vehicles produced in 2019 will incorporate voice recognition, up from 37 percent in 2012. The increase in voice-recognition units in vehicles will drive revenue to $170 million in 2019, more than double from $81 million in 2011."

For now, though, there are still workarounds. Juan E. Gilbert, Andrew Banks Family Preeminence Endowed Professor and chair of the computer science department at the University of Florida, says, "Most of [the new cars] will have this, but there are still some holdouts. However, you can connect your phone to your vehicle's infotainment system and use speech that way as an option."

Tom Schalk, vice president of voice technology at SiriusXM, says, "Most of today's car models support Apple's CarPlay and Google's Android Auto, both of which provide best-in-class infotainment systems that are powered by the driver's phone. Siri and Google Assistant, along with navigation, voice dialing, voice texting, and media management, are all speech-enabled via the cloud. Embedded speech technology is built into all new cars and does not leverage the cloud."

J.D. Power and Amazon's Alexa Auto teamed up for an online survey in July 2018. They found 59% of consumers are more likely to purchase a car from auto brands that support their preferred voice assistant used in the home. Additionally, according to the survey, 76% of U.S. consumers are interested in seeing their smart speaker voice assistant accessible from inside their next car.

This makes sense. If you're driving in your car at 8:30 a.m. and ask your voice assistant to remind you to perform a task before you go to bed, you need to know your assistants are talking to each other so you get the reminder when you're actually getting ready for bed.

It's clear that consumer expectations are changing. But is the reality of speech technology in the car keeping up?

Safety First

Many of the now (almost) standard speech technologies in cars were born out of safety concerns. As cell phones took over and moved into every facet of our lives--including the car--distracted driving became an issue, and speech recognition came to the rescue. Thanks to voice controls, drivers no longer have to look at their phones to read or send a text, get directions, or do any of the other distracting things they are, let's face it, still going to do regardless of distracted-driving laws.

Gilbert says, "People use speech mostly to make and receive calls in vehicles. Drivers are also using it for directions, sending messages, playing music, and getting information from their phone or the web."

"The most common uses of embedded speech in the car include voice dialing, address entry, and media management," Schalk adds. "For CarPlay and Android Auto, speech input is commonly used for destination entry--addresses, business names, categories, and more. Voice dialing and voice texting are common uses, as well as enhanced media management. Spoken requests for information are handled by Apple's or Google's personal assistant, depending on the driver's type of phone."

For some, it's hard to imagine that all this technology in the car doesn't amount to its own kind of distraction, but Gilbert argues that speech recognition is part of the solution: "The primary goal of the driver is driving. Speech is a secondary task, at best. Therefore, driver distraction is a major issue for all drivers. Speech is a way to reduce driver distraction. Some studies suggest otherwise, but there are ways to design high-quality speech interfaces that can limit distraction."

In fact, studies have been done on this topic. Research from AAA suggests that IVAs in the car might not be the cure-all some might have hoped for. The AAA Foundation for Traffic Safety found "potentially unsafe mental distractions can persist for as long as 27 seconds after dialing, changing music, or sending a text using voice commands." That means, even if all you're doing is asking your car to send a text or find you directions, you still might be taking your mind off the road. The study also found that even the least distracting systems distracted drivers enough that they remained impaired for more than 15 seconds after completing a task.

Let's face it, IVAs aren't perfect. They often fail to understand us, creating frustration that might lead to further distraction when you're on the road.

That being said, "It's a fact that speech input mitigates driver distraction," Schalk says. "Speech interfaces need to be designed correctly to avoid driver frustration and driver workload. And of course, the speech technology needs to work reliably. Infotainment systems have become increasingly more complex, and speech can be used to greatly simplify the user interface."

Take It to the Limit

We all know that speech recognition technology is changing quickly, but it still struggles with some of the usability issues we've been hearing about since its inception. For instance, Gilbert points to background noise as a potential pitfall, "but vehicle designs and speech recognition with advanced microphone technologies have reduced this issue."

He adds, "Speech in the car doesn't have major limitations, in my opinion, as long as it's designed and tested to limit distraction."

But research shows that users do still have some complaints. "If you pay attention to recent J.D. Power user satisfaction scores for embedded speech in the car, you get the impression that simple speech commands are difficult to recognize," says Schalk. "Speech in the car remains the most-complained-about feature in new cars.

"But the complaint rate has gone down significantly, and there is no doubt that connecting to cloud speech resources will bring more improvement. With cloud speech, drivers don't have to use structured commands, which is something that drivers don't take the time to learn. And all speech systems in the car will eventually include connectivity to cloud speech resources."

Connectivity also continues to be a stumbling block for speech technologies in the car.

"Without connectivity, the speech experience will always be limited when compared to the connected speech experience," says Schalk. "For CarPlay and Android Auto, there are no obvious usability issues to solve, and we continue to see improvement in the user interface. We have not reached conversational speech interfaces, but as soon as our mobile devices and smart speakers support truly conversational speech, so will our cars. Perhaps a few years away."

But even those of us without speech-activated services built-in can still take advantage of them, and Gilbert thinks those experiences might still be superior to what's in your dashboard. "I think what Apple does with Siri is good because you can use it hands-free, eyes-free. To me, that's the key," he says.

Gilbert adds, "I would say Google Maps is an excellent example of speech for navigating, specifically.... It's possible to use it hands-free, eyes-free, and effectively navigate just as well as looking at the screen. Google Maps gives you early warnings of turns. It speaks through a back channel when you are on the phone, so you can get directions even when you are talking to someone. This is my best example of effective speech in the vehicle."

But Nuance's Dragon Drive is also making headway in the automotive sector. It's a more conversational interface that can process natural voice patterns, and its biometric capabilities can recognize which user is speaking and respond accordingly. Dragon Drive also has a daily update feature that learns your road behaviors and preferences and then updates to meet your needs.

Schalk says, "The speech vendors that lead innovation when it comes to speech technology in the car include Google, Nuance, Amazon, and Apple--and perhaps in that order. Google has the best speech-to-text technology and has a wealth of audio data from the car via Android Auto. Google has recently developed Android Automotive OS, an operating system for the vehicle head unit, which includes infotainment and other vehicle components such as climate control. Rich with voice control, the newly announced operating system is already being showcased by Volvo and allows for in-vehicle control [and] all Android Auto capabilities, including home control like the Google Assistant supports."

The Future of Speech in the Car

As consumer demand drives innovation and availability of in-car speech-activated systems, it seems there is plenty to look forward to--and plenty of opportunity--in this market.

"Over the next year or so, we will see more Alexa implementations in the car that will work similarly to Alexa home devices and are also integrated with the vehicle for controlling the climate, phone dialing, destination entry, and media management," says Schalk. "Within the past year Mercedes and BMW began offering their own personal assistants (powered by Nuance), and we should expect more car makers to follow suit. But we can expect something even more significant from Google that's built into the car and goes beyond Android Auto--drivers won't even have to have their phones in the car."

Gilbert, on the other hand, alludes to the autonomous car revolution. As we inch closer to a future where cars drive themselves and talk to us, those of us who grew up with Knight Rider can't help but imagine what possibilities might be next.
COPYRIGHT 2019 Information Today, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2019 Gale, Cengage Learning. All rights reserved.

Article Details
Title Annotation:AUTOMOTIVE
Author:Cramer, Theresa
Publication:Speech Technology Magazine
Date:Jun 22, 2019
Words:1777
