Building fluidity into machines.
Starting with the interaction, there are many complexities that exist when trying to make machines speak with the same fluidity as humans. Today, speech technologists rely on natural language understanding (NLU) technology to accomplish this goal. Our cover story by Staff Writer Michele Masterson, "How Natural Is NLU?" (page 14), delves into the latest advances and expectations of NLU. It's come a long way in recent years. Apple has proven this with the release of its speech-enabled digital assistant application, Siri, on the iPhone 4S. Not to be outdone, IBM flexed its speech technology muscles when Watson, the company's talking machine, used NLU to beat two Jeopardy! game-show champions.
While these technologies have piqued many consumers' interest, there's certainly a lot of room for improvement. In fact, in his recent book, The Voice in the Machine, Roberto Pieraccini aptly points out the challenges of modeling speech-enabled machines after human speech, something we don't yet fully understand ourselves. He suggests that comparing talking machines to humans is analogous to comparing airplanes to eagles. To see his explanation, read "The Great Divide" (page 20), which we've excerpted from his book. His perspective will help speech technology buyers and builders get the most out of the technology by setting appropriate expectations.
Despite its limitations, however, the technology continues to mature. Recently, for example, there have been developments in emotion detection. During the natural course of a conversation, it's common for someone to get emotional, especially on a customer service call. Speech-enabled machines haven't traditionally been very good at picking up on these emotional cues from callers, which can add to a caller's frustration. Fortunately, the W3C Multimodal Interaction Working Group is making progress, having introduced the first draft of the EMMA 1.1 specification in February. Updates include support for human annotation and better integration with Emotion Markup Language. Not only will emotion detection improve customer interactions, but the specification also enables organizations to detect emotions across multiple modalities and channels, helping them make better use of newer communication channels.
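To make this concrete, here is a rough sketch of what an emotion-annotated recognition result might look like when EMMA markup carries an Emotion Markup Language annotation alongside a caller's transcribed utterance. The structure below follows the published EMMA 1.0 and EmotionML vocabularies as I understand them; the specific attribute values, confidence scores, and the placement of the emotion element inside the interpretation are illustrative assumptions, not excerpts from the EMMA 1.1 draft.

```xml
<!-- Illustrative sketch: a speech recognition result annotated with emotion.
     Values and element placement are assumptions for illustration only. -->
<emma:emma version="1.0"
           xmlns:emma="http://www.w3.org/2003/04/emma"
           xmlns:emo="http://www.w3.org/2009/10/emotionml">
  <emma:interpretation id="interp1"
                       emma:medium="acoustic"
                       emma:mode="voice"
                       emma:confidence="0.82"
                       emma:tokens="I have been on hold for twenty minutes">
    <!-- Hypothetical EmotionML annotation attached to the utterance -->
    <emo:emotion>
      <emo:category name="anger" value="0.7"/>
    </emo:emotion>
  </emma:interpretation>
</emma:emma>
```

In a multichannel deployment, the same envelope could describe input from a chat or touch interaction by changing the medium and mode attributes, which is what lets one annotation format span modalities.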
This brings me to my second point about fluidity. There's no denying that customers are using multiple channels when communicating with companies. Sue Ellen Reager's column, "Blasts Heard Round the Globe" (page 33), points out that text analytics and market research company Decooda can send out personalized voice messages to prospects and customers across multiple channels. Clearly, multimodal and multichannel technologies have moved out of the hype cycle and into practice. Our feature story, "Creating Fluidity Between Channels" (page 24), by Michele Masterson, underscores this shift and the need for multichannel customer support. Essentially, it's not enough to make the interaction as fluid as possible; organizations must also enable customers to move across channels freely without data loss. It's an area that will undoubtedly receive a lot more attention moving forward.
@dmyron on Twitter
Editor's Letters, Speech Technology Magazine, May 1, 2012