
Audience Introduces NUE - Natural User Experience Inspired by Neuroscience.

Spain, March 4 -- Audience, Inc. (NASDAQ: ADNC), the leader in Advanced Voice and a pioneer in Multisensory processing and Natural User Experience (NUE) technology for consumer devices, today announced the first member of the NUE family of Multisensory processors -- the N100. NUE products are targeted at smartphones, tablets, wearables, and IoT devices, with the first N100 devices expected to be available for sampling in mid-2015. NUE Multisensory technology is designed to derive intelligence about you and your environment from the exploding quantity of sensor data available on modern devices -- enabling true contextual awareness -- so your device can provide insight and awareness in your daily life.

"We envision a world where people naturally interact with their devices, and in an enhanced way with the world around them, based on the insight and awareness that multisensory processing will provide. This vision is driving Audience's development of NUE technology, delivered in the form of Multisensory processors and software," said Peter Santos, President and CEO, Audience. "We designed NUE Multisensory technology to derive intelligence and context from sensor and microphone data about you and your environment, to deliver enhanced context that is useful and always-on, while preserving battery life."

About the NUE N100 Multisensory Processor

The N100 will be the first Audience product to enable a full-fledged Multisensory experience. It will incorporate Audience VoiceQ and MotionQ technology to enable applications with context awareness. Achieving context awareness involves layering sophisticated algorithms on top of the N100's sensory intelligence -- interpreting sensor and microphone data to deduce higher-level information. The N100 will be able to interpret context, such as "the user is running," from a series of characteristic voice and motion features collected by the sensors.
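The layering described above -- low-level features in, a high-level context label out -- can be sketched as a simple rule-based classifier. The feature names and thresholds below are illustrative assumptions for the sketch, not Audience's actual algorithm:

```python
# Hypothetical sketch of a context layer mapping low-level sensor features
# to a high-level label such as "running". Feature names and thresholds
# are assumptions; the real MotionQ/VoiceQ classifiers are not public.

def infer_context(features):
    """Return a coarse activity label from motion/voice features."""
    step_rate = features.get("step_rate_hz", 0.0)    # from accelerometer
    accel_var = features.get("accel_variance", 0.0)  # motion energy
    speech = features.get("speech_detected", False)  # from microphone

    if step_rate > 2.5 and accel_var > 1.0:
        return "running"
    if step_rate > 0.5:
        return "walking"
    if speech:
        return "stationary, in conversation"
    return "stationary"

print(infer_context({"step_rate_hz": 3.0, "accel_variance": 2.4}))
# prints "running"
```

A production classifier would be statistical rather than threshold-based, but the partition is the same: cheap feature extraction near the sensors, interpretation above it.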

VoiceQ Technology

VoiceQ is Audience's hands-free voice recognition technology that notifies the device to take action when a secure key word is spoken. The N100 VoiceQ implementation works in three stages. Using a low-power, Always-On voice activity detector, the N100 continuously listens for voice signals while staying in an ultra-low power mode. Upon voice detection, the incoming signal is compared to parameter-defined key phrases, or triggers. During these initial stages, only the N100 and digital microphone are awake. All other components in the device are in low-power sleep mode. When the key phrase is detected, the N100 wakes up the device, indicating the user's intent to talk with the device via a voice user interface. VoiceQ on the N100 is designed to support up to five keywords, which can be a combination of user-selected or OEM-chosen keywords.
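The staged flow above -- voice activity detection, then key-phrase matching, then waking the host -- can be sketched as a small state machine. The state names, phrase list, and matching logic are assumptions for illustration; the actual N100 firmware is not public:

```python
# Illustrative state machine for a staged wake pipeline: a low-power voice
# activity detector gates a key-phrase matcher, which gates the host wake.
# Only when a key phrase matches is the rest of the device woken.

KEY_PHRASES = {"ok device", "hello device"}  # the N100 supports up to five

def process_frames(frames):
    """Walk audio 'frames' through the VAD and trigger stages; return events."""
    state = "LISTENING"          # ultra-low-power: only DSP + mic awake
    events = []
    for frame in frames:
        if state == "LISTENING":
            if frame.get("voice"):            # voice activity detector fires
                state = "MATCHING"
        if state == "MATCHING":
            if frame.get("phrase", "") in KEY_PHRASES:
                events.append("WAKE_HOST")    # only now wake the whole device
                state = "LISTENING"
            elif not frame.get("voice"):
                state = "LISTENING"           # false start: back to low power
    return events

frames = [{"voice": False}, {"voice": True, "phrase": "ok device"}]
print(process_frames(frames))   # prints ['WAKE_HOST']
```

The power win comes from ordering the stages by cost: the cheap detector runs continuously, and each more expensive stage runs only when the one before it fires.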

VoiceQ technology on the N100 is designed to preserve power by dramatically lowering false acceptance rates compared to previous voice trigger implementations. Each false keyword acceptance costs roughly the energy of two hours of keyword listening, or about 20 minutes of unintended phone use. The N100 is also designed to incorporate a power-optimized embedded implementation of Google hotword detection, allowing the N100 to simultaneously detect both VoiceQ and Google keywords in less than 17 MIPS.
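The equivalence claimed above can be sanity-checked with back-of-the-envelope arithmetic. The release publishes no power figures, so the numbers below are purely hypothetical assumptions chosen to show the shape of the calculation:

```python
# Hypothetical power-budget check: how many hours of Always-On listening
# does one false keyword acceptance cost? All figures are assumptions.
listen_mw = 0.5          # assumed Always-On keyword-listening power, mW
false_accept_j = 3.6     # assumed energy burned per false device wake, J

listen_j_per_hour = listen_mw / 1000 * 3600   # 1.8 J per listening hour
hours_equiv = false_accept_j / listen_j_per_hour
print(hours_equiv)
```

With these assumed figures, one false wake costs as much energy as two hours of listening -- which is why driving down the false acceptance rate matters more for battery life than shaving the listening power itself.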

MotionQ Technology

The N100 will feature the first hardware implementation of the MotionQ 1.0 library -- from the Audience acquisition of Sensor Platforms. The NUE MotionQ software is designed to include advanced algorithms, a power-conscious architecture and outstanding context awareness. In Always-On mode, the N100 is designed to continuously monitor the sensors it is connected to, recalibrating them as needed and deducing context from an intelligent mix of sensor inputs. The N100 works with sensors from a wide range of leading suppliers, supporting any sensor whose driver supports the Open Sensor Platform (OSP).
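The continuous recalibration mentioned above can be illustrated with one generic technique a sensor hub might use: estimating a gyroscope's zero-rate bias whenever the device is known to be stationary. This is a minimal sketch of that generic idea, not the MotionQ algorithm:

```python
# Generic auto-calibration sketch: exponentially average stationary gyro
# samples into a zero-rate bias estimate. Not the MotionQ implementation.

def update_gyro_bias(bias, sample, stationary, alpha=0.01):
    """Fold a new sample into the bias estimate, but only when at rest."""
    if not stationary:
        return bias                  # moving samples contain real rotation
    return tuple(b + alpha * (s - b) for b, s in zip(bias, sample))

bias = (0.0, 0.0, 0.0)
for _ in range(500):                 # device at rest on a table
    bias = update_gyro_bias(bias, (0.02, -0.01, 0.005), stationary=True)
print(tuple(round(b, 3) for b in bias))
```

The estimated bias converges toward the resting readings and can then be subtracted from live samples, which is why a hub that watches the sensors continuously can keep them calibrated without user intervention.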

The N100 MotionQ implementation is partitioned to execute low-power, critical tasks such as sensor fusion and context detection on the N100, and memory-intensive context classification on the Application Processor (AP). The N100 implements the OSP Host API, allowing it to communicate with the AP using the OSP protocol. The N100 MotionQ implementation is optimized for the Audience instruction set in the N100, with the aim of delivering best-in-class sensor hub processing, sensor fusion and auto-calibration in less than 2 MIPS.
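The hub/AP split described above can be sketched as two cooperating functions: the low-power hub fuses raw samples into a compact feature packet, and the AP runs the heavier classification on that packet. The message format and function names below are assumptions for the sketch; the real interface is defined by the OSP Host API:

```python
# Rough sketch of the hub/AP partition: cheap fusion on the sensor hub,
# memory-intensive classification on the application processor. Message
# layout and names are illustrative assumptions, not the OSP wire format.

def hub_fuse(accel, gyro):
    """On the hub: fuse raw samples into a small feature packet."""
    motion_energy = sum(a * a for a in accel)
    turn_rate = max(abs(g) for g in gyro)
    return {"motion_energy": motion_energy, "turn_rate": turn_rate}

def ap_classify(packet):
    """On the AP: memory-intensive classification (stubbed as thresholds)."""
    if packet["motion_energy"] > 4.0:
        return "vigorous motion"
    return "low motion"

packet = hub_fuse(accel=(1.0, 2.0, 0.5), gyro=(0.1, -0.2, 0.05))
print(ap_classify(packet))   # prints "vigorous motion"
```

The design point is bandwidth and power: raw sensor streams stay on the hub, and only small, already-fused packets cross to the AP, so the AP can sleep most of the time.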

For more information on Audience(R) processors and smart codecs, please go to

Published by HT Syndication with permission from India PRwire.

Copyright HT Media Ltd. Provided by SyndiGate Media Inc.

Article Details
Publication:India PRwire
Date:Mar 4, 2015
