DIAGNOSING DISEASE with Speech Analytics: It's now possible to detect everything from depression to Parkinson's disease through voice alone, but privacy concerns remain.
Their tool of choice? Voice. Medical practitioners have been using vocal analysis for diagnosis for years. The Diagnostic and Statistical Manual of Mental Disorders (DSM) has included speech sounds and language usage among its diagnostic criteria for at least 50 years (e.g., the second DSM, published in 1968, lists "talkativeness" and "accelerated speech" as two common symptoms of what was then called manic depression, now termed bipolar disorder). And that makes sense. When we produce sounds, a complex system is at work involving our lung power, energy level, physiology, and mental state--all things that can be affected by illness. We speak differently in different moods, mind-sets, and emotional states, especially those at the extreme ends of the spectrum. But what about diagnosing something like heart disease, migraines, or Parkinson's disease using only the sound of someone's voice? We couldn't possibly detect such things just by listening.
Well, of course, humans cannot. But, as with inventions like the microscope, the telescope, and the stethoscope, technology can help us to see more--or in this case, to listen better. Speech analysis is being used to assess recordings of human voices to diagnose conditions such as depression, Parkinson's, Alzheimer's, heart disease, concussions, migraines, PTSD, and even suicide risk, among others. That list is expected to grow. This may sound like a dream come true, but there are always concerns.
Alexa, What's My Diagnosis?
So what exactly is speech analysis capable of detecting? Amazon has applied for a patent to analyze speech for signs of illness, such as a sore throat, so that it may tailor ads to someone with that illness, say for a specific medicine. In other words, Alexa knows when you have a cold. On a much more serious note, we may also be closer to preventing some suicides by using speech analysis to predict suicide risk: Researchers are studying the vocal characteristics of people at risk of becoming suicidal in the hopes of developing smartphone-based apps or other software that could detect changes imperceptible to the human ear.
One of the companies helping detect disease through voice is Canary Speech. It holds several patents on mathematical models that analyze speech using some of the more than 2,500 biomarkers found in the sub-language elements of human speech--"sub-language" meaning they are independent of the specific words spoken. Any recorded speech can be analyzed, even old recordings, and the analysis requires a recording of only about 300 words.
Speech analysis could enhance a clinician's ability to listen to patients, says Canary Speech CEO Henry O'Connell, adding that "by selecting, through guided machine learning, the proper biomarkers to create the optimum model," such analysis can augment the clinician's natural senses. Canary Speech has done the heavy lifting of analyzing thousands of recordings, matched with health data, to identify those biomarkers. Canary's software can lend credence to a hunch a clinician has about a client, or draw the clinician's attention to something they hadn't noticed, such as a depressed mood that isn't apparent even to the trained eye but is picked up by software able to detect minute changes in vocal qualities. The software is not making a diagnosis, per se, but simply providing more data to the clinician.
The "spectral characteristics" that companies like Canary Speech analyze are related to how the body physically creates words, not the words themselves. Indeed, they are not related to any particular words at all. That means a recording of any casual speech can be analyzed using these biomarkers. And to take an example from Canary Speech, only 15 biomarkers (out of more than 2,500) are required for it to identify and monitor depression in patients. Clinics that wish to use this diagnostic tool would record their interactions with their clients, in their offices, and have the software scan the recording for disease biomarkers. The clinician can then add that data to other diagnostic tests to make the final diagnosis.
The ability of algorithms to detect spectral characteristics that escape the naked eye (or, in this case, ear) could allow for earlier diagnosis of illness, opening the door to potentially life-saving early treatment.
On top of diagnosing illness through speech recordings, platforms that offer a chat-like interface (think Alexa and Google Assistant) let people interact more naturally with algorithms that can monitor their health conditions from the comfort of their own homes. Imagine that you are older, living alone, and have a few medical concerns. You might not be driving much anymore. If you have an Amazon Echo or Google Assistant device in your home, you also have access to HIPAA-compliant "skills" like Livongo's Livongo for Diabetes skill, which can help a diabetic remember their latest blood glucose readings while adding health tips, or a skill by Atrium Health that can help users find and schedule appointments with urgent care providers. Phone-based apps are also in development, such as an app by Winterlight Labs that can "objectively assess and monitor cognitive health" through your smartphone.
HIPAA, Privacy, and Other Concerns
The Health Insurance Portability and Accountability Act (HIPAA) was signed into law more than two decades ago, in 1996. Title I of HIPAA concerns ensuring access to health insurance through workplace plans even when changing jobs or if one has a preexisting condition. Title II, however, focuses on electronic transmission of private health data. There are strict rules guarding the privacy of patients' health data, also called protected health information (PHI). Companies that wish to use digital versions of PHI (which includes patients' voice recordings) must comply with HIPAA standards or risk significant monetary penalties (up to $1.5 million per year for violations of an identical provision).
One company that has done so is Orbita, which specializes in creating frameworks that allow for a more conversational experience with artificial intelligence. Bill Rogers, CEO of Orbita, says that AI can increasingly "carry the load" for interactions between a caregiver and a patient by making appointments, answering health questions, or making test results accessible without adding to a clinician's already full load. Orbita has also built in the ability for its AI to escalate interactions to humans when necessary, such as when it cannot answer a patient's questions. Since Orbita is also now HIPAA-compliant, its recorded health information can be shared directly with health care providers over platforms such as Alexa or Google Assistant. This compliance means Orbita customers can develop skills similar to Boston Children's Hospital's MyChildren's Enhanced Recovery After Surgery program, which can monitor patients' recovery process and send information regarding their condition to their doctors.
Using relatively non-invasive speech analytics to detect disease may sound like a minor miracle, but it's not without its concerns. Imagine that you suffer from bouts of depression, and you are on a phone interview for a job you hope to get. Unbeknownst to you, the organization is using speech analytics to screen you for potential health conditions. The algorithm picks up on the fact that you suffer from depression, so the company decides against hiring you because it doesn't want to take the chance that you would need to use sick days. What if a company decides it doesn't want to hire people at risk for other illnesses? Can that company use phone screening to discriminate against individuals suffering from Parkinson's or heart disease? The Americans with Disabilities Act (ADA) specifically prohibits employers from conducting medical examinations on applicants before a job offer is made. Will this continue to be enough protection for job seekers as technology advances?
What if the recordings we post to social media sites today can be used to assess our health without our knowledge? Facebook itself cautions users to "think before you post." And the EU was so concerned about our social media data lingering into the future that it built a "right to be forgotten" into the General Data Protection Regulation (GDPR), implemented in 2018.
Speech Analysis Skeptics
The first pair of glasses, the first blood test, the first vaccines. People were skeptical of all these in the beginning. (Some remain skeptical of what most of us consider routine medical care.) Using speech analytics as a diagnostic tool might also raise some suspicions--perhaps among those who remember the revelation from only two years ago that Facebook was using algorithms to assess the emotional states of teenagers to better target them with ads.
The companies behind these technologies know that there are plenty of privacy concerns, and they often take them into account during product development. Rogers says that Orbita builds patient privacy concerns into every step of its platform creation by ensuring that all of the policies and procedures in the current regulatory environment (such as HIPAA) are followed during the development of the code itself.
And Canary Speech is also addressing these privacy concerns within its software, says O'Connell. The app encrypts and transmits a recording immediately and then erases it from the device, adding another layer of security.
O'Connell says that each clinic that uses Canary's app is a "standalone" island--meaning doctors and nurses from one office cannot see what is happening at the other clinics using the app. Even at the master level--in which Canary Speech app system administrators might have to get into the app to debug or fix something--the information they are able to see has been de-identified (without names, addresses, dates of birth, or Social Security numbers).
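The de-identification O'Connell describes can be sketched as a simple field-stripping step before any administrator sees a record. The field names and record shape below are hypothetical, chosen to mirror the identifiers the article lists (names, addresses, dates of birth, Social Security numbers); they do not reflect Canary Speech's actual data model.

```python
# Fields treated as identifying under a HIPAA-style de-identification
# policy (hypothetical field names, for illustration only).
PHI_FIELDS = {"name", "address", "date_of_birth", "ssn"}

def deidentify(record):
    """Return a copy of a patient record with identifying fields removed,
    leaving only the payload a system administrator would need to see."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

record = {
    "name": "Jane Doe",
    "date_of_birth": "1950-01-01",
    "ssn": "000-00-0000",
    "address": "1 Main St",
    "clinic_id": "clinic-042",
    "biomarker_scores": [0.12, 0.87, 0.33],
}
clean = deidentify(record)
# Identifying fields are gone; the clinical payload remains.
assert "ssn" not in clean and "biomarker_scores" in clean
```

In practice, de-identification involves more than dropping columns (quasi-identifiers can still re-identify people in combination), but the sketch captures the basic idea: the debugging view and the identifying data are kept separate by construction.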
Improving Care and Planning for Problems
Early diagnosis of illness can often lead to better disease outcomes. Even if there is no early treatment, an early diagnosis can lead to better tracking over the course of the disease, which may lead to innovative treatments in the future. O'Connell points out that the amount of "face time" patients get with their clinicians is relatively little. But this technology has the capacity to capture a wide range of information, over a long time period, often recorded from the comfort of the patients' own homes, and help a clinician get to know a patient better.
For example, an algorithm that can analyze speech for signs of pulmonary disease, such as COPD, can be gathering data on the disease's progression by having "conversations" with the patient through his Amazon Echo. With this information, a doctor can monitor a patient in near real time whom she might otherwise see every few months for only a few minutes. In fact, the future may bring entirely remote doctor "visits." This could lead to vastly improved health outcomes for people living in rural or otherwise hard-to-reach areas around the globe.
Canary Speech technology is also being used in drug trials to help researchers follow the progression of disease over time. This has the promise of offering near-real-time information on the efficacy of treatment, potentially leading to shorter wait times for new drugs or treatments to be offered to the public (though this is still just speculation).
These feats are nothing less than amazing. Yet we cannot ignore the ethical questions. A recent paper that appeared in the journal NPJ Digital Medicine titled "Data Mining for Health: Staking out the Ethical Territory of Digital Phenotyping," by Martinez-Martin, Insel, Dagum, Greely, and Cho, suggests that "stakeholders, including software developers, healthcare, patients, consumers, and other institutions, will need to be involved in the creation of standards and best practices that adequately address the ethical challenges raised here." In other words, problems will arise, and we need to be prepared. Making sure we get this right is paramount, as the technology has the ability to literally save lives.
BY BRIAN CHEVALIER
Brian Chevalier is a freelance writer based in Massachusetts.
Publication: Speech Technology Magazine
Article Type: Cover story
Date: Jun 22, 2019