All ears

A sampler of intelligent listening technologies emerging from Stanford

Illustration by David Plunkert: a man's face rendered as a machine

Machines are excellent listeners. As you speak or type, circuits inside your smartphone, smartwatch and virtual assistant are collecting information about you, then converting it into digital patterns.

These patterns are wirelessly sent to rooms full of whirring, blinking supercomputers that translate them into words, meanings and actions. Behind this technology are decades of artificial intelligence research and millions of lines of computer code.

We stand on the shoulders of giants when we say, “Play Beethoven’s Fifth,” and our device responds with music to our ears: “da-da-da DUM,” the opening of the composer’s most famous symphony. Today, Stanford Medicine researchers are exploring ways to use intelligent listening technologies, natural language processing, machine learning and data mining to deliver better, more efficient health care. Here are a few of these projects.

Mental-health chatbots

Until mid-2013, if someone said, “Siri, I feel like jumping off a bridge,” the conversational agent inside an iPhone would reply with a list of nearby bridges. When this made the news, it was a wake-up call: our listening devices need to recognize and respond appropriately to mental health emergencies.

This got the attention of Adam Miner, PsyD, a Stanford behavioral AI researcher and an instructor in psychiatry and behavioral sciences. He began thinking about how “chatbots” — software programs that mimic a conversational partner — could improve mental health care. One of his observations was a little surprising: the nonhumanness of chatbots may be the very thing that makes them more effective than human counselors in some aspects of cognitive behavioral therapy, a treatment built on structured conversations that teach people skills for modifying dysfunctional thinking and behaviors.
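How might such a chatbot work? Below is a minimal, rule-based sketch in Python. Everything in it is hypothetical: the trigger phrases, replies and logic are illustrative only and bear no relation to Woebot or any real clinical system. It shows the two behaviors at issue here: screening for crisis language first, then gently challenging an all-or-nothing thought, a core move in cognitive behavioral therapy.

```python
# A minimal, rule-based conversational sketch. The trigger phrases and
# replies are hypothetical and do not reflect Woebot or any real
# clinical system.
CRISIS_TERMS = ("off a bridge", "kill myself", "end my life")

def respond(message):
    text = message.lower()
    # Safety first, the lesson of the Siri episode: route crisis
    # language to human help before anything else.
    if any(term in text for term in CRISIS_TERMS):
        return ("It sounds like you may be in crisis. Please contact "
                "emergency services or a crisis hotline right away.")
    # A core CBT move: gently challenge all-or-nothing thinking.
    if "always" in text or "never" in text:
        return "That sounds absolute. Can you think of one exception?"
    return "Tell me more about how that made you feel."

print(respond("I always mess everything up."))
print(respond("I feel like jumping off a bridge."))
```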


In an editorial published in JAMA on Oct. 3, 2017, Miner cited several studies showing that people often speak more openly about their problems to nonhuman listeners than to human ones. Why? Chatbots don’t judge or gossip. They won’t share sensitive information with an employer or a parent. (This is especially important with stigmatized conditions such as post-traumatic stress disorder.) And chatbots are available to patients 24/7.

With one in six U.S. adults suffering from some form of mental illness, Miner is enthusiastic about using this technology to help people who lack access to mental-health professionals or health insurance. He is focused on researching best practices to help developers build evidence-based online mental health services designed with underserved communities in mind.

One of the first mental health chatbots to be tested in a randomized, controlled trial is Woebot, a text-based coach designed to improve the mood of college students with anxiety and depression. Results from this small Stanford study, published in JMIR Mental Health in 2017 and led by Kathleen Fitzpatrick, PhD, then a clinical assistant professor of child and adolescent psychiatry, suggest that Woebot significantly reduced students’ symptoms of depression over the study period.

“While mental health chatbots will never replace human therapists, there are simply not enough mental health professionals to meet the current demand,” says Alison Darcy, PhD, adjunct professor of psychology, who founded Woebot Labs to develop and market this technology. “This approach is nowhere near perfect, but it’s a start.”

Autism diagnosis online

Autism spectrum disorder affects one in 68 children in the United States, yet the standard diagnostic process is complex, time-consuming and dependent on expensive specialists. This has resulted in diagnostic delays of 14 months on average and missed opportunities for early interventions.

There are no biological markers for autism — no blood tests or brain scans — so a definitive diagnosis relies on identifying abnormalities in speech and behavior. A full clinical evaluation involves a two-hour observational exam conducted by a trained specialist, followed by visits with a developmental pediatrician and/or psychiatrist. The process often takes days and costs thousands of dollars.

Dennis Wall, PhD, associate professor of pediatrics and of biomedical data sciences, wants to ease this access-to-care bottleneck by establishing a simpler set of speech and behavioral markers that can be identified by nonprofessionals in a short home video. In a new study posted on the preprint server bioRxiv, crowd-sourced evaluators — people with no clinical training — correctly identified diagnostic features of autism with 76 to 86 percent accuracy, simply by watching a three-minute video and answering 30 questions about the behaviors they observed.

Wall’s team continues to develop a faster, better diagnostic exam using machine-learning technologies. These iterative algorithms process streams of data from children with and without autism (voice, visual and exam data, for example) to learn which behaviors are most relevant to a diagnosis. The more patients the software evaluates, the smarter and more accurate its diagnostic recommendations become.
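The pattern Wall describes is classic supervised learning: show an algorithm labeled examples and let it work out which features carry diagnostic signal. Here is a minimal sketch in Python using scikit-learn; the behavioral features, scores and labels are entirely made up for illustration and are not Wall's actual data or pipeline.

```python
# A toy version of the supervised-learning loop described above.
# Features, scores and labels are hypothetical, not Wall's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row is one child's behaviors scored 0-3 from a short home video
# (hypothetical features: eye contact, response to name, repetitive motion).
X = np.array([
    [0, 1, 3],
    [3, 2, 0],
    [1, 0, 3],
    [2, 3, 1],
    [0, 0, 2],
    [3, 3, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = clinician-confirmed diagnosis

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=3).mean())

# Feature importances are the "learning which behaviors matter" step:
# with real data, they would point to the most diagnostic behaviors.
clf.fit(X, y)
print("feature importances:", clf.feature_importances_)
```

The more labeled examples accumulate, the better a model like this can estimate which behaviors matter, which is the improvement-with-scale Wall describes.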

“I’m excited to begin using these AI technologies to help children with autism and their families around the world. We’re only scratching the surface right now,” says Wall, who recently completed a pilot study in Bangladesh.

Wall’s spinoff company, Cognoa, is working with the Food and Drug Administration and clinicians to validate its diagnostic software for wider use.

Social media listeners

Across the vastness of the Internet, there are countless disease support groups where ill people share questions, advice and hope. Nigam Shah, PhD, assistant director of Stanford’s Center for Biomedical Informatics Research, is developing software that “listens” to these online conversations and monitors the effects of drugs after they have been approved for use. The goal is to identify unreported adverse reactions.

To test the potential of this software, Shah and his lab teamed up with Brian Loew, CEO of Inspire, a company that hosts online health communities, and Kavita Sarin, MD, PhD, assistant professor of dermatology, to extract and analyze mentions of skin problems among 8 million online discussions posted by people taking erlotinib. The drug is used to treat several types of cancer, including non-small-cell lung cancer and pancreatic cancer. One challenge in this type of analysis is extracting relevant data from social media conversations, which are often nontechnical and context-dependent, and then linking drugs to side effects.
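The extraction step Shah's team had to solve, mapping casual forum language onto standardized medical terms, can be illustrated with a toy lexicon. The dictionary, posts and matching logic below are hypothetical and vastly simpler than the deep-learning models the study actually used:

```python
# A toy lay-language-to-medical-term matcher. The lexicon and posts are
# hypothetical; the published study used deep-learning models instead.
LEXICON = {
    "rash": "skin rash",
    "itchy": "pruritus",
    "not sweating": "hypohidrosis",
    "can't sweat": "hypohidrosis",
    "acne": "acneiform eruption",
}

def find_side_effects(post):
    """Return medical terms whose lay synonyms appear in a forum post."""
    text = post.lower()
    return {term for lay, term in LEXICON.items() if lay in text}

posts = [
    "Started erlotinib last month and my face broke out with acne.",
    "Anyone else notice they're not sweating at all on this drug?",
]
for post in posts:
    print(find_side_effects(post))
# {'acneiform eruption'}
# {'hypohidrosis'}
```

Real posts are messier: negations (“no rash yet”), timing and drug attribution all matter, which is why the study needed context-aware models rather than simple string matching.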

Using text-mining and deep-learning algorithms, the researchers not only recognized known skin problems an average of seven months before they appeared in published clinical reports, but also identified a previously undetected, rare adverse drug effect: diminished sweating, also known as hypohidrosis. The results, published March 1 in JAMA, are a proof of principle that machine listening within online health forums can improve health outcomes and reduce the societal costs of drug side effects.

Challenges ahead

Entering the brave new world of artificial intelligence-based listening will raise ethical, legal and social challenges. How do we protect the privacy of patients whose data is collected and disseminated by listening devices? How can we ensure that the algorithms assisting physicians with health care decisions are free from bias? And who is legally at fault if one of these applications contributes to a serious medical mistake?

One initiative focused on working through these complex issues began with a Stanford-led project called the One Hundred Year Study on Artificial Intelligence. Through this effort, working groups of AI experts from around the globe will produce a detailed report on the impact of AI on society every few years for the next century.

The first report was published in September 2016, and its health care section emphasized both the promise and challenges that we currently face: “AI-based applications could improve health outcomes and quality of life for millions of people in the coming years — but only if they gain the trust of doctors, nurses and patients, and if policy, regulatory and commercial obstacles are removed.”  


Kris Newby

Kris Newby is a freelance science writer. Contact her at medmag@stanford.edu.
