Developing tech to spot risk of dementia
An effort to use voice-assistant devices like Amazon’s Alexa to detect signs of memory problems has gotten a boost from a federal grant.
Researchers from Dartmouth-Hitchcock and the University of Massachusetts, Boston will receive a four-year, $1.2 million grant from the National Institute on Aging. The team hopes to develop a system that uses machine-learning and deep-learning techniques to detect changes in speech patterns and determine whether someone is at risk of developing dementia or Alzheimer’s disease.
“We are tackling a significant and complicated data-science question: whether the collection of long-term speech patterns of individuals at home will enable us to develop new speech-analysis methods for early detection of this challenging disease,” said Xiaohui Liang, an assistant professor of computer science at the University of Massachusetts, Boston, in a statement.
“Our team envisions that the changes in the speech patterns of individuals using the voice assistant systems may be sensitive to their decline in memory and function over time.”
John Batsis, a member of the team and associate professor of medicine at the Geisel School of Medicine at Dartmouth, said the system would help families better plan for care should someone develop a cognitive impairment.
“Alzheimer’s disease and related dementias are a major public health concern that lead to high health costs, risk of nursing home placement, and an inordinate burden on the whole family,” Batsis said. “The ability to plan in the early stages of the disease is essential for initiating interventions and providing support systems to improve patients’ everyday function and quality of life.”
Privacy, language barriers exist
Batsis acknowledged that the approach is novel and that challenges lie ahead in developing a system he and the other researchers plan eventually to test in people’s homes.
The system, in theory, would aim to pick up changes in a person’s speech patterns, intonation and lexicon, he said. But researchers would also have to figure out how to make it work across many languages, when multiple people are speaking in the same room, or when someone mumbles or doesn’t speak clearly.
“These are all pragmatic and practical issues,” Batsis said.
Should a system one day be sold commercially, the researchers envision that patients, family members or caregivers would choose to enable it on their voice assistant.
“A huge challenge is that of privacy,” he said. “You need to think about these things. Older adults who may be at risk, or whose family members are concerned about this, need to have buy-in for that.”
Several experts who were not part of the research welcomed its focus.
“Imagine if we had another tool to help diagnose this, and if that tool helped us detect it early,” said Alicia Nobles, an assistant professor in the Department of Medicine at the University of California, San Diego, and co-founder of the Center for Data-Driven Health at the Qualcomm Institute, in an email. She noted that detecting impairments early may be “crucial” to helping patients and their caregivers manage their care.
Sarah Lenz Lock, the senior vice president for policy at AARP and the executive director of the Global Council on Brain Health, also said the research looked promising.
“We need to assure that people’s privacy is maintained through the expanded use of technology in this way,” she said. “But speech patterns present a promising area for early screening of cognitive decline.”
—AP