Model predicts cognitive decline due to Alzheimer’s, up to two years out
by Rob Matheson for MIT News

A new model developed at MIT can help predict whether patients at risk for Alzheimer’s disease will experience clinically significant cognitive decline due to the disease, by predicting their cognition test scores up to two years in the future.

The model could be used to improve the selection of candidate drugs and participant cohorts for clinical trials, which have been notoriously unsuccessful thus far. It would also let patients know they may experience rapid cognitive decline in the coming months and years, so they and their loved ones can prepare.

Pharmaceutical firms have injected hundreds of billions of dollars into Alzheimer’s research over the past two decades. Yet the field has been plagued by failure: Between 1998 and 2017, there were 146 unsuccessful attempts to develop drugs to treat or prevent the disease, according to a 2018 report from the Pharmaceutical Research and Manufacturers of America. In that time, only four new medicines were approved, and only to treat symptoms. More than 90 drug candidates are currently in development.

Studies suggest that greater success in bringing drugs to market could come down to recruiting candidates who are in the disease’s early stages, before symptoms are evident, which is when treatment is most effective. In a paper to be presented next week at the Machine Learning for Healthcare conference, MIT Media Lab researchers describe a machine-learning model that can help clinicians zero in on that specific cohort of participants.

They first trained a “population” model on an entire dataset that included clinically significant cognitive test scores and other biometric data from Alzheimer’s patients, as well as healthy individuals, collected at semiannual doctor’s visits. From the data, the model learns patterns that can help predict how the patients will score on cognitive tests taken between visits. For new participants, a second model, personalized for each patient, continuously updates score predictions based on newly recorded data, such as information collected during the most recent visits.

Experiments indicate that accurate predictions can be made looking ahead six, 12, 18, and 24 months. Clinicians could thus use the model to help select at-risk participants for clinical trials who are likely to demonstrate rapid cognitive decline, possibly even before other clinical symptoms emerge. Treating such patients early on may help clinicians better track which antidementia medicines are and aren’t working.

“Accurate prediction of cognitive decline from six to 24 months is critical to designing clinical trials,” says Oggi Rudovic, a Media Lab researcher. “Being able to accurately predict future cognitive changes can reduce the number of visits the participant has to make, which can be expensive and time-consuming. Apart from helping develop a useful drug, the goal is to help reduce the costs of clinical trials to make them more affordable and done on larger scales.”

Joining Rudovic on the paper are Yuria Utsumi, an undergraduate student, and Kelly Peterson, a graduate student, both in the Department of Electrical Engineering and Computer Science; Ricardo Guerrero and Daniel Rueckert, both of Imperial College London; and Rosalind Picard, a professor of media arts and sciences and director of affective computing research in the Media Lab.

Population to personalization

For their work, the researchers leveraged the world’s largest Alzheimer’s disease clinical trial dataset, the Alzheimer’s Disease Neuroimaging Initiative (ADNI).
The dataset contains data from around 1,700 participants, with and without Alzheimer’s, recorded during semiannual doctor’s visits over 10 years. The data include their Alzheimer’s Disease Assessment Scale-cognitive subscale (ADAS-Cog13) scores, the most widely used cognitive metric for clinical trials of Alzheimer’s disease drugs. The test assesses memory, language, and orientation on a scale of increasing severity up to 85 points. The dataset also includes MRI scans, demographic and genetic information, and cerebrospinal fluid measurements.

In all, the researchers trained and tested their model on a sub-cohort of 100 participants, each of whom made more than 10 visits, had less than 85 percent missing data, and had more than 600 computable features. Of those participants, 48 were diagnosed with Alzheimer’s disease. But the data are sparse, with different combinations of features missing for most of the participants.

To tackle that, the researchers used the data to train a population model powered by a “nonparametric” probability framework called Gaussian processes (GPs), which has flexible parameters to fit various probability distributions and to process uncertainties in data. The technique measures similarities between variables, such as patient data points, to predict a value for an unseen data point, such as a cognitive score. The output also contains an estimate of how certain the model is about each prediction. The model works robustly even when analyzing datasets with missing values or lots of noise from different data-collection formats.

But in evaluating the model on new patients from a held-out portion of participants, the researchers found its predictions weren’t as accurate as they could be. So they personalized the population model for each new patient. The system then progressively filled in data gaps with each new patient visit and updated the ADAS-Cog13 score prediction accordingly, by continuously updating the previously unknown distributions of the GPs. After about four visits, the personalized models significantly reduced the error rate in predictions, and they outperformed various traditional machine-learning approaches used for clinical data.
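The paper itself contains no code, but the two-stage idea can be illustrated with a short sketch. Below, a population Gaussian process regressor is fit on pooled visit data and then “personalized” by refitting on a single patient’s own visits while reusing the population kernel; the predictive standard deviation plays the role of the uncertainty estimate described above. The feature layout, kernel choice, and toy data are illustrative assumptions using scikit-learn, not the authors’ implementation.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes each row is one visit's feature vector and the target is the
# ADAS-Cog13 score at a future visit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# Toy stand-in for pooled population data: 200 visits, 5 features each.
X_pop = rng.normal(size=(200, 5))
y_pop = X_pop @ rng.normal(size=5) + rng.normal(scale=2.0, size=200)

# Population model: a GP with an RBF kernel plus a noise term. The GP
# returns both a predicted score and an uncertainty estimate.
kernel = ConstantKernel() * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
population_gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
population_gp.fit(X_pop, y_pop)

# Personalization (simplified): refit the GP on one patient's own visits,
# keeping the kernel hyperparameters learned from the population fixed.
X_patient = rng.normal(size=(4, 5))          # four visits observed so far
y_patient = X_patient @ rng.normal(size=5)   # their ADAS-Cog13 scores
personal_gp = GaussianProcessRegressor(
    kernel=population_gp.kernel_, optimizer=None  # skip re-optimization
)
personal_gp.fit(X_patient, y_patient)

# Predict the score at the next visit, with an uncertainty estimate.
x_next = rng.normal(size=(1, 5))
mean, std = personal_gp.predict(x_next, return_std=True)
print(f"predicted ADAS-Cog13: {mean[0]:.1f} +/- {std[0]:.1f}")
```

As the patient accumulates visits, the personalized fit conditions on more of their own data, which is why its error drops after about four visits in the researchers’ experiments.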
Learning how to learn

But the researchers found the personalized models’ results were still suboptimal. To fix that, they invented a novel “metalearning” scheme that learns to automatically choose which type of model, population or personalized, works best for any given participant at any given time, depending on the data being analyzed. Metalearning has been used before in computer vision and machine translation to learn new skills or adapt to new environments rapidly from a few training examples. But this is the first time it has been applied to tracking the cognitive decline of Alzheimer’s patients, where limited data is a main challenge, Rudovic says.

The scheme essentially simulates how the different models perform on a given task, such as predicting an ADAS-Cog13 score, and learns the best fit. During each visit of a new patient, the scheme assigns the appropriate model based on the previous data. For patients with noisy, sparse data during early visits, for instance, population models make more accurate predictions. When patients start with more data or collect more through subsequent visits, however, personalized models perform better. This selection reduced the prediction error rate by a further 50 percent; a minimal sketch of such a selector appears at the end of this article.

“We couldn’t find a single model or fixed combination of models that could give us the best prediction,” Rudovic says. “So, we wanted to learn how to learn with this metalearning scheme. It’s like a model on top of a model that acts as a selector, trained using metaknowledge to decide which model is better to deploy.”

Next, the researchers hope to partner with pharmaceutical firms to implement the model in real-world Alzheimer’s clinical trials. Rudovic says the model can also be generalized to predict various metrics for Alzheimer’s and other diseases.
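As for the selector Rudovic describes, here is a minimal sketch under a strong simplifying assumption: instead of a learned metamodel, it just compares each model’s error on the patient’s visits so far. The function name and the comparison rule are invented for illustration; the published scheme learns this decision from metaknowledge rather than a fixed error comparison.

```python
# Simplified stand-in for the metalearning selector -- the published scheme
# *learns* which model to trust; here we merely compare each model's mean
# absolute error on the patient's visits observed so far. The names
# population_gp, personal_gp, X_patient, y_patient, and x_next refer to
# the earlier sketch.
import numpy as np

def select_model(models, X_seen, y_seen):
    """Return the name of the model with the lowest mean absolute error
    on this patient's visits so far. With few visits the population model
    tends to win; as visits accumulate, the personalized model takes over,
    mirroring the behavior described above."""
    errors = {
        name: float(np.mean(np.abs(model.predict(X_seen) - y_seen)))
        for name, model in models.items()
    }
    return min(errors, key=errors.get)

# Example usage at a new visit:
# models = {"population": population_gp, "personal": personal_gp}
# best = select_model(models, X_patient, y_patient)
# next_score = models[best].predict(x_next)
```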