When it comes to teasing out the earliest whiffs of cognitive impairment, nothing beats a comprehensive in-person exam at the memory clinic … right? Well, that may be changing. New findings presented at the Clinical Trials in Alzheimer’s Disease conference, held November 4-7, suggest that performance on smartphone tests tracks closely with more traditional paper-and-pencil tests taken in the clinic, and aligns with AD biomarkers. In fact, repeated mobile tests may be particularly sensitive detectors of deficits in learning—one of the earliest harbingers of cognitive decline in the long preclinical phase.
- Mobile memory test results closely match standardized in-clinic tests.
- Performance on smartphone tests correlates with AD biomarkers.
- Repeated mobile testing picks up learning deficits, an early harbinger of cognitive decline.
COVID-19 has thrust the need for remote assessments into sharp relief. But remote tests of cognition and health will prove essential for clinical research long after the pandemic has passed, Jeffrey Kaye of Oregon Health & Science University in Portland said in his keynote address. For one thing, older people have many other reasons to stay home—toxic air from wildfire smoke among them, Kaye noted.
Going remote offers a plethora of advantages, such as enrolling larger, more diverse groups of participants in clinical studies, conducting more frequent longitudinal assessments, and making measurements more sensitive. The latter could boost the signal-to-noise ratio of cognitive outcomes in treatment studies, Kaye believes, allowing for smaller, shorter trials.
At CTAD, researchers showed results of their work to validate smartphone tests of memory in established observational cohorts, compare them head-to-head with tests in the clinic, and exploit opportunities inherent in frequent smartphone testing.
Emrah Düzel of the German Center for Neurodegenerative Diseases in Magdeburg compared widely used in-clinic neuropsychological tests to a smartphone memory test. Created by neotiv, a German company co-founded by Düzel, the smartphone test was designed to detect subtle deficits in episodic long-term recall. This type of memory relies on circuitry in the entorhinal cortex and hippocampus, and begins to falter in the preclinical stages of AD. In the clinic, the Free and Cued Selective Reminding Test (FCSRT) and parts of the Preclinical Alzheimer’s Cognitive Composite (PACC) put this form of memory to the test. In neotiv’s objects-in-room recall (ORR) smartphone test, participants see a series of 25 three-dimensional rooms, each containing two unique objects. Immediately after viewing each room, participants are asked to select, from a group of three objects, which one belongs in a designated spot in the room. Thirty minutes later, they are asked to do this again for each room. The result is a total recall score, comprising both immediate and long-term recall.
Düzel and colleagues tried ORR on a subset of participants in the DZNE Longitudinal Cognitive Impairment and Dementia Study, aka DELCODE. This observational project tracks cognition in 200 healthy controls, 100 first-degree relatives of AD patients, 400 people with subjective memory problems, 200 with mild cognitive impairment, and 200 with AD dementia. DELCODE participants undergo an extensive battery of cognitive tests in the clinic, making them well suited for validating the mobile test.
Düzel reported data from the first 58 DELCODE participants to try out the ORR test, of whom 44 had taken it twice, spaced two weeks apart. Scores at the two time points were highly correlated across participants, suggesting this unsupervised smartphone assessment was consistent and reliable. Importantly, Düzel said, a person’s total recall score on the smartphone ORR was highly correlated with their scores on the in-clinic FCSRT and PACC. Düzel hopes to enlist 200 DELCODE participants in the smartphone substudy, and measure total recall biweekly for 48 weeks.
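The test-retest analysis described above boils down to correlating each participant’s scores from the two sessions. The sketch below is purely illustrative — it is not neotiv’s actual pipeline, and the scores are invented — but it shows the basic calculation, a Pearson correlation between session-one and session-two totals:

```python
# Illustrative sketch of a test-retest reliability check (assumed, not
# neotiv's actual analysis): Pearson correlation between two unsupervised
# sessions of the same mobile test. All scores below are made up.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical total-recall scores for five participants,
# taken two weeks apart (session 1 vs. session 2).
session1 = [18, 22, 15, 24, 20]
session2 = [17, 23, 14, 25, 19]
print(round(pearson_r(session1, session2), 2))
```

A coefficient near 1 would indicate the unsupervised test ranks participants consistently from one sitting to the next, which is the reliability property Düzel reported.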
Another neotiv co-founder, David Berron of Lund University in Sweden, described at CTAD how a smartphone test matched up with in-clinic tests as well as AD biomarkers. Berron had previously designed computerized cognitive tests that target specific brain regions known to be affected by aging and AD pathology. In an object-discrimination task, participants are presented with an object, for example, a sofa. Then, they are shown more sofas, and asked whether each is the same as, or different from, the original. Previously, Berron had reported that this task employs circuitry in the anterior medial temporal lobe; it falters with age and even more so when tau accumulates in the region (Berron et al., 2018; Maass et al., 2019).
Berron and colleagues designed a smartphone-based version of the object-discrimination task, and tried it on 59 participants from BIOFINDER, a Swedish longitudinal study tracking changes in CSF and imaging biomarkers and cognition. Fifty-one were cognitively normal, including 39 without and 12 with biomarker-confirmed amyloid accumulation. The remaining eight had brain amyloid and mild cognitive impairment.
After installing the app on their phones in the clinic, the participants took the tests at home once a month. At CTAD, Berron reported data from the first two unsupervised sessions. First, Berron saw a strong correlation between a person’s performance on the in-clinic, supervised version of the test and their scores on the unsupervised, smartphone version. Second, performance on the smartphone test correlated with performance on the delayed-word-recall portion of the ADAS-Cog.
Intriguingly, Berron reported that while no one scored more than 80 percent correct on the mobile test, several reached maximum scores on the ADAS-Cog delayed word recall. This “ceiling effect” suggests that the ADAS-Cog was too easy for this mostly cognitively normal cohort. The mobile test had neither ceiling nor floor effects, suggesting it was well-suited to test cognition in this cohort.
Berron previously found that transentorhinal cortex activity is essential for this object-discrimination task and, at CTAD, he reported that scores on the test’s mobile version correlated with tau accumulation, as gauged by tau-PET, in the transentorhinal region. The thickness of this region also correlated with better performance on the task. Finally, Berron found that higher CSF p-tau217 concentration came with lower scores on the smartphone test.
Curiously, these associations were absent for another smartphone test that asks participants to detect changes in scenes, rather than objects. Berron’s previous work suggested that scene discrimination involves more posterior regions of the medial temporal lobe, where tau accumulates later. Berron hypothesized that this task might pick up deficits at later stages of AD.
Smartphone Sees Your Learning Curve
While memory loss is the hallmark symptom of Alzheimer’s, new studies increasingly point to problems with learning as emerging even earlier during the long preclinical phase of the disease. Differences in learning have long muddled cognitive outcomes in clinical studies, because some participants benefit more than others from practicing the same tests at each sitting. Some studies have indicated that this practice effect is smaller in people with brain amyloid than in their amyloid-negative peers (Baker et al., 2019; Hassenstab et al., 2015). This prompted the idea that, rather than being a thorn in trialists’ sides, perhaps a practice effect could serve as a canary in the coal mine. In other words, is the loss of this effect in fact a sensitive cognitive discriminator of people with preclinical AD?
At CTAD, Kate Papp of Brigham and Women’s Hospital, Boston, described the development of a smartphone app that can rapidly detect these learning deficits through repeated assessments.
Previously, Papp and colleagues noticed weaker practice effects among amyloid-positive than -negative participants in the Harvard Aging Brain Study (HABS). For an in-clinic memory test, this difference played out over years on annual tests, but more recently, she noticed diverging learning curves between amyloid-positive and -negative participants within months, on memory tests taken monthly at home. The researchers gave 94 cognitively normal HABS participants iPads, and tracked their performance on a monthly Cogstate face-name-matching test over a year. Participants started off with similar scores regardless of amyloid status, but those with low amyloid improved more with practice than those with high amyloid. Their learning curves started to diverge by the second test session.
Could even more frequent tests—on a smartphone—tease out learning differences within days, instead of months? To address this, Papp and colleagues developed the Boston Remote Assessment for Neurocognitive Health (BRANCH). The assessment can be taken on any device with internet access. It consists of a battery of cognitive tasks aimed at picking up deficits in associative memory and pattern separation. The tests are relevant to daily life, and include tasks involving groceries, traffic signs, and face-name matching. The scientists developed BRANCH over the course of a year, with input from HABS volunteers.
At CTAD, Papp showed first data validating BRANCH in a subset of 168 cognitively normal HABS participants who had undergone extensive cognitive testing in the clinic. They ranged from 50 to 90 years old; 78 had amyloid and tau PET scans. Their scores on a single session of BRANCH correlated well with scores on the PACC5, the paper-and-pencil battery designed for preclinical AD. Notably, BRANCH performance also tracked with levels of both amyloid and tau. The data cast BRANCH as a measure of cognition that could be sensitive to preclinical AD, Papp believes.
Papp also showed early work aimed at using BRANCH to measure learning curves. A subset of 32 HABS participants completed BRANCH on their phones for five consecutive days. The slope of their learning curves over the five-day stint correlated with their previous performance on PACC5, albeit not to a statistically significant degree in this small sample. Papp called this result initial validation of using BRANCH to detect learning curves. She is currently testing whether these curves are consistent on test-retest, developing more versions of the tests so they can be repeated at different times, and linking learning curves to AD biomarkers.
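A learning curve like the one described above is often summarized as a single slope over the repeated sessions. The sketch below is a hypothetical illustration — not Papp’s actual analysis, and the scores are invented — of estimating that slope by ordinary least squares over five daily sessions:

```python
# Illustrative sketch (assumed, not the BRANCH pipeline): summarize a
# participant's learning curve as the least-squares slope of their scores
# across repeated daily sessions. All scores below are invented.
def learning_slope(scores):
    """Least-squares slope of score vs. session number (0-indexed)."""
    n = len(scores)
    days = range(n)
    mx = sum(days) / n
    my = sum(scores) / n
    num = sum((d - mx) * (s - my) for d, s in zip(days, scores))
    den = sum((d - mx) ** 2 for d in days)
    return num / den

# A participant who benefits strongly from practice vs. one who barely does.
steep = learning_slope([60, 68, 74, 79, 83])
flat = learning_slope([60, 61, 62, 62, 63])
print(round(steep, 2), round(flat, 2))
```

In this framing, a flatter slope corresponds to a weaker practice effect, the signal that the studies above associate with amyloid positivity.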
A BRANCH a Day. Learning curves took shape as participants logged BRANCH sessions for five consecutive days. [Courtesy of Kate Papp, Brigham and Women’s Hospital.]
Besides tapping HABS, Papp and colleagues are evaluating BRANCH in remote cohorts from online registries. Papp believes tests like it might help in prescreening for secondary prevention trials, which are increasingly relying on such registries to recruit (Nov 16 conference news).
Papp’s BWH colleague, Reisa Sperling, told Alzforum that she aims to try BRANCH in a subset of participants in the AHEAD 3-45 study, which is testing the BAN2401 anti-Aβ antibody in amyloid-positive people who are cognitively normal (Nov 2020 conference news). Sperling believes that learning curves could prove far more sensitive than other cognitive measures in preclinical AD, and may even be useful in screening for primary prevention trials.
BRANCH reflects a growing appreciation in the field for the promise of using learning curves in preclinical AD, especially with smartphones. At CTAD, Jason Hassenstab of Washington University, St. Louis, showed a smidgen of early data on a newly developed mobile phone version of his group’s Online Repeated Cognitive Assessment. Originally designed for home computers, ORCA measures people’s progress in learning Chinese characters over six days. A recent study had shown that people without plaques learned the characters faster than those with them (Sep 2020 news). At CTAD, Hassenstab reported that the smartphone version of this test—called Mobile ORCA—can pick up learning curves within two days, with a total testing time of just 24 minutes.
Hassenstab also presented findings on practice effects from the DIAN-TU study, which missed its primary endpoint (Apr 2020 conference news). In a nutshell, asymptomatic carriers of autosomal-dominant AD mutations showed substantial practice effects on several cognitive tests, compared to mutation carriers who were already symptomatic at the beginning of the trial, or who became symptomatic during the trial; the latter two groups did not benefit from practice. Noncarriers had the strongest practice effects.
Hassenstab said that the DIAN investigators were surprised by the extent of these practice effects in their first treatment trial. He believes that tests such as Mobile ORCA could potentially turn these learning differences into an asset—as sensitive measures in future clinical trials.—Jessica Shugart
Berron D, Neumann K, Maass A, Schütze H, Fliessbach K, Kiven V, Jessen F, Sauvage M, Kumaran D, Düzel E.
Age-related functional changes in domain-specific medial temporal lobe pathways.
Neurobiol Aging. 2018 May;65:86-97. Epub 2018 Jan 31
Maass A, Berron D, Harrison TM, Adams JN, La Joie R, Baker S, Mellinger T, Bell RK, Swinnerton K, Inglis B, Rabinovici GD, Düzel E, Jagust WJ.
Alzheimer’s pathology targets distinct memory networks in the ageing brain.
Brain. 2019 Aug 1;142(8):2492-2509.
Baker JE, Pietrzak RH, Laws SM, Ames D, Villemagne VL, Rowe CC, Masters CL, Maruff P, Lim YY.
Visual paired associate learning deficits associated with elevated beta-amyloid in cognitively normal older adults.
Neuropsychology. 2019 Oct;33(7):964-974. Epub 2019 Aug 1
Hassenstab J, Ruvolo D, Jasielec M, Xiong C, Grant E, Morris JC.
Absence of practice effects in preclinical Alzheimer’s disease.
Neuropsychology. 2015 Nov;29(6):940-8. Epub 2015 May 25