Poster | Poster Session 04 Program Schedule
02/15/2024
12:00 pm - 01:15 pm
Room: Shubert Complex (Posters 1-60)
Poster Session 04: Neuroimaging | Neurostimulation/Neuromodulation | Teleneuropsychology/Technology
Final Abstract #32
Is briefer better? Examining the construct validity and test-retest reliability of one-week ecological momentary assessment protocols for the assessment of executive functioning
Libby DesRuisseaux, University of Utah, Salt Lake City, United States; Namita Mahtta, University of Utah, Salt Lake City, United States; Lucy Atwood, University of Utah, Salt Lake City, United States; Elizabeth Curtis, University of Utah, Salt Lake City, United States; Yana Suchy, University of Utah, Salt Lake City, United States
Category: Assessment/Psychometrics/Methods (Adult)
Keyword 1: executive functions
Keyword 2: ecological validity
Keyword 3: computerized neuropsychological testing
Objective:
Ecological Momentary Assessment (EMA) of cognitive functioning has gained popularity, as it allows for assessment of abilities in the context of an individual’s daily life. EMA may be particularly beneficial for executive functioning (EF), which is more difficult to assess in highly structured clinical environments and is often impacted by various contextual factors, such as sleep disturbances and chronic pain (Berryman et al., 2014; Tinajero et al., 2018). Although useful, EMA can be burdensome to participants, as it requires daily measurements. However, decreasing the number of measurements poses a threat to the reliability and validity of an EMA protocol. Therefore, the goals of the present study were 1) to determine whether we could replicate, using a one-week protocol instead, prior findings (Brothers & Suchy, 2022) in which we demonstrated correlations between a standard clinical measure of EF and an EMA EF measure assessed via a three-week protocol, and 2) to determine the test-retest reliability of these shortened testing periods in order to examine the stability of EF assessed via this method across weeks.
Participants and Methods:
A total of 93 community-dwelling older adults (ages 60-95, 70% female, 97% white, mean education = 16.4 years) completed four Delis-Kaplan Executive Function System subtests in the office, and performances on these subtests were combined into an EF composite score. Participants then completed three weeks of brief nightly EF tasks (Stroop and digit span backward). We computed mean EMA EF performance scores separately for each of the three weeks and examined their correlations with the office-based EF composite. We also examined whether EMA assessments spanning two-week periods incrementally improved validity over the one-week periods.
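For illustration, a minimal sketch (in Python, with hypothetical file and column names) of how the weekly EMA EF means and their correlations with the office-based composite could be computed; this is an assumed workflow, not the authors' actual analysis code:

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical long-format nightly data: one row per participant per night,
# with z-scored Stroop and digit span backward performance (column names assumed)
ema = pd.read_csv("ema_nightly.csv")      # columns: id, week, stroop_z, dsb_z
office = pd.read_csv("office_ef.csv")     # columns: id, ef_composite

# Nightly EMA EF score: mean of the two task z-scores
ema["ema_ef"] = ema[["stroop_z", "dsb_z"]].mean(axis=1)

# Mean EMA EF performance for each participant, computed separately by week
weekly = ema.groupby(["id", "week"])["ema_ef"].mean().unstack("week")
weekly.columns = [f"week_{int(w)}" for w in weekly.columns]
weekly = weekly.reset_index()

# Merge with the office-based composite and drop participants with missing data
merged = office.merge(weekly, on="id").dropna()

# Correlate each one-week EMA mean with the office-based EF composite
for col in ["week_1", "week_2", "week_3"]:
    r, p = pearsonr(merged[col], merged["ef_composite"])
    print(f"{col}: r = {r:.3f}, p = {p:.4f}")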
Results:
Correlations with office-based EF were strong across all three one-week periods (r = .534–.582, p < .001), which was comparable to the three-week protocol (r = .601, p < .001). The test-retest reliability of the one-week periods was acceptable between Weeks 1 and 2 (r = .711, p < .001) and Weeks 2 and 3 (r = .701, p < .001), but below threshold for Weeks 1 and 3 (r = .648, p < .001). In hierarchical linear regressions predicting office-based EF, all three one-week periods predicted a significant amount of variance (all p values < .001) when entered alone, but adding the remaining two weeks (beyond any one week) to the model significantly increased the amount of explained variance (accounting for an additional 5–11% of variance, p values ranging from < .001 to .023).
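A rough sketch of one step of the hierarchical regression described above, again with hypothetical column names and assuming the merged data frame from the previous sketch; the incremental variance test is approximated here with a nested-model F test in statsmodels:

import statsmodels.formula.api as smf

# Step 1: a single one-week EMA mean predicting office-based EF
step1 = smf.ols("ef_composite ~ week_1", data=merged).fit()
# Step 2: add the remaining two weeks to the model
step2 = smf.ols("ef_composite ~ week_1 + week_2 + week_3", data=merged).fit()

# Increment in explained variance when the other two weeks are added
r2_change = step2.rsquared - step1.rsquared

# Nested-model F test for the R-squared change
f_stat, p_value, df_diff = step2.compare_f_test(step1)
print(f"Delta R^2 = {r2_change:.3f}, "
      f"F({df_diff:.0f}, {step2.df_resid:.0f}) = {f_stat:.2f}, p = {p_value:.4f}")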
Conclusions:
These results demonstrate that while EF measured using one-week EMA periods still correlates strongly with office-based measures of EF and demonstrates largely acceptable test-retest reliability, longer periods of data collection allow for the prediction of additional variance in office-based EF. These results also support the notion that EF abilities fluctuate across time, likely due to the impact of various contextual factors. Future research should examine the optimal length of EMA protocols to determine the point at which adding weeks no longer appreciably improves our ability to assess EF.