Symposia 3 Program Schedule
02/15/2024
09:00 am - 10:30 am
Room: West Side Ballroom - Salon 4
Symposia 3: Current Trends and Future Frontiers in Neuropsychology and Digital Technologies
Symposium #1
Symptom/performance validity during videoconference neuropsychological testing: Existing evidence and remaining questions
Timothy Brearly, Penn State College of Medicine, Hershey, United States; Paul Ingram, Texas Tech University, Lubbock, United States; Ali Sapp, Texas Tech University, Lubbock, United States; Robert Shura, VA Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, United States
Category: Teleneuropsychology/Technology
Keyword 1: teleneuropsychology
Keyword 2: symptom validity
Keyword 3: performance validity
Objective:
Videoconference administration of neuropsychological tests (normed for onsite, in-person use in highly controlled environments) introduces novel confounds to test interpretation. This may be especially true for performance and symptom validity tests, where previous work has identified a relationship between unsupervised in-person testing and increased rates of validity test failure. For example, rates of non-content-based responding on symptom validity tests might be higher when distractions are present in remote settings. Remote testing could also produce different observer-related demand characteristics, perhaps changing the frequency of over- or under-reporting. Performance validity tests, which may be shorter and more reliant on cut scores than other tasks, may be particularly vulnerable to videoconference-specific interference (e.g., blurry stimuli), resulting in increased “false positives.”
Few studies have evaluated the possible effects of videoconference administration on validity tests. There is a great need for additional work, as validity evaluation is essential to evidence-based neuropsychological testing, and teleneuropsychological services have been associated with high patient satisfaction and increased accessibility of care. Further, the collection of valid neuropsychological data from a distance has many potential benefits (e.g., more representative sampling, lower attrition, establishing telehealth-specific norms).
This talk will review the limited available studies specific to videoconference evaluation of validity (seven at the time of abstract submission). In sum, case-control and crossover studies have not identified differences in symptom or performance validity scores gathered during remote administration. An emphasis will be placed on our recent work evaluating pertinent scales on the most current editions of the Minnesota Multiphasic Personality Inventory (MMPI-2-Restructured Form or MMPI-3) and performance on the Dot Counting Test (DCT) in veterans.
Participants and Methods:
Participants were veterans (N = 498) evaluated at a Veterans Affairs hospital in one of two IRB-approved studies. Symptom validity was investigated using a retrospective case control design comparing MMPI validity scale scores acquired during videoconference administration and typical in-person testing within the context of a subspecialty ADHD clinic. Performance validity was evaluated using a prospective, counterbalanced crossover design comparing videoconference administered and in-person administered DCT performance.
Results:
Appropriate parametric and non-parametric tests did not identify significant differences between in-person and videoconference-acquired symptom and performance validity scores. The effect size for the DCT was weak (r = 0.18), with the raw videoconference mean falling above the in-person mean. Effect sizes for the MMPI validity scales varied (g = 0.01 to 0.88), with 11 of the 18 compared mean scores being qualitatively higher during videoconference testing.
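The g values above are standardized mean differences (Hedges' g). As an illustration only, a minimal sketch of how such a value is computed from two groups of scores, using made-up example numbers rather than the study's data (which are not reproduced in this abstract):

```python
import math

def hedges_g(a, b):
    """Hedges' g: standardized mean difference between two independent
    groups, with the usual small-sample bias correction applied to
    Cohen's d. Illustrative only; not the study's actual analysis code."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    # Unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    d = (mean_a - mean_b) / pooled_sd
    # Hedges' correction factor, approximately 1 - 3 / (4N - 9)
    correction = 1 - 3 / (4 * (na + nb) - 9)
    return d * correction

# Hypothetical validity-scale T-scores for two administration formats
videoconference = [52, 58, 61, 55, 60, 57]
in_person = [50, 54, 56, 53, 55, 52]
print(round(hedges_g(videoconference, in_person), 2))
```

By convention, g values near 0.2 are considered small and those near 0.8 large, which is why the abstract characterizes the observed range (0.01 to 0.88) as varied.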
Conclusions:
The limited evidence to date has not identified significant differences in validity scores when neuropsychological tests are administered by videoconference, providing some evidence in support of their use. Observed trends in our data suggest the possibility of subtly increased failure rates in videoconference evaluations, indicating a need for further research. There is also a need for studies evaluating the extent to which any identified effects or variability are explained by interference specific to videoconference communication of stimuli/responses, versus other potential confounds specific to remote testing (e.g., increased vulnerability to distraction/interruption, self-selection of invalid performers into remote testing clinics, variability in stimulus presentation/videoconference technology).