INS NYC 2024 Program

Poster

Poster Session 04 Program Schedule

02/15/2024
12:00 pm - 01:15 pm
Room: Majestic Complex (Posters 61-120)

Poster Session 04: Neuroimaging | Neurostimulation/Neuromodulation | Teleneuropsychology/Technology


Final Abstract #108

Development and reliability of the Arrows test: An online neuropsychological assessment for attention, information processing speed, and executive function.

Michael Lopez, University of California Irvine, Irvine, United States
John Fulton, University of Utah, Salt Lake City, United States
Elizabeth Stuart, Insight Collective, Pasadena, United States
Sahra Kim, VA Boston Healthcare, Boston, United States
Patrick Chen, University of California Irvine, Irvine, United States
Rohan Roy, Self, Brentwood, United States
Aaron Thomas, University of California Irvine, Irvine, United States

Category: Teleneuropsychology/Technology

Keyword 1: working memory
Keyword 2: executive functions
Keyword 3: information processing speed

Objective:

Remote healthcare, including neuropsychological assessment, can help bridge gaps in access and bring needed care to underserved populations. To this end, we have created the Arrows test, an online, computer-administered neuropsychological assessment designed to assess simple attention, processing speed, and executive functioning easily and accessibly across settings and populations. The Arrows test has seven subtests, administered in the following order: three response time trials (arrows [T1], words [T2], arrows and words [T3]), three inhibition trials (arrows [T4], words [T5], arrows and words [T6]), and one inhibition/switching trial (arrows and words [T7]). This initial feasibility study of the Arrows test as a secure, online, remote assessment option seeks to establish baseline reliability data for the test in a non-clinical adult population and to provide preliminary evidence supporting the concept underlying the technique.
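
For illustration only, the subtest sequence described above can be summarized as a simple configuration, sketched here in Python; the identifiers and field names are hypothetical and do not reflect the authors' implementation.

# Hypothetical summary of the Arrows test structure described in the Objective.
# Names and fields are illustrative only.
ARROWS_SUBTESTS = [
    {"id": "T1", "stimulus": "arrows",           "condition": "response time"},
    {"id": "T2", "stimulus": "words",            "condition": "response time"},
    {"id": "T3", "stimulus": "arrows and words", "condition": "response time"},
    {"id": "T4", "stimulus": "arrows",           "condition": "inhibition"},
    {"id": "T5", "stimulus": "words",            "condition": "inhibition"},
    {"id": "T6", "stimulus": "arrows and words", "condition": "inhibition"},
    {"id": "T7", "stimulus": "arrows and words", "condition": "inhibition/switching"},
]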

Participants and Methods:

Participants for this non-experimental, correlational study were recruited from Mechanical Turk to complete the Arrows test and to fill out a demographic questionnaire covering age, ethnicity, education, sexual orientation, region, and employment history. Participants were provided a link to the tests on a custom-built site that included an informed consent form and detailed instructions for taking each test. The 60-second timed trial began after the participant completed an example trial and pressed the spacebar to start. Participants were shown either a directional arrow or a corresponding word (up, down, left, right) and were instructed to press the matching arrow key. Each subtest was individually configured using a stochastic design to reduce the potential for habituation and to maximize the need for consistent executive control. Reliability of the test was assessed using Cronbach's alpha. We computed correlations to determine the relationships among the subtests and conducted multiple regression analyses to determine whether age, education, and race/ethnicity predicted performance on each subtest.
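
As a minimal sketch of the reliability analysis, assuming per-participant subtest scores in a pandas DataFrame with hypothetical columns T1-T7 (simulated data stands in for the real sample), Cronbach's alpha and the subtest intercorrelations could be computed as below; this is illustrative Python, not the authors' analysis code, and the regression analyses are omitted.

# Illustrative sketch only; column names, helper, and simulated data are hypothetical.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated scores standing in for the seven subtests (one row per participant).
rng = np.random.default_rng(0)
base = rng.normal(size=1317)
scores = pd.DataFrame({f"T{i}": base + rng.normal(scale=0.5, size=1317) for i in range(1, 8)})

print(cronbach_alpha(scores))          # internal consistency
print(scores.corr(method="spearman"))  # subtest intercorrelations (Spearman's rho)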

Results:

Of the 2,000 participants recruited, 683 were excluded due to incomplete or obviously invalid data (e.g., 10,000+ keystrokes per item). The final sample (n=1,317) ranged in age from 18 to 78 (M=39, SD=12), with more self-identified males (55.6%) than females (43.7%). The internal consistency of the Arrows test was excellent (α=0.960). Correlations between each pair of subtests were positive, strong (ρ>0.7), and significant (p<.001). Education significantly predicted performance on all tasks (T1: β=0.158, p<.001; T2: β=0.156, p<.001; T3: β=0.167, p<.001; T4: β=0.128, p<.001; T5: β=0.130, p<.001; T6: β=0.129, p<.001; T7: β=0.128, p<.001), as did race/ethnicity (T1: β=0.69, p<.05; T2: β=0.103, p<.001; T3: β=0.077, p<.01; T4: β=0.076, p<.01; T6: β=0.068, p<.05; T7: β=0.085, p<.01). Age significantly predicted performance for T1 (β=-0.103, p<.01) and T4 (β=-0.072, p<.05).

Conclusions:

The results demonstrate promising psychometric potential for the newly developed Arrows test. The test shows excellent internal consistency and strong, positive correlations among subtests, suggesting that it holds promise as a useful tool for remote assessment of simple attention, processing speed, and executive functioning. Further research in a clinical setting is currently underway, including assessment of individuals with and without known cognitive impairment and correlation with other well-established neuropsychological tests.