Cognitive function is a key determinant of health, economic, educational, and social outcomes across the life course. Incorporating measures of cognition into large-scale surveys enables researchers to track population trends, detect early signs of decline, and evaluate interventions. However, administering cognitive assessments in surveys is challenging. Many instruments, especially those adapted from clinical test batteries, are designed for in-person administration. As surveys increasingly transition to web-based data collection, this reliance on in-person formats creates significant difficulties, and there is strong evidence that cognitive assessments are particularly vulnerable to mode effects.
This workshop will present findings from an evidence review conducted for Research Strand 5 of Survey Futures. The review drew on evidence from high-quality probability-based longitudinal and cross-sectional surveys across multiple countries that have implemented online cognitive assessments, as well as experimental studies designed to test online cognition measures. We will also consider non-traditional and indirect approaches to assessing cognitive ability, including mobile apps, digital tracking tools, and survey paradata. Building on this evidence, we will summarise the types of batteries and measures employed, identify key sources of mode effects, and present a survey practice guide based on the review's findings. Aimed at both survey researchers and practitioners, the workshop will offer practical recommendations for administering cognitive tests in online survey settings, enhancing data quality and navigating the challenges of mixed-mode design.