Putting cognitive tasks on trial: A measure of reliability convergence

Jan Kadlec, Catherine Walsh, Uri Sadé, Ariel Amir, Jesse Rissman, Michal Ramot*

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

The surge in interest in individual differences has coincided with the latest replication crisis, centered around brain-wide association studies of brain-behavior correlations. Yet the reliability of the measures we use in cognitive neuroscience, a crucial component of this brain-behavior relationship, is often assumed but rarely tested directly. Here, we evaluate the reliability of different cognitive tasks on a large dataset of over 250 participants, who each completed a multi-day task battery. We show how reliability improves as a function of the number of trials, and describe the convergence of the reliability curves for the different tasks, allowing us to score tasks according to their suitability for studies of individual differences. To improve the accessibility of these findings, we designed a simple web-based tool that implements this function to calculate the convergence factor and predict the expected reliability for any given number of trials and participants, even based on limited pilot data.

Competing Interest Statement: The authors have declared no competing interest.
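The abstract describes predicting expected reliability from trial count, but does not spell out the convergence function itself. As an illustration only (not the authors' implementation, whose exact functional form is not given here), the Python sketch below estimates split-half reliability from pilot data and extrapolates it to a larger trial count using the standard Spearman-Brown prophecy formula; the function names and simulated data are hypothetical.

import numpy as np

def split_half_reliability(trials, n_splits=100, seed=0):
    """Mean Spearman-Brown-corrected split-half correlation.

    trials: (n_participants, n_trials) array of per-trial scores.
    """
    rng = np.random.default_rng(seed)
    n_trials = trials.shape[1]
    estimates = []
    for _ in range(n_splits):
        perm = rng.permutation(n_trials)
        half_a = trials[:, perm[: n_trials // 2]].mean(axis=1)
        half_b = trials[:, perm[n_trials // 2 :]].mean(axis=1)
        r = np.corrcoef(half_a, half_b)[0, 1]
        estimates.append(2 * r / (1 + r))  # correct for halving the test
    return float(np.mean(estimates))

def predicted_reliability(r_pilot, n_pilot, n_target):
    """Spearman-Brown prophecy: reliability expected at n_target trials."""
    k = n_target / n_pilot
    return k * r_pilot / (1 + (k - 1) * r_pilot)

# Example: simulate pilot data with stable individual differences plus
# trial noise, then predict reliability at four times as many trials.
rng = np.random.default_rng(1)
true_scores = rng.normal(size=(250, 1))                      # latent score per participant
pilot = true_scores + rng.normal(scale=2.0, size=(250, 40))  # 40 noisy trials each
r40 = split_half_reliability(pilot)
print(f"reliability at 40 trials: {r40:.2f}")
print(f"predicted at 160 trials:  {predicted_reliability(r40, 40, 160):.2f}")

In this standard framing, the "convergence" of a task's reliability curve is governed by its trial-level noise: the noisier each trial, the more trials are needed before the prophecy curve approaches its asymptote.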
Original language: English
Journal: BioRxiv
Publication status: In preparation - 3 Jul 2023

Bibliographical note

We thank Sasha Devore for insightful comments during the writing process. We would also like to thank all participants who took part in this study. This work was generously supported by ISF grant 829/22 and the Zuckerman STEM Leadership Program. M.R. is the incumbent of the Roel C. Buck Career Development Chair.

