Q-interactive™

Grounded in Solid Research

All of Q-interactive’s tests and subtests are backed by rigorous studies.

Research plays an integral role in all aspects of clinicians’ work. Q-interactive™ is no different; it has a solid foundation of research supporting its use. Prior to inclusion in the Q-interactive assessment library, each new type of subtest undergoes an equivalency study to evaluate whether scores generated via testing with Q-interactive are interchangeable with those generated via testing with our standard paper-and-pencil versions. Currently, raw scores obtained using Q-interactive are interpreted using paper-and-pencil norms, and the equivalency studies provide support for the validity of this practice. You can access findings from each of the equivalency studies below:

Digital assessment platforms not only allow additional accommodations for examinees; they also help accommodate the needs of test examiners with disabilities, an issue that is rarely discussed in the scholarly literature on psychological testing.

Increased accessibility options available through Q-interactive and other platforms translate to more opportunities for assessments to be completed by a wider range of examiners, including those with motor, learning, visual, and auditory disabilities.

This paper seeks to describe some of the barriers faced by examiners with disabilities and to explain how digital assessment—and specifically Q-interactive—notably increases the accessibility of psychological testing for examiners with disabilities.

» Read Report

WAIS®–IV

This study evaluated the equivalence of scores from Q-interactive and paper-and-pencil versions of the Wechsler Adult Intelligence Scale, Fourth Edition (WAIS–IV; Wechsler, 2008). Overall, it demonstrated that scores generated from Q-interactive are interchangeable with those obtained from the paper-and-pencil format.

» Read Report

WISC®–IV

This study evaluated the equivalence of scores from Q-interactive and paper-and-pencil versions of the Wechsler Intelligence Scale for Children, Fourth Edition (WISC–IV; Wechsler, 2003). Overall, it demonstrated that scores generated from Q-interactive are interchangeable with those obtained from the paper-and-pencil format.

» Read Report

WISC®–V

WISC-V Digital Coding and Symbol Search Subtests

This technical report presents the reliability, validity, special group studies, and interpretation evidence that supports the use of the new fully digitized versions of the Coding and Symbol Search subtests.

» Read Report

Equivalence of Q-interactive and Paper Administrations of Cognitive Tasks: WISC–V

This is the eighth Q-interactive equivalence study. It evaluated the equivalence of scores from digitally assisted and standard administrations of the Wechsler Intelligence Scale for Children, Fifth Edition (WISC®–V; Wechsler, 2014).

» Read Report

CVLT®-II
D-KEFS™

This study evaluated the equivalence of scores from Q-interactive and paper-and-pencil versions of the CVLT–II (2000) and four D-KEFS (2001) subtests. For the CVLT–II and the D-KEFS subtests, the recording and scoring processes are the only plausible sources of format effects, because the examinee does not interact with a tablet and the examiner does not intervene while the examinee responds.

» Read Report

NEPSY®-II
CMS

This study evaluated the equivalence of scores from digitally assisted and standard administrations of three NEPSY®–II (2007) subtests (Memory for Designs—Immediate, Picture Puzzles, and Inhibition) and two Children’s Memory Scale (CMS; 1997) subtests (Picture Locations and Dot Locations). No format effects reached the threshold for non-equivalence, so it was concluded that scores from digital and paper administrations are interchangeable.

» Read Report

KTEA™-3

The Q-interactive research team determined that the Q-interactive and paper versions of the KTEA-3 subtests do not require equivalence studies. A task analysis of the KTEA-3, together with Q-interactive study results for other tests, indicates that the examinee interfaces do not produce differences in performance.

» Read Report

GFTA™-2
PPVT™-4

In adapting existing tests to the Q-interactive format, the development team carefully evaluates whether an empirical study is needed to support the equivalency of the digital and paper formats of each new instrument. The decision is based on two factors: whether there is a plausible reason why the two formats might yield different results and, if there is, whether the administration or scoring features of concern have previously been studied and found not to jeopardize equivalency.

Ready to give Q-interactive a try?