J Am Acad Audiol 2019; 30(07): 649-650
DOI: 10.3766/jaaa.18018
Letter to the Editor

The Mystery of Unexplained Variance—Some Comments on Brenneman et al (2017)

Dennis J. McFarland
1   National Center for Adaptive Neurotechnologies, Wadsworth Center, New York Department of Health, Albany, NY

Publication History: 16 March 2018; 28 March 2018
Publication Date: 25 May 2020 (online)

[Brenneman et al (2017)] examined the relationship between scores on several measures of central auditory processing disorder (CAPD) and tests of language and cognitive abilities. An investigation of these relationships can potentially provide important information that would clarify what these tests measure. The authors conclude that “the majority of variance in these CAPD measures was not accounted for by these particular measures of language and cognition.” They base this conclusion on observed correlations between CAPD test scores and language and cognitive test performance, which by their account leave ∼80% of the variance in the CAPD tests unaccounted for. But what are we to conclude about this ∼80%? The implication is that CAPD tests measure a trait that is relatively independent of what is measured by standard tests of language and cognition.

There are at least two problems with this conclusion. First, the claim that language and cognitive tests do not account for ∼80% of CAPD test variance is misleading. For example, [Brenneman et al (2017)] report a correlation between the left-ear dichotic digits (DD) test and Wechsler Intelligence Scale for Children (WISC) full-scale IQ of 0.450 (their Table 3). Squaring this value gives an r² of 0.2025, so in this sense, IQ accounts for only ∼20% of the variance in this test. However, as noted by [Ozer (1985)], the correlation coefficient, rather than r², is the appropriate measure of the common variability shared by two tests. Trait models suggest that some latent variable underlies the shared variability of scores on both tests. Estimating common variability is a different problem than predicting one test score from another ([Beatty, 2002]).
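[Ozer (1985)]'s point can be made concrete with a small simulation (an illustrative sketch using the reported r = 0.45, not an analysis from Brenneman et al): if two tests each load √0.45 on a common latent trait, their observed correlation is ∼0.45, and the proportion of each test's variance carried by the trait is 0.45 itself, not 0.45² ≈ 0.20.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

r = 0.45                 # reported left-ear DD vs WISC IQ correlation
lam = np.sqrt(r)         # loading of each test on the common trait

trait = rng.standard_normal(n)
# Each test = trait component + independent error, scaled to unit variance
x = lam * trait + np.sqrt(1 - lam**2) * rng.standard_normal(n)
y = lam * trait + np.sqrt(1 - lam**2) * rng.standard_normal(n)

obs_r = np.corrcoef(x, y)[0, 1]
print(f"observed r         : {obs_r:.3f}")   # ≈ 0.45
print(f"r**2 (prediction)  : {obs_r**2:.3f}")  # ≈ 0.20
# Under the common-trait model, the share of each test's variance
# attributable to the trait is lam**2 = r, not r**2.
print(f"trait variance     : {lam**2:.3f}")  # 0.450
```

Under this model, r² answers the prediction question (how well one observed score predicts the other), while r answers the trait question (how much variance each test shares with the latent variable).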

Another issue concerns how one interprets this unexplained variability in CAPD test performance. It could be due to any one of, or all of, several factors. One factor is random “noise” that is not consistent between test occasions and could be estimated from test-retest correlations. [Musiek et al (1991)] report a test-retest reliability of 0.77 for the DD test, based on results from four adults (eight ears) with brain lesions. Although such an estimate is highly uncertain given the small sample size and comes from a different population of individuals, it can serve to illustrate the role of test reliability. By this estimate, the reported correlation of left-ear DD scores with WISC IQ (0.45) represents a substantial proportion of the variability in DD scores that is consistent across two separate test administrations. Thus, when only reliable variability is considered, IQ may account for a large portion of the variance in left-ear DD test scores. Although estimates of test-retest reliability from the [Musiek et al (1991)] study are tenuous, [Brenneman et al (2017)] did not include this important information in their study.
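The reliability argument can be made quantitative with Spearman's classical correction for attenuation. The sketch below combines the 0.77 DD reliability from [Musiek et al (1991)] with an assumed WISC reliability of 0.95 (a hypothetical value, not given in the letter):

```python
from math import sqrt

r_obs = 0.45    # left-ear DD vs WISC full-scale IQ (Brenneman et al, Table 3)
rel_dd = 0.77   # DD test-retest reliability (Musiek et al, 1991; small sample)
rel_iq = 0.95   # assumed WISC reliability -- hypothetical, not from the letter

# No variable can correlate with DD scores beyond the square root
# of the DD test's own reliability.
ceiling = sqrt(rel_dd)

# Spearman's correction for attenuation: the correlation the two
# underlying traits would show if both tests were perfectly reliable.
r_disattenuated = r_obs / sqrt(rel_dd * rel_iq)

print(f"ceiling on r      : {ceiling:.3f}")          # ≈ 0.877
print(f"disattenuated r   : {r_disattenuated:.3f}")  # ≈ 0.526
```

On these assumed values, the observed 0.45 is roughly half of the maximum correlation the DD test's reliability permits, which is the sense in which IQ may account for a large share of the reliable variance.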

The unexplained variance in left-ear DD test scores might also be due to factors that are not modality specific. For example, [Lawfield et al (2011)] found that scores on a DD reproduction task correlated with scores on an analogous visual dichoptic digits task (r = 0.67). Although these results were obtained with a slightly different test and a different population, they suggest that a sizable portion of DD test score variability may not reflect a modality-specific trait. Although the WISC may be viewed as a “gold standard,” it may be incomplete and may not sample the entire breadth of cognitive abilities ([Flanagan et al, 2000], p. 23). A more accurate way to assess nonauditory factors is to compare tests in two or more modalities ([Cacace and McFarland, 2005]).

The issue that [Brenneman et al (2017)] seem to be addressing is whether CAPD tests provide incremental validity beyond that provided by language and cognitive tests such as the Clinical Evaluation of Language Fundamentals (CELF) and WISC. Incremental validity refers to the extent to which a test predicts some criterion over and above alternative tests ([Sechrest, 1963]). This cannot be established simply by documenting a null effect (i.e., documenting unexplained variance). Rather, incremental validity is established by simultaneously measuring the tests and criterion of interest and modeling the results with statistical techniques such as multiple regression. A problem with this approach in the present case is that [Brenneman et al (2017)] have not articulated the sort of outcome variables they seek to predict.
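Incremental validity of this kind is typically quantified as the gain in R² (ΔR²) when the CAPD test is added to a regression that already contains the language and cognitive tests. The sketch below illustrates the procedure on simulated scores; all variable names and effect sizes are hypothetical and serve only to show the calculation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Simulated standardized scores (purely illustrative)
wisc = rng.standard_normal(n)
celf = 0.6 * wisc + 0.8 * rng.standard_normal(n)   # overlaps with IQ
capd = 0.4 * wisc + rng.standard_normal(n)          # partly independent of IQ
outcome = 0.5 * wisc + 0.3 * capd + rng.standard_normal(n)

def r_squared(y, *predictors):
    """R^2 from an OLS fit of y on the given predictors plus an intercept."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(outcome, wisc, celf)            # language/cognition only
r2_full = r_squared(outcome, wisc, celf, capd)      # plus the CAPD test
print(f"R^2, WISC + CELF            : {r2_base:.3f}")
print(f"R^2, WISC + CELF + CAPD     : {r2_full:.3f}")
print(f"incremental validity (dR^2) : {r2_full - r2_base:.3f}")
```

The point of the exercise is that ΔR² is defined only with respect to a criterion variable (here the simulated `outcome`), which is exactly what is missing from an analysis that documents unexplained variance alone.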

Studies such as that of [Brenneman et al (2017)] are important for identifying what is and is not assessed by tests of CAPD. However, such investigations would be improved by the inclusion of more complete information, such as test-retest correlations, multimodal testing, and relevant outcome variables. Otherwise, the nature of the unexplained variance remains a mystery.

 
  • REFERENCES

  • Beatty MJ. 2002; Do we know a vector from a scalar? Why measures of association (not their squares) are appropriate indices of effect. Hum Commun Res 28 (04) 605-611
  • Brenneman L, Cash E, Chermak GD, Guenette L, Masters G, Musiek FE, Brown M, Ceruti J, Fitzegerald K, Geissler K, Gonzalez J, Weihing J. 2017; The relationship between central auditory processing, language, and cognition in children being evaluated for auditory processing disorder. J Am Acad Audiol 28 (08) 758-769
  • Cacace AT, McFarland DJ. 2005; The importance of modality specificity in diagnosing central auditory processing disorder. Am J Audiol 14: 112-123
  • Flanagan DP, McGrew KS, Ortiz SO. 2000. Intelligence Scales and Gf-Gc Theory: A Contemporary Approach to Interpretation. Boston, MA: Allyn & Bacon;
  • Lawfield A, McFarland DJ, Cacace AT. 2011; Dichotic and dichoptic digit perception in normal adults. J Am Acad Audiol 22 (06) 332-341
  • Musiek FE, Gollegly KM, Kibbe KS, Verkest-Lenz SB. 1991; Proposed screening test for central auditory disorders: follow-up on the dichotic digits test. Am J Otol 12 (02) 109-113
  • Ozer DJ. 1985; Correlation and the coefficient of determination. Psychol Bull 97 (02) 307-315
  • Sechrest L. 1963; Incremental validity: a recommendation. Educ Psychol Meas 23 (01) 153-158