For over 30 years, teachers have used miscue analysis as a tool to assess and evaluate the reading abilities of hearing students in elementary and middle schools and to design effective literacy programs. More recently, teachers of deaf and hard-of-hearing students have also reported its usefulness for diagnosing word- and phrase-level reading difficulties and for planning instruction. To our knowledge, miscue analysis has not been used with older, college-age deaf students who might also be having difficulty decoding and understanding text at the word level. The goal of this study was to determine whether such an analysis would be helpful in identifying the source of college students’ reading comprehension difficulties. After analyzing the miscues of 10 college-age readers and the results of other comprehension-related tasks, we concluded that comprehension of basic grade school-level passages depended on the ability to recognize and comprehend key words and phrases in these texts. We also concluded that these diagnostic procedures provided useful information about the reading abilities and strategies of each reader that had implications for designing more effective interventions.
In the United States, the reading levels of deaf and hard-of-hearing (D/HH) college-age students have remained remarkably stable over the past 30 years. Qi and Mitchell's (2007) review of normative studies conducted on five editions of the Stanford Achievement Test Series shows that the reading comprehension levels of 17-year-olds have increased but never exceeded the fourth-grade level. That is, reading comprehension subtest scores for deaf students increased from 1974 to 2003, but at age 17, their median performance never exceeded the fourth-grade equivalent. Allen (1994) estimated that only college-bound high school graduates read at the eighth-grade level or higher. Because even an eighth-grade reading level is insufficient to handle the reading demands of college curricula, the challenge facing instructors in college preparatory courses is similar to the one high school teachers of deaf students face: Why do many deaf students not comprehend text as well as their hearing peers?
We know that “accuracy and automaticity of word recognition are separate but related aspects of learning to read” (Stahl, Kuhn, & Pickle, 2006, p. 380) and that good readers (deaf and hearing) are able to recognize words and group these words into meaningful phrases quickly and automatically (Kelly, 2003). Accuracy, as the term implies, refers to the number of words read correctly. Automaticity is the ability to recognize words rapidly, enabling the reader to focus more on comprehension of the text. “Although it is not the only component of the reading process, word identification has to be automatic enough to allow comprehension to take place” (Rego, 2006, p. 152). Based on their review of the literature, Stahl and Hiebert (2006) make the case that word recognition is central not only in defining the reading performance of beginning readers but also in accounting for a significant proportion of the variance in performance among challenged readers. In sum, both accuracy and automaticity at the word level are central to developing the fluency necessary for comprehension. In addition, we know that good readers are able to use metacognitive strategies to enhance or check their comprehension (Kelly, Albertini, & Shannon, 2001; Strassman, 1997) and that they have had early, meaningful exposure to language and print in the home and at school (Toscano, McKee, & Lepoutre, 2002).
In a review of reading research undertaken with students who are deaf or hard of hearing between 1963 and 2005, Luckner and Handley (2008) conclude that this body of research only tentatively supports certain top-down evidence-based practices, with additional research needed in bottom-up reading processes, such as decoding and vocabulary recognition (p. 31). Although this review covered research on children and youth between 3 and 21 years of age, comparatively little research has been conducted on the reading processes of older D/HH readers and even less on their bottom-up reading processes. That said, there is substantial evidence that, as with children, adults with reading disabilities have difficulties at the level of word recognition and the processes that underlie this skill (Bruck, 1990; Scarborough, 1983; Stanovich, 2000). In a recent meta-analysis of the literature on reading disabilities in adults, Swanson and Hsieh (2009) reported that “results support the notion that the primary processes that underlie reading disabilities in children are the same as those in adults” (p. 1384). More specifically, their review indicated that reading disabilities in this population are related to phonological processing, with processes related to verbal memory, vocabulary, and naming speed playing equally important roles. Thus, the focus of this article is on the nature of the bottom-up reading processes of college-age deaf readers experiencing difficulties in reading college-level texts and the extent to which these difficulties are related to word recognition.
The most widely held conceptualization of the reading process is now some form of interactive model (Gillon, 2004; McCardle & Chhabra, 2004; Pressley, 2006). In an interactive view of reading (Rumelhart, 1977; Stanovich, 2000), readers must integrate both micro (bottom-up) and macro (top-down) strategies to read efficiently and effectively. At the microlevel, readers must be able to match letters (and letter combinations) to sounds (or articulations of sounds) in order to retrieve the memory (and meaning) of a word. Good readers must also be able to make inferences, ask themselves questions, and monitor comprehension while reading. In an “interactive-compensatory model” of reading, an additional assumption is that “deficiencies at any level in the processing hierarchy can be compensated for by a greater use of information from other levels” (Stanovich, 1984, p. 15). This may be particularly important in an investigation of college-age readers, who can be assumed to have learned to draw on different sources in constructing meaning while reading.
Such a view has particular relevance when thinking about the word recognition strategies employed by D/HH readers because the phonological processing route presents singular challenges in the presence of a hearing loss (Paul, 1998; Trezek, Paul, & Wang, 2009)—an issue that has been at the center of a long-standing and ongoing debate in the field (see Allen et al., 2009; Paul, Wang, Trezek, & Luckner, 2009). A review of the research (Marschark & Harris, 1996) indicates that, although there may be a delay, D/HH readers do develop phonological awareness, sometimes with the support of visual and kinesthetic-tactile strategies such as contact sign, speechreading/mouthing, fingerspelling, cued speech, and visual phonics (Mayer, 2007; Paul, 2003). Although the efficacy of these alternative coding strategies continues to be explored, research evidence consistently indicates that deaf readers who use phonological coding strategies while reading tend to be better readers than those who do not (see Trezek, Wang, & Paul, 2009; Wang et al., 2008 for discussions). Thus, given the importance of phonological processing and the likelihood that college-age D/HH readers employ a diverse array of coding and word recognition strategies, we chose to examine bottom-up processes by means of a miscue analysis.
Miscue analysis is an assessment tool that measures oral reading accuracy at the word level by identifying when and how the reader deviates from the text while reading aloud (Goodman, 1969; Goodman & Goodman, 1977). An analysis of these miscues, or deviations, provides information on the cueing systems—graphophonic, syntactic, and semantic—that are used for word recognition (Rhodes, 1993). According to Goodman, these miscues should not be viewed as errors but rather “examined to illuminate the reader's thinking process during reading. As such, he referred to the examination of miscues as windows into a reader's mind” (Tracey & Morrow, 2006, p. 59).
Counting the number of these miscues also provides important information. “When a reader misses a sizeable proportion of words, comprehension will suffer. A critical question for instruction as well as assessment pertains to the size of the corpus of words that are recognized incorrectly, before comprehension breaks down” (Stahl & McKenna, 2006, p. 412). In other words, readers begin to lose meaning as deviations from the text increase (Leslie & Osol, 1978), especially if these miscues change the meaning of the text.
Given the rich information it provides, miscue analysis has become a widely used tool for assessing hearing readers in both elementary and middle school. It has also been used, with modifications, with school-aged deaf readers (Chaleff & Ritter, 2001; Gennaoui & Chaleff, 2000; Luft, 2009), but not, to our knowledge, with older deaf readers who might also be having difficulty decoding and understanding text at the word and phrase level. The goal of this study was to determine whether the results of a miscue analysis would provide useful information for instructors about the word recognition strategies and comprehension abilities of these students. To this end, we present qualitative reading profiles for five students who together represent the range of communication preferences and reading styles found in our sample of college-age readers.
Ten deaf college students at a technical university in the northeastern United States volunteered to take part in this study. Nine of the ten were associate degree students, and one was enrolled in a bachelor's program. The two women and eight men ranged in age from 19 to 25, and all had moderately severe-to-profound hearing losses. All were the children of hearing parents, and only one had deaf siblings. All reported that English was the only spoken language used in the home. Reported communication in the home ranged from ASL only to simultaneous communication to speech only. When asked for their preferred mode of communication “most of the time,” four preferred ASL only, five preferred English-based signing and speech together, and one preferred English-based signing with English mouth movements and no voice.
The nine associate degree students were recruited from a developmental reading course designed for students not yet able to read college-level materials. According to a 1988 nationally representative sample, the mean ACT Reading score for college-bound grade 12 students was 18.13 (SD 4.56) (ACT Technical Manual, 2008). The mean score for the nine associate degree students in this study was 14.66 (range 12–21; SD 2.83). The deaf bachelor's degree student, who received a 34 on the ACT Reading subtest, was recruited to pilot the methodology; we wanted to make sure that this qualitative procedure would be appropriate for this age group (see Table 1 and discussion below).
Group assessments of reading
|ID|ACT Reading subtest^a|NTID Reading Test^b (grade equivalent)|
|Jordan|14|76 (below 7)|
|Clive|14|118 (8 to 9)|
Because scores of 12 and below are at chance levels (Dowaliby et al., 1997), all entering students who score at 14 or below are required to take the school's own reading and writing placement measures. So, for example, a student who scores 144 or above on the National Technical Institute for the Deaf (NTID) Reading Test is judged to be a proficient college-level reader and not required to take courses in the developmental course sequence. Students’ scores on the NTID Writing test ranged from 39 to 65 (mean 47.22; SD 10.97). A student scoring 68 or above on the writing test is judged to be proficient and ready to begin the university's first year writing sequence.
In order to examine the decoding (word recognition) strategies and assess the comprehension levels of these ten deaf college students, we selected the Qualitative Reading Inventory (QRI)-4 (Leslie & Caldwell, 2006). This inventory is grounded in current reading theory and research and provides normative data on a hearing, school-age population (Leslie & Caldwell, 2006, pp. 440–478). In describing the QRI-3, Harp (2006) writes that “it is clearly one of the best-documented, most thoughtfully conceived, and most complete informal reading inventories available” (p. 240). We believed that using such an inventory with D/HH college students would provide insights into the nature of these older learners' reading processes. Piloting the complete procedure with the deaf bachelor's degree student indicated that it was appropriate for this age group. It was easy to administer, and the results indicated that he was indeed a proficient reader.
The diagnostic sequence for each student was videotaped and took less than an hour per participant. Protocols were followed as outlined in the QRI-4. Screening word lists were used to identify appropriate grade-level passages for the assessment (from pre-primer to high school) with students being asked to read words from the list “aloud” in whatever modes were comfortable for them—speech alone, sign alone, or some combination. When a student read a list with 80%–85% accuracy (defined as instructional level in the QRI-4), an associated grade-level text was chosen for the passage administration.
Before students read a passage, they were asked several general content questions to establish a context for the text. They were then asked to read the passage “out loud” in their preferred manner, as they had done for the word lists. The students were then asked to retell what they had read including as many key points and supporting details as they could remember. Finally, they were asked to respond to a series of both explicit and implicit comprehension questions.
Seven of the ten students returned for a second hour and read a different passage. The “pilot” student was not asked to return, and though repeated attempts were made to solicit the participation of the remaining two, they were unable to take part because of workload and scheduling conflicts. In the second session, two new tasks followed the comprehension questions: the students were asked to complete a CLOZE passage (from the word “closure”) based on the text and to retell what they had read in writing, including as much detail as possible. These additional tasks are commonly used comprehension measures and were added to provide a more robust comprehension battery.
All the video recordings were transcribed for purposes of scoring and analysis. All portions of the video recordings that were signed without voice were translated into English and transcribed by a native user of American Sign Language and certified sign language interpreter. Because the translator was told about the context of the tapings but not about the content of the passages or questions, the transcripts were checked for ambiguities in the translation. For example, the participants often used the ASL sign, MACHINE, to indicate the English, machine, engine, and locomotive. Because such variations would be scored as substitution miscues, we reviewed the tapes and in instances where mouth movements, context referents, or other cues indicated a specific term, we modified the transcripts.
What follows are representative excerpts from the transcripts to illustrate the nature of the miscue analysis and four other diagnostic procedures used in this study. Figure 1 shows an example of how the transcript of Clive's “read aloud” (of a fourth-grade passage) was scored for miscues. Lines from the passage are paired with lines from the read-aloud transcripts (in italics). Miscues are underlined.
Miscues were scored as per the QRI-4 manual (pp. 72–82). A total accuracy count was done in which any deviation from the printed text is counted as a miscue. This includes insertions, omissions, substitutions, reversals, and self-corrections. Although there are concerns with using a total accuracy count versus a count of only those miscues that distort or change meaning, often referred to as a total acceptability count (see Harp, 2006, p. 240), we decided to use total accuracy because there were so few instances of miscues that did not interfere with meaning (e.g., reading “Maria” as “Mary”). Conceptually inaccurate renditions of a word or phrase were scored as miscues. For example, in the section describing the race (“At first the horse pulled ahead. Then the train picked up speed and soon it was neck and neck with the horse.”), Jeffrey signed PULLED + AHEAD and NECK + IN + NECK; that is, he signed the phrases literally, word for word. In the few instances when a word was fingerspelled, it was noted on the transcript with a “+” or “−.” In the whole passage, Ginny fingerspelled only one word, “ton,” in the sentence, “It [the engine] was small and weighed barely a ton.” The notation “fs+” on the transcript meant that she appeared to understand the meaning of T-O-N.
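As a rough illustration only (not part of the QRI-4 protocol; the function name and sample sentence are hypothetical), the total accuracy count can be sketched as a word-level alignment in which every insertion, omission, and substitution is tallied as a miscue. Reversals and self-corrections require examiner judgment and are beyond a flat alignment like this one:

```python
from difflib import SequenceMatcher

def total_accuracy_count(passage_words, readaloud_words):
    """Count every deviation from the printed text as a miscue
    (insertions, omissions, substitutions). A simplified sketch:
    the QRI-4 also records reversals and self-corrections, which
    this flat word alignment cannot distinguish."""
    aligner = SequenceMatcher(a=passage_words, b=readaloud_words)
    miscues = 0
    for tag, i1, i2, j1, j2 in aligner.get_opcodes():
        if tag == "replace":      # substitutions
            miscues += max(i2 - i1, j2 - j1)
        elif tag == "delete":     # omissions ("unread" words)
            miscues += i2 - i1
        elif tag == "insert":     # insertions
            miscues += j2 - j1
    return miscues

# Hypothetical example: one omission ("the") and one insertion ("the")
passage = "then the train picked up speed".split()
reading = "then train picked up the speed".split()
```

Under this scheme, the large omission counts reported below for students who signed translations rather than the words on the page would fall out of the "delete" branch.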
A challenge when scoring the reading of students who used signs was that there were many instances when they did not sign the words on the page but rather attempted to sign a translation of what they believed was meant. These “unread” words were scored as omissions, accounting for the large number of miscues recorded for some of the students. This approach to scoring was taken because one of the goals of the study was to determine whether this translation strategy was effective in making meaning from the text.
Retells (see Figure 2) were scored using the Retelling Scoring Sheets from the QRI-4, which are composed of the important ideas contained in a passage. For example, the fourth-grade passage “Early Railroads” has a potential score of 57 idea units. The examiner places a check next to each explicit idea listed on the scoring sheet that was recalled by the student. Students are not expected to remember the exact wording of the text. Synonyms and paraphrases are acceptable and it is up to the examiner to determine whether the student's recall matches the meaning of the text. “Although the retelling is not used to determine independent, instructional and frustration levels, it can provide valuable information with implications for instruction” in the areas of text structure and sequence, identifying main ideas and supporting details and accuracy (Leslie & Caldwell, 2006, p. 86).
Comprehension questions (Figure 3) are scored as per the suggestions provided in the manual with each question getting one point if it is answered correctly. In the case of explicit questions, the answer must come from the passage. Implicit questions may not be scored as correct “if the answer is not related to a clue in the passage …. if the answer comes from prior knowledge only, it is not counted as correct” (Leslie & Caldwell, 2006, p. 88). This was an important consideration in scoring the responses of college-age students who were reading passages at levels intended for much younger readers (i.e., fourth grade). The assumption is that these older learners come to the text with a depth of background knowledge that surpasses that of the younger readers for whom these passages were designed and therefore may answer questions based on prior knowledge rather than what has been gleaned from a reading of the text.
Scoring of the CLOZE passage (Figure 4) was done as per the protocol described in McKenna and Stahl (2003). Credit was given only for verbatim responses (i.e., for the exact word that was deleted in each case), although minor misspellings were counted as correct (e.g., scralet for scarlet). Verbatim scoring is preferred because it is more objective, easier to grade, and correlates well with scores based on accepting synonyms or other reasonable responses. Most importantly, using any other approach makes it “nearly impossible to interpret the results” (McKenna & Stahl, 2003, p. 173). The percentage of correct answers was calculated and then interpreted using the following guide: Independent Level (above 60%), Instructional Level (40%–60%), and Frustration Level (below 40%).
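As an illustration only (the function name, the 0.8 similarity cutoff standing in for "minor misspellings," and the sample words are our assumptions, not part of the McKenna and Stahl protocol), the verbatim scoring rule and level guide might be sketched as:

```python
from difflib import SequenceMatcher

def cloze_level(deleted_words, responses):
    """Verbatim CLOZE scoring sketch: credit only the exact deleted
    word, but let minor misspellings pass (approximated here with a
    character-similarity threshold; the protocol leaves 'minor' to
    examiner judgment)."""
    def is_correct(target, given):
        target, given = target.lower(), given.lower()
        if target == given:
            return True
        # e.g., "scralet" for "scarlet" passes this 0.8 cutoff
        return SequenceMatcher(a=target, b=given).ratio() >= 0.8

    n_right = sum(is_correct(t, g) for t, g in zip(deleted_words, responses))
    pct = 100 * n_right / len(deleted_words)
    if pct > 60:
        return pct, "Independent"
    if pct >= 40:
        return pct, "Instructional"
    return pct, "Frustration"
```

The thresholds mirror the guide above: above 60% Independent, 40%–60% Instructional, below 40% Frustration.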
Scoring the written retells (Figure 5) followed the same procedure as that described for the oral retells in the previous section.