Advancing the design and use of computer-assisted language learning (CALL) activities is a key concern for researchers. As Chapelle (1997, pp. 19-22) explains, critical questions need to be answered about how CALL can be used to improve instructed second language acquisition (SLA). Two such questions are:
Chapelle (1997) describes how the answer to the second question is dependent upon beliefs concerning what types of language use are expected to be beneficial for second language development. For those espousing an interactionist view of SLA (Lantolf, 2000; Lightbown & Spada, 1999; Long, 1981; Pica, 1996; Van Lier, 1996), there is an assumption that L2 acquisition is facilitated by learners' interaction in the target language, thereby providing opportunities to comprehend message meaning. Accordingly, to ensure that L2 tasks meet such assumptions, and to facilitate SLA, researchers need to specify ideal observable features of learner language, such as signals that focus attention on language and features that may elicit a repetition or an expansion of previously acquired language.
In line with Chapelle's recommendations, a key concern for research is how these ideal features and appropriate tasks can be incorporated into an experimental reading programme. This concern is relevant due to the two goals of the current course, namely:
Throughout this paper, focus is placed on the first exercise that the students meet in the course, a reading comprehension exercise. It was hypothesised that increased interaction could be facilitated by requiring students to collaborate in pairs at a single computer (Beatty & Nunan, 2004; Stevens, 1992), and by providing Elaborative feedback in the form of hints to promote discussion as students self-correct errors. This type of feedback was provided as an alternative to Knowledge of Correct Response (KCR) feedback, which replicates traditional paper-based answer sheets by providing correct answers. It was also hypothesised that increased interaction through pair work with Elaborative feedback would be an effective method for promoting comprehension of a reading text. Results are analysed both quantitatively and qualitatively.
Context of the Study
During the second term of the reading course (see Murphy & Imrie, 2003, for a description), students are encouraged to choose from a series of activities and create, with guidance, an individualized syllabus. Students select, complete and then check their answers to the exercises with answer papers provided by the teachers. However, this procedure has proven to be problematic on the paper-based course for the following reasons:
In a bid to overcome these challenges, this research focuses on the contribution that computer-mediated feedback can make. A key question that arises is: how and what kind of feedback maximizes comprehension? It is towards this issue that the following discussion is directed.
INTERACTION IN THE READING PROCESS
CALL researchers have turned to the work of interactionist SLA researchers when evaluating the quality of learner language. As Chapelle (1997) explains, "a good interaction is hypothesized to occur when the normal interactional structure has been modified because the learner has requested, for example, a repetition, clarification or restatement of the original input" (pp. 25-26). This modified interaction is thought to be good because it can both promote the negotiation of meaning of the input (Beatty, 2003; Chapelle, 2001; Long, 1985; Nunan, 1993; Pica, 1994) and contribute greatly to language acquisition (Ellis, 1998; Krashen, 1985; Van den Branden, 2000). From a reading proficiency perspective, Larsen-Freeman and Long (1991) note:
Following research that points to the importance of comprehensible output to the acquisition of the target language (Chapelle, 1997; Swain, 1985), a conscious effort was made in this study to investigate the effects of feedback designed to promote negotiation of meaning, form and / or content in situations similar to those described by Swain and Lapkin (1995):
Although the importance of both negotiation of meaning and comprehensible output is well documented, few studies have investigated their effects on reading comprehension (Van den Branden, 2000). Nevertheless, the design of this study was informed by the research that was available, specifically by studies that point to the usefulness of promoting reading proficiency through interaction (Grabe & Stoller, 2002; Shanker & Ekwall, 2003). Although studies such as Eldredge and Butterfield (1986), Koskinen and Blum (1986) and Nes (2003) were carried out in non-computer-mediated environments, they provide positive implications for promoting interaction through paired online reading activities.
Quality Student Interaction Around Computers
As with non-computer-mediated environments, it is important to consider the interaction that is generated in computer-based tasks (Beatty, 2003; Stevens, 1992), and the type of interaction that is desirable for promoting comprehension, learning and language acquisition around computers. In Fisher's (1992) study, students working on tutorial software exhibited the same IRF (Initiation, Response, Follow-up / Feedback) discursive structure found in teacher-led classroom talk. However, researchers have attempted to increase levels of interaction between students in various ways. For example, Wegerif and Mercer (1996) proposed a transformation to an IDRF (Initiation, Discussion, Response, Follow-up / Feedback) structure by including a discussion stage. Furthermore, software can also be developed to replicate techniques which teachers use to stimulate interaction, notably: (a) eliciting knowledge from students, (b) responding to what students say (confirmations, repetitions, elaborations and reformulations) and (c) describing significant aspects of shared experiences ('we' statements) (Mercer, 2004).
When considering the quality of interaction around computers, two key features are particularly desirable: (1) learners need to be actively involved (Van den Branden, 2000); and (2) learners need to produce Exploratory talk in which partners engage critically and constructively with each other's ideas (Mercer, 1995). Regarding the former, Mercer (2004) explains how it is helpful for the analyst to perceive the degree to which students in joint activities are: "(a) behaving cooperatively or competitively and (b) engaging in the critical reflection or in the mutual acceptance of ideas" (p. 146). As for promoting Exploratory talk among learners, Wegerif, Mercer, & Dawes (1998), having been influenced by findings of research into effective collaborative learning (summarized in Wegerif & Mercer, 1996), note the importance of: sharing relevant information, reaching agreement, expecting reasons and challenges, discussing alternatives and encouraging peers. A key concern for research, however, is how quality interaction and reading comprehension can be promoted through computer-based activities.
Reading, Computers and the Internet
Although models and guidelines recommending pedagogically sound practices for incorporating Internet-based materials exist (Berry, 2000; Brandl, 2002; Chun & Plass, 2000), such examples remain limited in number. Likewise, guidelines for offering a reading course via the Internet (Caverly & MacDonald, 1998; Jones & Wolf, 2001; Mikulecky, 1998) are similarly few. However, evidence exists to support the assumption that integrating reading with computer-mediated support improves ESL students' reading skills (Chun & Plass, 1996; Hong, 1997; Stakhnevich, 2002; Williams & Williams, 2000). A common theme in such studies is that learners benefit from facilities offering support and assistance in web learning environments, for instance, online dictionaries, glosses, graphics, blogs, bulletin boards or chat rooms. As a further example of a potentially advantageous facility that can be offered through CALL, the following section comprises a discussion of the merits of computer-mediated feedback.
One area in which computers are playing an increasingly important role in SLA concerns the identification of students' errors and the subsequent provision of appropriate feedback (Brandl, 1995; Clark & Dwyer, 1998; Tsutsui, 2004). Buscemi (2004) notes how, according to cognitive learning models, learning can take place only after comprehension, thus emphasizing the need for targeted corrective feedback. However, many software products opt for a generic form of feedback and rarely go beyond indicating whether an answer to a question is correct or incorrect (Sales, 1993). Beatty (2003) explains:
Clariana (2000), who has published extensively on the topic of computer-mediated feedback, provides a succinct summary of the traditionally investigated types of feedback in CALL:
Knowledge of response (KR) that states "right" or "wrong" or otherwise tells learners whether their response is correct or incorrect; Knowledge of correct response (KCR) that states or indicates the correct response; and Elaborative feedback that includes several more complex forms of feedback that explains, directs, or monitors (Smith, 1988). Elaborative feedback includes the forms listed below:
As can be seen, several forms of feedback exist. Accordingly, numerous researchers have tried to identify the most effective forms in different contexts. This issue is addressed in the next section.
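The operational difference between these feedback types can be sketched in a few lines of code. The function and data structure below are purely illustrative assumptions, not drawn from the study's actual software or any of the cited systems:

```python
# Hypothetical sketch of the three feedback types summarized by Clariana (2000).
# All names here are illustrative; the study's own software is not shown.

def give_feedback(feedback_type, answer, question):
    """Return the on-screen message for a learner's answer to one question."""
    if answer == question["correct"]:
        return "Correct"
    if feedback_type == "KR":
        # Knowledge of response: right/wrong only
        return "Incorrect"
    if feedback_type == "KCR":
        # Knowledge of correct response: reveal the answer
        return "The correct answer is " + question["correct"]
    if feedback_type == "Elaborative":
        # Elaborative: a hint that redirects or restates, never the answer itself
        return question["hint"]
    raise ValueError("Unknown feedback type: " + feedback_type)

question = {
    "correct": "b",
    "hint": "Reread paragraph 3 and look again at the closing time mentioned there.",
}
print(give_feedback("Elaborative", "a", question))  # displays the hint, not the answer
```

The sketch makes the key design contrast concrete: KR and KCR terminate the learner's engagement with a verdict or an answer, whereas Elaborative feedback withholds the answer and redirects attention back to the text.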
How Effective is Feedback?
It is difficult to say which type of feedback is best for SLA as results are mixed (Clariana, 2000; Mory, 1994). For example, the findings of Bangert-Drowns, Kulik, Kulik, and Morgan (1991) suggest that Answer-until-correct (AUC) and Elaborative feedback are the most effective:
However, following a review of 30 studies, Clariana's (1993) findings, which are consistent with both Schimmel's (1983) meta-analysis of 15 studies and Kulhavy and Wager's (1993) research, show that feedback is more effective than no feedback:
In addition to investigating the most effective type of feedback, various other aspects of computer-mediated feedback have also been the focus of research. For example:
A key question is whether computer-based feedback can offer advantages over traditional paper-based answer papers. Of particular interest, therefore, is the fact that Nagata (1996) found ongoing intelligent computer feedback to be more effective than simple workbook answer sheets for developing grammatical skill in producing Japanese particles and sentences. However, Clariana (2000, p. 2) draws on research to show how, in contrast to Nagata (1996) and Bangert-Drowns et al. (1991), Elaborative forms of feedback often produce no significant improvement over KR feedback despite requiring considerable development and implementation cost (Merrill, 1987). Nevertheless, Ferris (2003) explains how indirect feedback, or Elaborative feedback from a CALL perspective, is generally thought to be conducive to long-term student development; it forces students to think about their own errors and self-correction, thereby leading to: " … increased student engagement and attention to forms and problems" (p. 52). Indeed, de Bot (1996) explains how students need to be active when producing language to discover what they can and cannot do. Noticing a problem, possibly through feedback, may be the incentive learners need to reengage with information in the input, thereby providing an opportunity for learning. Therefore, by placing the onus on the students to identify and correct their own errors, it would seem that the potential for interaction and negotiation of meaning, form and / or content is increased.
THE NEED FOR RESEARCH
When providing feedback, what messages should be supplied to the students on the screen? If feedback is presented in the form of KR or KCR feedback, it is simple to envisage what is displayed (a message such as right or wrong, a highlighted answer or a mark indicating the correct answer); however, with Elaborative feedback, what is it exactly that should be displayed to promote interaction and comprehension? This is the crucial question for anyone creating software to provide such feedback.
Among the extensive literature related to the theme of feedback, the number of studies researching Elaborative feedback is relatively small; however, notable examples exist (Brandl, 1995; Clark & Dwyer, 1998; Mory, 1994; Nagata, 1996; Van der Linden, 1993). Even more difficult to find are examples of guidelines for its presentation. For example, Van der Linden (1993) notes: "Long feedback (exceeding three lines) is not read and for that reason is not useful" (p. 56). Van der Linden concludes that: " … feedback, in order to be consulted, has to be concise and precise." Mory (1994) recommends that isolated feedback should be avoided as it may provide little context for revision of an erroneous response. Consequently, Mory advises designers to: "… include the learner's answer and other alternative choices on the same screen as the feedback" (p. 287). As Chapelle (2001) explains, therefore, further research is vital: "What is needed are theoretically and empirically based criteria for choosing among the potential design options and methods for evaluating their effectiveness for promoting learners' communicative L2 ability" (p. 2). Accordingly, the following study comprises an investigation into the effectiveness of Elaborative feedback, with the aim of identifying guidelines for researchers creating software to provide such feedback.
The literature shows that feedback has the potential to promote comprehension of a reading text; however, as to which type of feedback is more effective, results have been varied. The literature also shows that interaction between students can promote comprehension; however, both (a) what kind of interaction is generated through pair work as a result of Elaborative feedback, and (b) whether the interaction is sufficient to promote comprehension need investigating. The effect of a student's English proficiency level also needs to be determined. Stemming from this discussion, the following hypotheses were formed:
Hypothesis 1: Elaborative feedback will be more effective for promoting comprehension of the reading text than KCR feedback.
Hypothesis 2: Pair work will be more effective for promoting comprehension of the reading text than individual work.
Hypothesis 3: Students with a higher level of English proficiency will demonstrate higher levels of comprehension of the reading text than those with a lower level.
Hypothesis 4: Students studying in pairs and receiving Elaborative feedback will demonstrate higher levels of comprehension of the reading text than other students.
Hypothesis 5: Students with higher proficiency receiving Elaborative feedback will demonstrate higher levels of comprehension of the reading text than other students.
Hypothesis 6: Students with higher proficiency studying in pairs will demonstrate higher levels of comprehension of the reading text than other students.
Hypothesis 7: Students with higher proficiency studying in pairs and receiving Elaborative feedback will demonstrate higher levels of comprehension of the reading text than other students.
The participants are first-year English majors at Kanda University of International Studies in Japan, who are streamed according to a test of global proficiency (Bonk & Ockey, 2003). In the 2005-6 academic year, 407 students (15 classes) were assigned to one of four bands: advanced (three classes), upper intermediate (four classes), intermediate (four classes) or lower intermediate (four classes). For the purpose of this study, the top two bands were grouped together and the bottom two bands were also grouped together.
Materials developed for this study were trialed with 162 of the 407 first-year students. A further 14 students were late or absent for lessons, and the main study was conducted with the remaining 231 students. Six of these 231 students (two pairs from the Elaborative feedback condition and one pair from the KCR feedback condition) scored 100% on the first comprehension exercise on the first attempt. Because these students made no errors, and therefore received no feedback to promote comprehension (the focus of this research), their records were omitted from the study. The statistical analysis was therefore performed on the data from the remaining 225 students. Videos were recorded of 12 volunteer students (six pairs): (a) four pairs were recorded during the trials and (b) two pairs were recorded during the main study. Qualitative data from the transcripts of these recordings are used to support or reject the various research hypotheses. All students agreed to participate in the study.
Materials: Reading Materials
Materials comprised one reading text (see Appendix A for an excerpt of the text) and two multiple-choice comprehension exercises, each with 15 questions (see Appendices B and C for example questions). Therefore, the maximum score on each exercise was 15 points. While the questions in the two comprehension exercises were different, the same content points were covered by corresponding questions.
Materials: Feedback Treatment
The methodology for displaying feedback is based upon the results of a study undertaken by Murphy (2005), which investigated the types of errors that students made in response to multiple-choice comprehension questions about a reading text and how students changed their answers following Elaborative feedback with an Answer-until-correct methodology. Students were allowed to change their answers until they answered correctly, gave up or ran out of time. Each question was associated with one piece of Elaborative feedback. Whenever a question was answered incorrectly, the corresponding Elaborative feedback was supplied, irrespective of how many times the answers had been checked. Based upon interviews with students and an analysis of the way in which answers were changed as a result of feedback, recommendations for displaying Elaborative feedback are summarised as follows:
As can be seen, the Elaborative feedback comprises no information that is not already included in the original text. Instead, it: (a) redirects students to certain parts of the text, and / or (b) restates or rephrases questions and / or text. The aim is not only to provide some support and guidance to students during their studies, but also to encourage them to reengage with the materials by rereading and then interacting with a partner (if in a pair).
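The Answer-until-correct checking cycle described above can be sketched as follows. All identifiers and sample questions are hypothetical; only the behaviour (one fixed hint per question, re-shown on every incorrect check) follows the description in the text:

```python
# Hypothetical sketch of the Answer-until-correct cycle: each question carries
# one piece of Elaborative feedback, which is re-shown on every incorrect check,
# irrespective of how many times the answers have already been checked.

def check_answers(answers, questions):
    """One click of 'check answers': return hints for the incorrect items."""
    return {
        qid: q["hint"]
        for qid, q in questions.items()
        if answers.get(qid) != q["correct"]
    }

questions = {
    "Q1": {"correct": "a", "hint": "Look again at paragraph 2."},
    "Q2": {"correct": "c", "hint": "What time do shops close in England?"},
}

# First check: Q2 is wrong, so its hint is displayed.
print(check_answers({"Q1": "a", "Q2": "b"}, questions))
# Second check after self-correction: no hints remain, so the exercise is complete.
print(check_answers({"Q1": "a", "Q2": "c"}, questions))
```

In a classroom deployment the loop would simply repeat until the returned hint dictionary is empty, the students give up, or time expires, as described above.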
After an introduction to the lesson (15-20 minutes), students were given 40 minutes to read the text and complete the first multiple-choice comprehension exercise. Students were randomly selected to undergo different treatments (see Table 1 below for the descriptive statistics):
To investigate the effects of these different treatments (English proficiency level, Manner of study, and Type of feedback) on the comprehension of the text during the first comprehension exercise, all students were given 20 minutes to complete a second comprehension exercise related to the same text. This time, however, all students received KCR feedback. The score on this second exercise was the dependent variable. All input data were stored in a database and analyzed quantitatively. Furthermore, transcripts of the video sessions were written by the students themselves. The transcripts were then checked and formatted by the researcher before being analyzed qualitatively. The results and their implications are discussed below.
Scores on the second comprehension exercise were stored in the computer log. A three-way ANOVA was then performed. Results are listed below in Table 1.
A quantitative analysis of the results in this study shows that there is no support for Hypothesis 1, as the main effect of Type of feedback was not statistically significant (F(1,217)=0.01, p>.05). Elaborative feedback was therefore found to be as effective as KCR feedback: students receiving Elaborative feedback (M=10.39, SD=2.00) scored slightly, but not significantly, higher on the second comprehension exercise than those receiving KCR feedback (M=10.33, SD=2.15). The quantitative analysis also shows that there is no support for Hypothesis 2, as the main effect of Manner of study was not statistically significant (F(1,217)=1.19, p>.05); individual work (M=10.24, SD=2.21) was therefore found to be as effective as pair work (M=10.46, SD=1.95). In contrast, the main effect of English proficiency level was found to be statistically significant (F(1,217)=29.19, p<.05). Hypothesis 3 is therefore supported, as higher proficiency students (M=11.04, SD=1.93) performed significantly better than lower proficiency students (M=9.66, SD=1.98) on the second comprehension exercise.
The interaction between Type of feedback and Manner of study (see Figure 1 below) was found to be statistically significant (F(1,217)=4.93, p<.05). Therefore, Hypothesis 4 is supported as it made a difference to scores on the second comprehension exercise whether students received Elaborative or KCR feedback, and whether they worked individually or in pairs during the first comprehension exercise. When receiving Elaborative feedback, students who worked in pairs (M=10.77, SD=1.96) scored higher than those who worked individually (M=9.89, SD=1.95). For those receiving KCR feedback, students who worked in pairs (M=10.16, SD=1.91) scored lower than those who worked individually (M=10.54, SD=2.39).
Hypothesis 5 is not supported as the interaction between Type of feedback and English proficiency level was not statistically significant (F(1,217)=2.96, p>.05). In other words, the effect of Type of feedback on scores on the second comprehension exercise did not differ significantly between proficiency levels. When receiving Elaborative feedback, higher proficiency students (M=10.83, SD=2.03) scored higher than lower proficiency students (M=9.94, SD=1.88). For those receiving KCR feedback, higher proficiency students (M=11.23, SD=1.83) scored higher than lower proficiency students (M=9.39, SD=2.06).
Hypothesis 6 is also not supported as the interaction between English proficiency level and Manner of study was not statistically significant (F(1,217)=2.05, p>.05). It did not make a significant difference to scores on the second comprehension exercise whether higher or lower proficiency students worked alone or in pairs. Higher proficiency students scored higher when working alone (M=11.12, SD=2.00) than when working in pairs (M=10.98, SD=1.88). Lower proficiency students scored lower when working alone (M=9.31, SD=2.06) than when working in pairs (M=9.94, SD=1.90).
Finally, Hypothesis 7 is not supported as there was no statistically significant interaction between Type of feedback, Manner of study and English proficiency level (F(1,217)=0.91, p>.05). For Elaborative feedback, the highest score in this study was achieved when higher proficiency students worked in pairs (M=10.94, SD=2.24) and the lowest score was when lower proficiency students worked alone (M=9.13, SD=1.85). For KCR feedback, the highest score was achieved when higher proficiency students worked alone (M=11.45, SD=2.15) and the lowest was when lower proficiency students worked in pairs (M=9.31, SD=1.93).
To summarise this quantitative analysis, significant results were obtained for: (1) the main effect of English proficiency level and (2) the interaction between Manner of study and Type of feedback. To help clarify the interpretation of these results, focus is now placed on examples of the recorded interactions, which are analysed qualitatively.
The desirability of quality interaction between students around a computer was discussed earlier. Therefore, in order to provide a suitable context for interaction, the volunteer students were asked to work in pairs and the sessions were recorded and transcribed. To investigate the degree to which students are: "(a) behaving cooperatively or competitively and (b) engaging in the critical reflection or in the mutual acceptance of ideas" (Mercer 2004, p. 146), this study employs the notion of Exploratory talk (Mercer, 1995). Furthermore, to investigate the quality of the resulting interactions and the degree to which students engage in the activities, reference is also made to Wegerif et al.'s (1998) pragmatic ground rules for promoting Exploratory talk and Wegerif and Mercer's (1996) IDRF (Initiation, Discussion, Response, Follow up / feedback) structure. Examples of interactions illustrating the effects of the different kinds of feedback are discussed below.
A Typical Scenario – Non-Elaborative Feedback
A common observation in reading classes is that students often copy answers to comprehension questions directly from answer sheets without actually considering why their own answers are different and / or incorrect. The following excerpt highlights this typical dilemma when two students (Student 1 & Student 2) receive non-Elaborative feedback, in this case, online. Having answered all the questions, Student 2 suggests that they should get the computer to check their answers. Following the feedback, Student 1 comments that it is only possible to check their answers once with KCR feedback. Having received the KCR feedback, the students made no attempt to self-correct any of their five errors and considered the exercise finished.
Interaction with Elaborative Feedback 1
The transcript below comprises part of the interaction between two students as they read and progress through the first comprehension exercise. Following the first check of their answers, Elaborative feedback (lines 403-405) provided by the computer refers students back to relevant paragraphs in the reading text for three incorrect answers. Students negotiate with each other and attempt to identify which questions are wrong. Students manage to identify one of the incorrectly answered questions (lines 439-440) and decide together to change one of their incorrect answers (line 443).
Interaction with Elaborative Feedback 2
In the next transcript, students interact as they discuss another potentially incorrect answer. The point of the question is for students to find out that it gets darker earlier in Japan than in England in the summer. As part of the interaction, students negotiate the meaning of fading light. They locate their second error (lines 552-556) and change their answer accordingly. However, both agree that they are not sure what the third incorrect answer is so they agree to check their answers once more (lines 568-572) and receive the next round of feedback.
Interaction with Elaborative Feedback 3
The next transcript comprises the interaction as students receive the second round of feedback. Having correctly changed two of their three incorrect answers, one error still remains. Following Elaborative feedback (lines 573-575), students are delighted to identify which question is answered incorrectly (line 578-580). They negotiate the meaning of the hint and also the information inferred in the text about what time shops usually close. The point of the question is for students to determine that shops generally close at 5:30pm in England, much earlier than in Japan. Students then correct their third error and check their answers for a third time (lines 580-596). Finally, all questions are correctly answered (lines 597-599).
The example transcripts of Elaborative feedback above exhibit numerous examples of quality interaction. Firstly, the transcripts contain examples of the IDRF structure as students: (1) initiate interaction following the feedback (lines 406-407 and 576-577), (2) discuss the feedback by trying to identify their errors (lines 412-418 and 578-595), (3) respond to the feedback by selecting different answers and then clicking to check them again (lines 443 and 596-597), and (4) receive further feedback following the changes which stimulates further interaction (lines 403-405 and 573-575). Throughout the examples, the question marks following several of Student 3's utterances, which were said with rising intonation, represent this student's requests for confirmation of his suggested answers (lines 415, 441, 540, 544, 555, 582, and 583). There are also examples of Exploratory talk in which students engage critically and constructively with each other's ideas (lines 440-443); for example, sharing information (lines 552-556), discussing alternatives (lines 439-442 and 582-583), disagreeing (lines 441-442), and seeking agreement as a pair (lines 415-416, 540-548 and 582-595). Further examples of language describing a shared experience as students worked together are highlighted by the inclusion of words such as "We" (line 557) and "Let's" (lines 559 and 568). In summary, therefore, the transcripts include several of the features that have been associated with quality interaction between students while working at a computer. It should be noted that the volunteers were all higher English proficiency level students. However, from the researcher's observation of all data collection sessions in the study, all students working in pairs were seen interacting with their partners. Whilst the content of the interactions varied among the pairs of students, quality interaction was observed on numerous occasions regardless of English proficiency level.
As per the departmental requirement, all students interacted in English.
In contrast to Bangert-Drowns et al.'s (1991) and Nagata's (1996) research, but consistent with findings by Clariana (2000) and Mory (1994), a quantitative analysis of the results in this study shows that the main effect of Type of feedback was not statistically significant. Accordingly, it could be argued that KCR feedback, with its cheaper development and implementation costs, is the more logical choice of computer-mediated feedback. However, results also show a statistically significant interaction between Manner of study and Type of feedback, with mean scores as follows: (1) Elaborative feedback: pair work 10.77 (SD=1.96), (2) KCR feedback: individual work 10.54 (SD=2.39), (3) KCR feedback: pair work 10.16 (SD=1.91) and (4) Elaborative feedback: individual work 9.89 (SD=1.95). As can be seen, students receiving Elaborative feedback scored higher when working in pairs and students receiving KCR feedback scored higher when working alone. It is clear from these results, therefore, that simply providing students with correct answers to questions in all situations may not necessarily be the most effective way to promote reading comprehension.
Following an interactionist view of SLA, the discussion above highlights the desirability of collaborative pair work in creating opportunities conducive to promoting quality interaction. It was no surprise, therefore, that the combination of Elaborative feedback and pair work resulted in the highest score; however, it was surprising that students scored lower when working in pairs with KCR feedback. As to why this result may have occurred, the transcript above of a typical scenario, in which non-Elaborative feedback is supplied, highlights the problem inherent in KCR feedback: students tend not to reengage with the materials once they have been supplied with the correct answers. For this reason, opportunities are often lost for rereading, self-correction, self-reflection, interaction and negotiation, all of which have been shown to facilitate reading comprehension and SLA.
From a purely quantitative perspective, it is true to say that there was no significant advantage of Elaborative over KCR feedback. Indeed, while not significant, it is interesting to note that the highest score (M=11.45, SD=2.15) was obtained by higher English proficiency students working alone with KCR feedback. Furthermore, the results also suggest that higher proficiency students do better working alone whereas lower proficiency students do better in pairs. However, in contrast to individual work and KCR feedback, the first example transcript of interaction with Elaborative feedback above exemplifies how the intriguing nature of the hints stimulated students into reengaging with the materials following Elaborative feedback. Quality interaction was also generated between students working in pairs. Therefore, from both the qualitative analysis and an interactionist view of SLA, the combination of pair work and Elaborative feedback is more desirable because of the opportunities it affords students to develop not only their comprehension of reading texts, but also their English language proficiency, through the quality interaction that is generated.
Considering the interaction between Manner of study and Type of feedback once again, it was also surprising that students working individually and receiving Elaborative feedback scored the lowest (M=9.89, SD=1.95). A possible explanation concerns the amount of time allotted to complete the two comprehension exercises and the amount and complexity of the feedback supplied. Certain students may have suffered from cognitive overload as they attempted to process the Elaborative feedback in the limited time available, hence the low average score. This may be particularly true for lower proficiency students (M=9.13, SD=1.85). Pair work may have alleviated this situation by providing the additional support of a partner with whom to process and discuss the Elaborative feedback, hence the higher average score (M=10.77, SD=1.96). Although the differences are not statistically significant, it is interesting to note that lower English proficiency students working in pairs with Elaborative feedback scored high (M=10.60, SD=1.65). This score was some 1.12 points higher than the next highest treatment for lower proficiency students (individual work with KCR feedback; M=9.48, SD=2.26) and almost equivalent to that of higher proficiency students working alone with Elaborative feedback (M=10.70, SD=1.75).
Based upon these results, and in accordance with an interactionist view of language learning, it is argued that the combination of pair work and Elaborative feedback, despite its higher development and implementation costs, is a preferable form of computer-mediated feedback in online multiple-choice reading comprehension exercises. The guidelines below for providing Elaborative feedback can therefore be used in future research studies. Moreover, and perhaps more importantly, the guidelines can serve as a reference point in the search for more effective methods of providing Elaborative feedback.
IMPLICATIONS FOR PROVIDING ELABORATIVE FEEDBACK
In summary, the following guidelines are proposed to promote reading comprehension through Elaborative feedback and interaction:
When studying alone, students should be provided with KCR feedback until more effective guidelines for providing Elaborative feedback to individuals are available.
Certain questions remain unanswered by this research:
With conflicting results regarding the effectiveness of different types of feedback reported in the literature, it was hypothesised that higher scores on a multiple-choice comprehension exercise would depend not on feedback alone but on a combination of feedback and other possible factors, namely the Manner of study of the students and their English proficiency level. In contrast to KCR feedback, which can be problematic for generating discussion, the focus was placed on Elaborative feedback. Taking an interactionist view of language learning, it was assumed that comprehension of the text, as measured by a multiple-choice exercise, would be promoted both by (1) Elaborative feedback in the form of hints to stimulate discussion, and by (2) the interaction generated through pair work as students discussed how to correct mistakes. Quantitative data analysis showed that the interaction between Type of feedback and the Manner of study of the students was statistically significant. Students scored highest on the second comprehension exercise when studying in pairs and having been provided with Elaborative feedback. When studying alone, students benefited from KCR feedback.
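The significant interaction reported here is, in essence, a difference of simple effects: the benefit of Elaborative over KCR feedback differs depending on the manner of study. A minimal sketch of this logic follows; the cell means used are hypothetical placeholders chosen to illustrate the crossover pattern, not the study's full dataset.

```python
# Hypothetical cell means (Manner of study x Type of feedback), chosen only
# to illustrate a crossover interaction: pairs score higher with Elaborative
# feedback, individuals with KCR feedback.
means = {
    ("pair", "elaborative"): 10.8,
    ("pair", "kcr"): 9.9,
    ("individual", "elaborative"): 9.9,
    ("individual", "kcr"): 10.6,
}

def simple_effect(manner, means):
    """Effect of Elaborative over KCR feedback within one manner of study."""
    return means[(manner, "elaborative")] - means[(manner, "kcr")]

# An interaction is present when the simple effects differ: here the
# feedback effect is positive for pairs but negative for individuals.
interaction = simple_effect("pair", means) - simple_effect("individual", means)
print(round(simple_effect("pair", means), 2))        # feedback effect, pairs
print(round(simple_effect("individual", means), 2))  # feedback effect, individuals
print(round(interaction, 2))
```

Whether such a difference of simple effects is statistically significant would, as in the study, be tested with a factorial analysis of variance on the full score data.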
Pica, Young, and Doughty (1987) explain how comprehension and confirmation checks, and clarification requests, serve as mechanisms for native speakers' modification of input when interacting with non-native speakers. Furthermore, they note how triggering repetition and rephrasing of input content plays a crucial role in comprehension. While in no way suggesting that computers can completely replace teachers and native speakers in all areas of teaching, the results here support the assumption that there are certain teaching roles that computers can perform to great effect. As shown in this study, with suitably written software and Elaborative feedback, incorrect responses triggered feedback providing rephrased and/or paraphrased questions and text. Students benefited from these hints about necessary corrections, which encouraged them to engage with the material once again and to interact with a partner about their mistakes.
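The two feedback conditions contrasted in this study can be outlined schematically as follows; the item, hint texts, and function names are hypothetical illustrations, not the software actually used in the study.

```python
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    options: list   # multiple-choice options
    answer: int     # index of the correct option
    hints: list     # Elaborative hints: rephrasings of the question/text

def give_feedback(question, response, mode, attempt=0):
    """KCR feedback reveals the correct answer, as a traditional answer
    sheet would; Elaborative feedback returns a rephrased hint instead,
    prompting students to reread, self-correct and discuss before retrying."""
    if response == question.answer:
        return "Correct."
    if mode == "KCR":
        return "Incorrect. The answer is: " + question.options[question.answer]
    # Elaborative: supply progressively rephrased hints across attempts
    hint = question.hints[min(attempt, len(question.hints) - 1)]
    return "Not yet. Hint: " + hint

# Hypothetical comprehension item for illustration
q = Question(
    prompt="Why does the author visit the market?",
    options=["To meet a friend", "To buy vegetables", "To sell fish"],
    answer=1,
    hints=["Reread paragraph 2: what does the author carry home?",
           "The text mentions a shopping basket. What was it used for?"],
)
```

Under the Elaborative condition, an incorrect response thus returns the material to the students in modified form, rather than closing the exchange with the answer itself.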
An underlying concern throughout this study has been the lack of clear rules for writing Elaborative feedback. The effectiveness of any such feedback in promoting reading comprehension may therefore depend on a number of factors, such as its length and complexity, in addition to the factors considered in this study (students' English proficiency level, Manner of study and Type of feedback). However, results from this study suggest that: (a) the traditional answer sheet (KCR feedback) may not always be the optimal tool for learning from mistakes, and (b) certain combinations of factors (Manner of study and Type of feedback) can have significant beneficial effects on students' learning outcomes. Both educators and students should be aware of these potential effects so that informed decisions can be made when selecting appropriate support to foster learning. The findings suggest that educators would do well to promote the advantages of Elaborative feedback and pair work, and that students should be encouraged to study in such a way.
With the ability to recommend different types of feedback to complement different manners of studying, these findings contribute to the process of custom-designing online materials for language learners generally. This project shows that designers can cater for different levels of second language proficiency by providing feedback that may promote both reading comprehension and interaction. It also shows that they can cater to learning preferences by offering different forms of feedback. In this respect, this project represents an implementation of a research-informed design that is consistent with both an interactionist view of language learning and a belief in learner autonomy.
ACKNOWLEDGMENTS

I would like to thank the following people: (i) Christopher Candlin, my doctoral supervisor, for his words of wisdom, unending support, guidance and encouragement, (ii) Francis Johnson, the creator of the original reading course, (iii) Michael Torpey, Siwon Park and William Bonk for their much appreciated support, (iv) Kanda University of International Studies for the opportunities and funding I have received and the wonderful students who helped me so much, (v) Nana, Yuko, Tomoko, Rei and Tomomi for the translations, (vi) the three anonymous reviewers, Marlise Horst and Hunter Hatfield for their valuable feedback, and (vii) my wife Kuniko, my son Leon, my daughter Leana and my parents for their loving support.
ABOUT THE AUTHOR
Originally from England, Philip Murphy has taught in Japan since 1990. With a background in computer science and an M.Ed. in TESOL, he is currently working towards a Doctorate in Applied Linguistics at Macquarie University in Sydney Australia. His research interests lie with computer-assisted language learning and promoting learner autonomy.
REFERENCES

Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213-238.
Beatty, K. (2003). Teaching and researching computer-assisted language learning. London: Longman.
Beatty, K., & Nunan, D. (2004). Computer-mediated collaborative learning. System, 32, 165-183.
Benson, P. (2001). Teaching and researching autonomy in language learning. London: Longman.
Berry, L. H. (2000). Cognitive effects of web page design. In B. Abbey (Ed.), Instructional and cognitive impacts of Web-based education (pp. 41-55). London: Idea Group Publishing.
Bonk, W. J., & Ockey, G. (2003). A many-faceted Rasch analysis of the L2 group oral discussion task. Language Testing, 20(1), 89-110.
Brandl, K. K. (1995). Strong and weak students' preferences for error feedback options and responses. The Modern Language Journal, 79(2), 194-211.
Brandl, K. K. (2002). Integrating internet-based reading materials into the foreign language curriculum: From teacher- to student-centered approaches. Language Learning & Technology, 6(3), 87-107.
Caverly, D., & MacDonald, L. (1998). Techtalk: Distance developmental education. Journal of Developmental Education, 21(3), 36-37, 40.
Chapelle, C. (1997). CALL in the year 2000: Still in search of research paradigms. Language Learning & Technology, 1(1), 19-43.
Chapelle, C. A. (2001). Computer applications in second language acquisition: Foundations for teaching, testing and research. Cambridge: Cambridge University Press.
Chun, D. M., & Plass, J. L. (1996). Facilitating reading comprehension with multimedia. System, 24, 503-519.
Chun, D. M., & Plass, J. L. (2000). Networked multimedia environments for second language acquisition. In M. Warschauer and R. Kern (Eds.), Network-based language teaching: Concepts and practice (pp.151-170). New York: Cambridge University Press.
Clariana, R. B. (1993). A review of multiple-try feedback in traditional and computer-based instruction. Journal of Computer-Based Instruction, 20(3), 67-74.
Clariana, R. B. (2000). Feedback in computer-assisted learning. NETg University of Limerick Lecture Series. Retrieved September 28, 2007, from http://www.personal.psu.edu/faculty/r/b/rbc4/NETg.htm.
Clariana, R. B. (2003). The effectiveness of constructed-response and multiple-choice study tasks in computer aided learning. J. Educational Computing Research, 28(4), 395-406.
Clariana, R. B., & Koul, R. (2005). Multiple-try feedback and higher-order learning outcomes. International Journal of Instructional Media, 32(3), 239-245.
Clariana, R. B., & Koul, R. (2006). The effects of different forms of feedback on fuzzy and verbatim memory of science principles. British Journal of Educational Psychology, 76, 259-270.
Clariana, R. B., & Lee, D. (2001). The effects of recognition and recall study tasks with feedback in a computer-based vocabulary lesson. Educational Technology Research & Development, 49(3), 23-36.
Clariana, R. B., Wagner, D., & Murphy, L. C. R. (2000). Applying a connectionist description of feedback timing. Educational Technology Research & Development, 48(3), 5-21.
Clark, K., & Dwyer, F. (1998). Effect of different types of computer-assisted feedback strategies on achievement and response confidence. International Journal of Instructional Media, 25(1, winter), 55-63.
de Bot, K. (1996). Review article: The psycholinguistics of the output hypothesis. Language Learning, 46(3), 529-555.
Eldredge, J. L., & Butterfield, D. D. (1986). Alternatives to traditional reading instruction. Reading Teacher, 40, 32-37.
Ellis, R. (1998). Discourse control and the acquisition-rich classroom. In W. A. Renandya & G. M. Jacobs (Eds.), Learners and language learning, 39, 145-171.
Ferris, R. D. (2003). Response to student writing: Implications from second language students. Mahwah, NJ: Lawrence Erlbaum Associates.
Fisher, E. (1992). Characteristics of children's talk at the computer and its relationship to the computer software. Language and Education, 7(2), 187-215.
Grabe, W., & Stoller, F. L. (2002). Teaching and researching reading: Applied linguistics in action. Harlow, UK: Longman.
Hong, W. (1997). Multimedia computer-assisted reading in business Chinese. Foreign Language Annals, 30, 335-344.
Jones, H. J., & Wolf, P. J. (2001). Teaching a graduate content area reading course via the Internet: Confessions of an experienced neophyte. Reading Improvement, Spring 2001, 38(1), 2-9.
Koskinen, P. S., & Blum, I. H. (1986). Paired repeated reading: A classroom strategy for developing fluent reading. The Reading Teacher, 40, 70-75.
Krashen, S. (1985). The input hypothesis: Issues and implications. London: Longman.
Kulhavy, R. W., & Wager, W. (1993). Feedback in programmed instruction: Historical context and implications for practice. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp.3-20). Englewood Cliffs, NJ: Educational Technology Publications.
Lantolf, J. P. (2000). Sociocultural theory and second language learning. Oxford: Oxford University Press.
Larsen-Freeman, D., & Long, M. (1991). An introduction to second language acquisition research. Essex: Longman.
Lightbown, P., & Spada, N. (1999). How languages are learned. Oxford: Oxford University Press.
Long, M. (1981). Input, interaction and second language acquisition. Annals of the NY Academy of Sciences, 379, 259-278.
Long, M. H. (1985). Input and second language acquisition theory. In S. M. Gass & C. G. Madden (Eds.), Input in second language acquisition (pp.377-93). Rowley, MA: Newbury House Publishers.
Mercer, N. (1995). The guided construction of knowledge: Talk amongst teachers and learners. Clevedon: Multilingual Matters.
Mercer, N. (2004). Sociocultural discourse analysis: Analysing classroom talk as a social mode of thinking. Journal of Applied Linguistics, 1(2), 137-168.
Merrill, J. (1985). Levels of questioning and forms of feedback: Instructional factors in courseware design. A presentation at the Annual Meeting of the American Educational Research Association in Chicago, Illinois. (ERIC Document Reproduction Service No. ED 266 766).
Merrill, J. (1987). Levels of questioning and forms of feedback: Instructional factors in courseware design. Journal of Computer-Based Instruction, 14(1), 18-22.
Mikulecky, L. (1998). Diversity, discussion, and participation. Comparing web-based and campus-based adolescent literature classes. Journal of Adolescent & Adult Literacy, 42(2), 84-92.
Mory, E. H. (1994). Adaptive feedback in computer-based instruction: Effects of response certitude on performance, feedback-study time, and efficiency. J. Educational Computing Research, 11(3), 263-290.
Murphy, P. M. (2005). Interactivity, usability and flexibility in an online reading course. Studies in Linguistics and Language Education of the Research Institute of Language Studies and Language Education, Kanda University of International Studies, 16, 383-433.
Murphy, P. M., & Imrie, A. (2003). Implementing computers in a reading classroom. Studies in Linguistics and Language Education of the Research Institute of Language Studies and Language Education, Kanda University of International Studies, 14, 123-166.
Nagata, N. (1996). Computer vs. workshop instruction in second language acquisition. CALICO Journal, 14(1), 53-75.
Nes, S. L. (2003). Using paired reading to enhance the fluency skills of less-skilled readers. Reading Improvement, 40(4), 179-192.
Nielson, M. C. (1990). The impact of informational feedback and a second attempt at practice questions on concept learning in computer-aided instruction. Unpublished doctoral dissertation, University of Texas at Austin.
Nunan, D. (1993). Introducing discourse analysis. London: Penguin.
Pica, T. (1994). Research on negotiation: What does it reveal about second-language learning conditions, processes and outcomes? Language Learning, 44, 493-527.
Pica, T. (1996). Do second language learners need negotiation? International Review of Applied Linguistics, 34(1), 1-19.
Pica, T., Young, R., & Doughty, C. (1987). The impact of interaction on comprehension. TESOL Quarterly, 21(4), 737-758.
Pressey, S. L. (1926). A simple apparatus which gives tests and scores – and teaches. School and Society, 23, 373-376.
Sales, G. C. (1993). Adapted and adaptive feedback in technology-based instruction. In J. V. Dempsy & G. C. Sales (Eds.), Interactive instruction and feedback (pp.159-175). Englewood Cliffs, NJ: Educational Technology Publications.
Schimmel, B. J. (1983). A meta-analysis of feedback to learners in computerized and programmed instruction. Paper presented at the Annual Meeting of the American Educational Research Association, Montreal, Canada. (ERIC Document Reproduction No. ED 233 708).
Shanker, J. L., & Ekwall, E. E. (2003). Locating and correcting reading difficulties. Upper Saddle River, NJ: Merrill Prentice Hall.
Smith, P. L. (1988). Toward a taxonomy of feedback: Content and scheduling. A paper presented at the Annual meeting of the Association for Educational Communications and Technology, New Orleans, LA.
Spock, P. A. (1987). Feedback and confidence of response for a rule-learning task using computer-assisted instruction. (Doctoral dissertation, University of Texas, Austin). Dissertation Abstracts International, 48(5), 1109.
Stakhnevich, J. (2002). Reading on the web: Implications for ESL professionals. The Reading Matrix, 2(2).
Stevens, V. (1992). Humanism and CALL: A coming of age. In M. C. Pennington and V. Stevens (Eds.), Computers in applied linguistics: An international perspective (pp. 11-38). Clevedon, UK: Multilingual Matters.
Swain, M. (1985). Communicative competence: Some roles of comprehensible input and comprehensible output in its development. In S. M. Gass & C. G. Madden (Eds.), Input in second language acquisition (pp. 235-253). Rowley, MA: Newbury House Publishers.
Swain, M., & Lapkin, S. (1995). Problems in output and the cognitive processes they generate: A step towards second language learning. Applied Linguistics, 16, 371-391.
Tsutsui, M. (2004). Multimedia as a means to enhance feedback. Computer Assisted Language Learning, 17(3-4), 377-402.
Van den Branden, K. (2000). Does negotiation of meaning promote reading comprehension? A study of multilingual primary school classes. Reading Research Quarterly, 35(3), 426-443.
Van der Linden, E. (1993). Does feedback enhance computer-assisted language learning? Computers & Education, 21, 61-65.
Van Lier, L. (1996). Interaction in the language curriculum: Awareness, autonomy & authenticity. Harlow, UK: Longman.
Wegerif, R., & Mercer, N. (1996). Computers and reasoning through talk in the classroom. Language and Education, 10(1), 47-65.
Wegerif, R., Mercer, N., & Dawes, L. (1998). Software design to support discussion in the primary curriculum. Journal of Computer Assisted Learning, 14, 199-211.
Williams, H. S., & Williams, P. N. (2000). Integrating reading and computers: An approach to improve ESL students reading skills. Reading Improvement, Fall 2000, 37(3), 98-100.
Copyright © 2007 Language Learning & Technology, ISSN 1094-3501.
Articles are copyrighted by their respective authors.