https://doi.org/10.17323/jle.2022.16516
Citation: Reynolds, B. L., & Kao, C.W. (2022). A Research Synthesis of Unfocused Feedback Studies in the L2 Writing Classroom: Implications for Future Research. Journal of Language and Education, 8(4), 5-13. https://doi.org/10.17323/jle.2022.16516
Correspondence:
Chian-Wen Kao,
Received: November 15, 2022 Accepted: December 10, 2022 Published: December 26, 2022
A Research Synthesis of Unfocused Feedback Studies in the L2 Writing Classroom: Implications for Future Research
Barry Lee Reynolds 1,2, Chian-Wen Kao 3
1 Faculty of Education, University of Macau, Macau
2 Centre for Cognitive and Brain Sciences, University of Macau, Macau
3 Department of Applied English, Chihlee University of Technology, Taiwan
ABSTRACT
Introduction. Whether teachers should correct second language learners' grammatical errors has been hotly contested in the literature. Researchers who study corrective feedback have been particularly interested in determining what kinds of feedback may help students commit fewer errors in subsequent writing. One of the primary points of contention in this discussion is whether language teachers should provide focused written corrective feedback (i.e., only one or a few types of grammar errors are targeted for correction) or unfocused written corrective feedback (i.e., all error types are corrected). Although focused feedback has been found to be more effective than unfocused feedback (Kao & Wible, 2014), focused feedback has been criticized as ecologically invalid in authentic classrooms (Xu, 2009). Because little attention has been paid to the effects of unfocused feedback, the present study examined not only the short-term but also the long-term learning effects of unfocused feedback.
Methods. The present study used the meta-analysis software Comprehensive Meta-Analysis (Borenstein, Hedges, Higgins, & Rothstein, 2005) to calculate an effect size across previous studies. Several keywords were used to search for relevant studies in online databases, and selection criteria were set to determine whether these studies were appropriate to be synthesized. Forty studies that met the criteria were included in the analyses. Results and Discussion. This meta-analysis revealed that unfocused grammatical feedback was effective, as assessed by immediate posttests, and that the benefits of unfocused feedback increased over time, as revealed by delayed posttests, potentially contradicting Truscott's (1996, 2007) conclusions on grammar correction. This finding needs to be interpreted with caution because only 10 out of the 40 studies provided statistical data on delayed posttests. Furthermore, publication bias seemed to be minimal, and both immediate and delayed posttest effect sizes were heterogeneous.
Conclusion. It is strongly suggested that future studies investigate the long-term learning effects of unfocused feedback. In addition, because the effect sizes obtained for unfocused feedback practices were heterogeneous, other moderating variables need to be considered, such as instructional settings (Mackey & Goo, 2007; Truscott, 2004a), type of feedback (Lee, 2013), focus of feedback (Ellis, 2009), learners' revisions (Ferris, 2010), and intervention length (Li, 2010; Lyster & Saito, 2010). It is essential to conduct more meta-analyses to examine the potential effects of such moderating variables.
KEYWORDS
meta-analysis, unfocused feedback, unfocused correction, comprehensive feedback, comprehensive correction
INTRODUCTION
Second language (L2) instructors must decide whether to correct student errors. Truscott (1996) reviewed related studies and argued that corrective feedback (CF) does not benefit learning outcomes. In response, Ferris (1999) asserted that a conclusion that CF has no place in writing courses would be premature given the incomparability of related studies. In response to Ferris, Truscott (1999) stated that it is reasonable to conclude that CF should be abandoned because similar results obtained in different circumstances led to the same conclusion that CF is ineffective. Numerous researchers have since conducted empirical studies to examine the effect of CF. Many of them have examined the efficacy of CF in the field of L2 writing instruction. Truscott (2004a, 2009) responded to Chandler's (2004, 2009) arguments in favor of CF, indicating that her study results were conjecture rather than evidence of the benefits of CF owing to the presence of several flaws in her research design.
After her debates with Truscott, Ferris stopped focusing on whether studies provided proof of the benefits of CF. She conceded that studies did not sufficiently prove the effects of CF and focused on ideas for future studies, providing general suggestions for researchers and instructors in the field of L2 writing (Ferris, 2004). Guenette (2007) analyzed the design of related studies and highlighted some design problems. However, Guenette recommended that teachers continue providing CF to students. Although Ferris and Guenette have exhibited optimism toward future research and practice related to CF, they have failed to offer a clear research direction for future studies. Truscott (2007) conducted a small-scale meta-analysis of CF studies and concluded that corrections negatively affect the ability of students to write accurately. The results indicated that even if CF is beneficial to students, the effect is small. In 1996, Truscott strongly argued that CF has no educational benefits, but his position seems to have changed. Because his analyses are based on small-scale studies, his results remain dubious. For example, Russell and Spada (2006) conducted a related meta-analysis of large-scale studies, and their findings support the beneficial role of CF.
CF researchers wish to determine the types of feedback that reduce student errors. These researchers apply various feedback mechanisms and examine the effect they have on students' writing accuracy. Most error correction-related studies involve comparisons of feedback. Many researchers believe that feedback comparisons can help determine the most effective form of feedback. However, researchers are still unsure which type of feedback has the most benefits for learners (Ellis, 2009; Hyland & Hyland, 2006). Other variables that could influence the effect of corrections have been discussed, including the type of error corrected (e.g., Bitchener et al., 2005; Ferris & Roberts, 2001; Kao, 2022; Shao & Liu, 2022), the number of error types corrected (e.g., Ellis et al., 2008; Sheen et al., 2009), students' L2 ability (e.g., Ammar & Spada, 2006; Bitchener, 2009; Iwashita, 2001), the research design adopted (e.g., Ferris, 2004; Guenette, 2007; Truscott, 2007), the instructional settings (Sheen, 2004), and the ethnic background of students (e.g., Bitchener & Knoch, 2008). They all intended to investigate and discover which variables
contribute the most to the effectiveness of CF in L2 learning and teaching. Although their foci vary, the aforementioned researchers all gave feedback to students and explored its effects on students' grammatical errors in language production.
In Truscott's (1996) review in which he argued against grammar corrections in L2 writing classes, he asserted that the effects of correction should be evaluated in discourse writing instead of grammar exercises. His argument is that if corrections are proven to be ineffective at improving discourse writing, then they are harmful to students' writing ability and should be abandoned. Truscott's assertions have drawn considerable research attention. Researchers interested in feedback have considered his concerns when evaluating the effects of corrections. Such researchers have generally assigned writing tasks to students and determined whether students' writing accuracy improved upon receiving CF (e.g., Bitchener & Knoch, 2009; Doughty & Varela, 1998; Fazio, 2001; Muranoi, 2000; Polio et al., 1998; Sheen, 2007); however, their findings have been inconsistent.
Despite Truscott's criticism of CF, several researchers have expressed optimism regarding the potential of CF and research related to it (e.g., Ellis, 2009; Ferris, 2004; Guenette, 2007; Hyland & Hyland, 2006). For example, Ferris (2004) conceded that several studies have not sufficiently proven the positive effects of CF but provided general suggestions to L2 writing researchers and instructors. Guenette (2007) highlighted research design problems in related studies but recommended that teachers continue providing CF to students. Although most related researchers have expressed optimism with regard to CF research and practice, they have failed to provide a clear research direction for future studies.
Russell and Spada (2006) conducted a meta-analysis of studies that involved oral and written feedback to examine the extent to which CF improved the grammatical skills of L2 learners. A large effect size was identified, and they concluded that such feedback was effective. In a meta-analysis centered on written feedback, Truscott (2007) revealed that the effect of correction on students' written accuracy was small and negative. He contended that the results of Russell and Spada were in line with his findings because the studies they included in their meta-analysis examined only whether learner performance in artificial grammar tests improved after receiving corrections or whether learners could successfully revise their writing on the basis of teachers' corrections. These studies did not examine whether corrections helped learners speak and write accurately in realistic contexts (Truscott, 2007). Truscott has been criticized for reiterating, in his 2007 meta-analysis, most of the evidence against the utility of written correction from his 1996 review (the average publication year in his Table 1 [p. 262] was 1999); it was thus unsurprising that he again found error correction to be ineffective (Bruton, 2010).
Teachers' correction of language learners' grammatical errors has been hotly debated in the published literature. Error feedback researchers have been interested in investigating what types of feedback will effectively reduce students' errors in subsequent writing. One of the main areas of this debate concerns whether the written corrective feedback administered by language teachers should be focused (i.e., only one or a few grammar error types are targeted for correction) or unfocused (i.e., all error types are targeted for correction). These debates and the empirical research studies inspired by them have been insightful for language learners and teachers alike; however, the arguments concerning teacher feedback continue to be complicated and controversial even today. Kao and Wible (2014) pursued a more persuasive line of investigation based on the idea that the meta-analyses showing little or negative effects of correction had conflated important distinctions in the ways grammar feedback is given. Specifically, they re-analyzed the published meta-analysis data, adding more recently published studies that met the criteria used in the published meta-analysis. Further, they added to their meta-analysis a crucial distinction between focused feedback and unfocused feedback. Their findings show that conflating focused and unfocused corrective feedback in research distorts the effects of both: conflation overestimates the effect of unfocused feedback (unfocused feedback is shown to be even less effectual when considered separately from focused feedback studies) and underestimates the effect size of focused feedback. Taken separately, focused feedback studies showed large positive effect sizes.
Subsequent meta-analyses seemingly point towards the conclusion that unfocused feedback (i.e., feedback provided on all errors that occur in a piece of writing) is less effective than focused feedback (i.e., feedback provided on one or only a select number of errors) (Kang & Han, 2015; Lim & Renandya, 2020). However, the majority of the studies collectively drawing this conclusion have been concerned with improvements in one grammatical error type (usually English article usage). These studies have overwhelmingly focused on feedback given to grammatical rule-based errors at the expense of investigating unfocused feedback on phraseological or lexical errors. Furthermore, these studies have often compared focused feedback to unfocused feedback over several rounds instead of investigating the effects of a single round of unfocused feedback on the grammatical accuracy of subsequent writing. Moreover, while the lion's share of the research has been conducted in language classrooms in the form of quasi-experimental studies, the feedback practices implemented in these studies do not mimic the feedback-giving practices that typically occur in classrooms. Therefore, more ecologically valid studies that include the administration of unfocused feedback are needed in order to measure its effectiveness more accurately in the correction of multiple L2 writing grammar and lexical error types.
We considered the potential drawbacks of meta-analyses such as Truscott's (2007) and followed Truscott's study selection criteria insofar as possible. Additionally, we included studies published after his meta-analysis, provided that they met his selection criteria. In Truscott's (2004b) critique of Norris and Ortega's (2000) meta-analysis, he argued that Norris and Ortega's finding favoring grammar instruction might be misleading because most of the included studies investigated only the immediate effects of grammatical instruction. The purpose of the present meta-analysis, therefore, was to investigate not only the immediate but also the delayed effects of unfocused CF. The following research question was proposed:
Does written unfocused CF have short- and long-term effects on students' linguistic accuracy?
METHODOLOGY
Meta-analysis is a useful method of answering research questions not posed in original studies and can illuminate moderator variables of interest to those involved in empirical research. Meta-analyses may enable researchers to account for conflicting results because such analyses yield increased statistical power for detecting the effects of moderating variables when they exist. Therefore, a meta-analysis was conducted to comprehensively examine extant grammar correction research.
Studies were identified from six online databases: Science Direct, the Chinese Periodical Index, the Education Resources Information Center, Linguistics and Language Behavior Abstracts, Google Scholar, and SCOPUS (Elsevier). The following keywords were used: (a) "error correction," (b) "grammar correction," (c) "written corrective feedback," (d) "unfocused correction," (e) "unfocused feedback," (f) "comprehensive feedback," (g) "comprehensive correction," and (h) "feedback in L2 writing."
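As a rough illustration only, the keyword set above might be combined into a single Boolean search string of the kind accepted by most of these databases; the exact field syntax used for each database is an assumption on our part, not part of the reported search protocol.

```python
# Illustrative sketch: combining the eight reported keywords into one Boolean
# search string. This is not the authors' documented database-specific syntax.

KEYWORDS = [
    "error correction",
    "grammar correction",
    "written corrective feedback",
    "unfocused correction",
    "unfocused feedback",
    "comprehensive feedback",
    "comprehensive correction",
    "feedback in L2 writing",
]

def build_query(keywords: list[str]) -> str:
    """Join quoted keywords with OR so any single phrase is sufficient for a hit."""
    return " OR ".join(f'"{kw}"' for kw in keywords)

if __name__ == "__main__":
    print(build_query(KEYWORDS))
    # "error correction" OR "grammar correction" OR ... OR "feedback in L2 writing"
```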
The CF-related studies focusing on L2 writing that were reviewed herein are chiefly from publications in the field of L2 pedagogy with an international readership. Most of these studies had been reviewed by Ferris (1999, 2004) and Truscott (1996, 1999, 2007); studies published in more recent years were also included. Several selection criteria were used to determine whether studies were appropriate for inclusion, as illustrated in the sketch below. First, Truscott (1996) indicated that feedback is used to correct grammatical errors rather than the content, organization, or clarity of a composition; accordingly, only studies on the effect of CF on students' grammatical errors were reviewed. Second, studies with a single-group pretest-posttest design were not considered for review (Truscott, 2007) because such designs involve various uncontrolled variables; studies comparing at least two groups (i.e., experimental and control groups) are held in higher regard. Third, to determine students' improvement in grammar as a result of CF, only studies in which participants composed essays were included; this approach was adopted because students' metalinguistic knowledge and grammar skills cannot be appropriately measured using multiple-choice questions or cloze tests (Truscott, 1996). Finally, only unfocused feedback studies were included in the analyses because the research focus of this meta-analysis is the effectiveness of unfocused feedback practices.
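For illustration, the four selection criteria above can be expressed as a simple screening function. The field names in the sketch below are hypothetical labels of our own choosing, not the authors' actual coding scheme.

```python
# Hypothetical sketch of the four inclusion criteria described above.
# Field names are illustrative, not taken from the authors' coding scheme.
from dataclasses import dataclass

@dataclass
class Study:
    targets_grammatical_errors: bool   # feedback on grammar, not content/organization
    has_comparison_group: bool         # experimental vs. control, not one-group pre-post
    uses_essay_writing_measure: bool   # discourse writing, not multiple-choice/cloze tests
    feedback_is_unfocused: bool        # all error types corrected

def meets_criteria(study: Study) -> bool:
    """Return True only if a study satisfies all four selection criteria."""
    return (
        study.targets_grammatical_errors
        and study.has_comparison_group
        and study.uses_essay_writing_measure
        and study.feedback_is_unfocused
    )

print(meets_criteria(Study(True, True, True, True)))    # True: study is included
print(meets_criteria(Study(True, False, True, True)))   # False: no comparison group
```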
When using the Comprehensive Meta-Analysis program (Borenstein et al., 2005), a researcher must extract an effect size for each study and then synthesize these effect sizes across studies. The principle of "one study, one effect size" is followed because when one study has more than one effect size, the sample size is inflated, data points lose their independence, and standard errors are distorted (Borenstein et al., 2009; Lipsey & Wilson, 2001). Furthermore, meta-analyses (e.g., Li, 2010; Russell & Spada, 2006) related to CF have also adhered to the aforementioned principle.
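As an illustration of the "one study, one effect size" principle, the minimal sketch below collapses multiple effect sizes reported within a study to a single value before synthesis. Simple averaging is shown as one common reduction; the exact reduction applied inside the Comprehensive Meta-Analysis software may differ, and the study names and values are invented.

```python
# Sketch of the "one study, one effect size" principle: multiple effect sizes
# reported within a single study are collapsed to one value before synthesis,
# so sample sizes are not inflated and data points stay independent.
# Simple averaging is used for illustration; other reductions are possible.
from statistics import mean

def one_effect_per_study(effects_by_study: dict[str, list[float]]) -> dict[str, float]:
    return {study: mean(ds) for study, ds in effects_by_study.items()}

example = {
    "Study A": [0.40, 0.55],   # two outcome measures reported
    "Study B": [0.80],         # single effect size reported
}
print(one_effect_per_study(example))  # {'Study A': 0.475, 'Study B': 0.8}
```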
To ensure reasonable interpretation of the quantitative effect sizes identified, standard meta-analytic approaches were used to account for various factors. First, in meta-analytic approaches, two statistical models are widely employed to address problems related to variation: random- and fixed-effect models (Borenstein et al., 2009; Hunter & Schmidt, 2004). These two models are based on different assumptions. Under the fixed-effect model, all studies are assumed to be identical, with only one true effect size; any variation is attributable to sampling variability. By contrast, under the random-effects model, the true effect size is assumed to vary by study, and studies are presumed to be similar rather than identical; any variation is ascribed to heterogeneous factors. Because the assumption that all studies included in this meta-analysis are identical would be unreasonable, the random-effects model was adopted to calculate the relevant effect sizes. Second, Cohen's d (1992) is widely adopted for effect size interpretation in meta-analyses: a small, medium, and large effect size is indicated by a value of 0.2-0.5, 0.5-0.8, and >0.8, respectively. Common formulas for effect size calculations are as follows.
d = Mean difference / Pooled SD
d = t / √(Harmonic N / 2)
Harmonic N = (2 × N₁ × N₂) / (N₁ + N₂)
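To make these formulas concrete, the following sketch (with invented input numbers) computes d from group means and a pooled standard deviation, recovers d from a t statistic via the harmonic N, and applies Cohen's (1992) benchmarks. The sample-size-weighted pooled-SD expression is a standard choice added here for completeness; it is not spelled out in the text above.

```python
# Worked sketch of the effect size formulas above. Input numbers are invented.
import math

def cohens_d_from_means(m_treat, m_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """d = mean difference / pooled SD (standard sample-size-weighted pooling)."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    return (m_treat - m_ctrl) / pooled_sd

def cohens_d_from_t(t, n_treat, n_ctrl):
    """d = t / sqrt(harmonic N / 2), with harmonic N = 2*n1*n2 / (n1 + n2)."""
    harmonic_n = 2 * n_treat * n_ctrl / (n_treat + n_ctrl)
    return t / math.sqrt(harmonic_n / 2)

def label(d):
    """Cohen's (1992) benchmarks: 0.2-0.5 small, 0.5-0.8 medium, >0.8 large."""
    a = abs(d)
    if a >= 0.8:
        return "large"
    if a >= 0.5:
        return "medium"
    if a >= 0.2:
        return "small"
    return "negligible"

d = cohens_d_from_means(78.0, 72.0, 10.0, 11.0, 30, 30)
print(round(d, 3), label(d))  # 0.571 medium
```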
Third, to accurately provide an average effect size, the 95% confidence interval (CI) should be considered in addition to Cohen's d. When a 95% CI does not include zero, there is 95% certainty that the statistical result represents a study's true effect size; the smaller the CI, the more precise the related statistics are (Larson-Hall, 2010). The Begg and Mazumdar rank correlation test was performed to investigate whether publication bias existed among the studies included in the meta-analysis. Finally, a test for heterogeneity was conducted to determine whether any moderator variables influenced feedback effectiveness.
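The following is a compact sketch, under standard DerSimonian-Laird random-effects assumptions, of the checks just described: a pooled estimate with its 95% CI, the I² heterogeneity index, and a simplified Begg-Mazumdar-style check based on Kendall's tau between effect sizes and their variances. It is an illustration only; the routines implemented in the Comprehensive Meta-Analysis software may differ, and the input values are invented.

```python
# Sketch of the statistical checks described above, under standard
# DerSimonian-Laird random-effects assumptions. Illustration only; this is
# not the routine used by the Comprehensive Meta-Analysis software.
import math
from itertools import combinations

def random_effects_summary(d: list[float], v: list[float]):
    """Pooled effect, 95% CI, and I-squared from per-study effects d and variances v."""
    k = len(d)
    w = [1 / vi for vi in v]                       # fixed-effect weights
    d_fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)             # between-study variance estimate
    w_star = [1 / (vi + tau2) for vi in v]         # random-effects weights
    pooled = sum(wi * di for wi, di in zip(w_star, d)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

def kendall_tau(x: list[float], y: list[float]) -> float:
    """Kendall's tau between two lists (the basis of the Begg-Mazumdar rank test)."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        conc += s > 0
        disc += s < 0
    n_pairs = len(x) * (len(x) - 1) / 2
    return (conc - disc) / n_pairs

# Invented per-study effect sizes and variances, for illustration only.
effects = [0.20, 0.45, 0.60, 0.90, 0.35]
variances = [0.04, 0.05, 0.03, 0.06, 0.05]
pooled, ci, i2 = random_effects_summary(effects, variances)
tau = kendall_tau(effects, variances)   # values near 0 suggest little small-study bias
print(round(pooled, 3), [round(c, 3) for c in ci], round(i2, 1), round(tau, 2))
```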
To investigate the effectiveness of CF, students' language accuracy was assessed on the basis of immediate posttests in the selected studies. According to Li (2010), a short-term immediate posttest is an assessment given within one week of the intervention. Therefore, posttests conducted immediately after participants had read feedback (see Bitchener, 2008; Ellis et al., 2008) or within approximately one week after participants had read feedback (see Sheen et al., 2009; Van Beuningen et al., 2012) were considered immediate posttests. Because some studies provided information on students' grammatical performance on posttests administered at least three weeks after participants had read feedback, we also examined the long-term effects of feedback in this meta-analysis.
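As a small illustration of this timing rule, the sketch below codes posttests administered within about one week of feedback as immediate and those administered at least three weeks later as delayed. The cut-offs come from the paragraph above; the function itself and its handling of in-between cases are our own.

```python
# Sketch of the posttest-timing rule described above (cut-offs from the text;
# studies falling between the two windows are simply left uncoded here).
def code_posttest(days_after_feedback: int) -> str | None:
    if days_after_feedback <= 7:      # within about one week: immediate posttest
        return "immediate"
    if days_after_feedback >= 21:     # at least three weeks later: delayed posttest
        return "delayed"
    return None                       # in between: not coded in this sketch

print(code_posttest(3), code_posttest(30))   # immediate delayed
```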
RESULTS
A total of 40 unfocused feedback studies published between 1984 and 2018 met the criteria and were included in the meta-analysis (data collection was completed by November 2022). Most of these studies were published journal articles; a few conference papers and book chapters were also included. An examination of the 40 studies revealed rapid growth in the number of written corrective feedback studies around 2010 and 2014 (Figure 1).
This section reports the overall effects of unfocused CF as determined by immediate and delayed posttests. All 40 studies were included in the analyses reported in this section. Table 1 presents the overall effect size related to unfocused CF as determined through immediate posttests. The effect sizes in these 40 studies varied considerably, ranging from large and positive to medium and negative. According to the random-effects model, CF had a medium effect size (d = 0.532). Because the 95% CI excluded zero, the observed effect sizes were deemed to be reliable. In addition, the Begg and Mazumdar rank correlation test suggested that the effect sizes obtained in this meta-analysis were not confounded by publication bias (z value for tau = 1.957, p > .05). Furthermore, a heterogeneity test indicated that the effect size was moderately heterogeneous (I² = 64.141).
Because only 10 studies provided statistical data on the delayed posttests, Table 2 presents the effect sizes from delayed posttests in these 10 studies. The delayed posttest results of these studies revealed a medium positive effect size for grammar error correction (d = 0.756). Additionally, because the 95% CI excluded zero, the effect size obtained was deemed to be reliable.
Figure 1. Publication Frequency of Studies from 1984 to 2018 (x-axis: Publication Years)
Table 1
Overall Effect Sizes of Unfocused CF in Immediate Posttests (k = 40)
Random-Effects Model Statistical Data: Effect Size = 0.532; Standard Error = 0.078; Variance = 0.006; Minimum = -0.565; Maximum = 1.950; Upper CI = 0.684; Lower CI = 0.379; p value = 0.000

Table 2
Overall Effect Sizes for Unfocused CF in Delayed Posttests (k = 10)
Random-Effects Model Statistical Data: Effect Size = 0.756; Standard Error = 0.324; Variance = 0.105; Minimum = -0.421; Maximum = 5.736; Upper CI = 1.391; Lower CI = 0.120; p value = 0.020
In addition, the Begg and Mazumdar rank correlation test suggested that the effect sizes obtained in this meta-analysis were not confounded by publication bias (z value for tau = 1.878, p > .05). Additionally, a heterogeneity test indicated high heterogeneity (I² = 92.128) in effect sizes across the included studies.
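As a quick arithmetic check, the confidence limits and p values reported in Tables 1 and 2 can be reproduced from the effect sizes and standard errors alone (d ± 1.96 × SE and a two-tailed normal z test); small rounding differences are expected.

```python
# Quick check that the reported CIs and p values follow from d and SE.
# Inputs are the values in Tables 1 and 2; small rounding differences are expected.
import math

def ci_and_p(d, se):
    lower, upper = d - 1.96 * se, d + 1.96 * se
    z = d / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-tailed normal p
    return round(lower, 3), round(upper, 3), round(p, 3)

print(ci_and_p(0.532, 0.078))  # (0.379, 0.685, 0.0)   vs. reported 0.379 / 0.684 / .000
print(ci_and_p(0.756, 0.324))  # (0.121, 1.391, 0.02)  vs. reported 0.120 / 1.391 / .020
```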
IMPLICATIONS FOR FUTURE RESEARCH
The motivation for conducting this meta-analysis was that the number of studies included in Truscott's (2007) meta-analysis was too small; thus, studies published after his meta-analysis were also included. Potentially contradicting Truscott's (1996, 2007) conclusions on grammar correction, this meta-analysis suggests that unfocused grammatical feedback is effective, as determined by immediate posttests, and that the benefits of unfocused feedback even increase over time, as indicated by delayed posttests. This finding, nevertheless, needs to be interpreted with caution because the majority of unfocused feedback studies do not investigate whether the corrective feedback effect persists
after at least three months. Therefore, it is suggested that more research be carried out to analyze the long-term learning effects of unfocused corrective feedback. Additionally, publication bias appeared to be negligible, and the effect sizes obtained for both immediate and delayed posttests were heterogeneous. Other moderating variables might therefore need to be considered when investigating the effectiveness of CF in the future. For example, variables that might influence feedback effectiveness include instructional settings (Mackey & Goo, 2007; Truscott, 2004a), type of feedback (Lee, 2013), focus of feedback (Ellis, 2009), learners' revisions (Ferris, 2010), and intervention length (Li, 2010; Lyster & Saito, 2010). More meta-analyses should be conducted to investigate the possible effects of these moderating variables.
The question of whether CF is effective is complicated, and the answer is context dependent. For example, researchers must consider error types and the form and content of corrections, among many other factors. Much of the empirical research on error correction effectiveness has conflated different error types. Such errors have been categorized too broadly, and the content of feedback has been loosely defined. Future research should explore distinctions in other moderator variables to provide a comprehensive understanding of the roles of these variables in CF effectiveness.
AN OVERVIEW OF THE SPECIAL ISSUE ON RETHINKING THE (IN)EFFECTIVENESS OF UNFOCUSED FEEDBACK IN THE L2 WRITING CLASSROOM
Echoing the findings of the present meta-analysis of unfocused feedback studies, the authors of the articles included in the present special issue, Rethinking the (In)effectiveness of Unfocused Feedback in the L2 Writing Classroom, discuss unfocused feedback practices from multiple perspectives. This issue consists of 2 editorials, 10 research papers, 1 opinion paper, and 2 book reviews. The issue begins with the present meta-analysis and editorial, followed by the second editorial, written by Lilia Raitskaya and Elena Tikhonova and titled Writing Feedback from a Research Perspective. They retrieved 194 papers on writing feedback from the Scopus database, finding many studies reporting on computer-mediated automated forms of feedback on writing (i.e., automated writing evaluation).
Ten research articles appear after the two editorials. The first research article, titled Learning Outcomes Generated through the Collaborative Processing of Expert Peer Feedback, by Nicholas Carr and Paul Wicking, reports on a qualitative study investigating whether Japanese learners could benefit from written corrective feedback provided by their expert peers in the United States. The results indicated that the Japanese learners considered themselves language users and that their language skills improved as a result of the expert peer feedback.
The second paper, titled The Effects of Coded Focused and Unfocused Corrective Feedback on ESL Student Writing Accuracy, by Chunrao Deng, Xiang Wang, Shuyang Lin, Wenhui Xuan, and Qin Xie, reports on a mixed-methods study that investigated whether the scope of feedback (i.e., focused or unfocused feedback) influenced ESL learners' acquisition of linguistic features (i.e., articles, singular/plural nouns, and verb forms). The study showed that students who received focused indirect feedback significantly outperformed those who received unfocused indirect feedback in terms of their acquisition of English article usage in new writing tasks. The in-depth interviews further revealed that coded focused feedback led to a deeper understanding of English article usage. In addition, focused indirect feedback helped learners successfully correct errors involving singular/plural nouns in their revised essays.

The third paper, titled Towards Understanding Teacher Mentoring, Learner WCF Beliefs, and Learner Revision Practices through Peer Review Feedback: A Sociocultural Perspective, by Yang Gao and Xiaochen Wang, used a mixed-methods design to investigate learners' writing practices on an online platform and their beliefs about WCF through interviews. They found that peer feedback and teacher mentoring facilitated learners' revision practices and that there was a strong need for scaffolding in the L2 writing classroom.

The fourth paper, titled Writing Task Complexity, Task Condition and the Efficacy of Feedback, by Esmaeil Ghaderi, Afsar Rouhi, Amir Reza Nemat Tabrizi, Manoochehr Jafarigohar, and Fatemeh Hemmati, explored the role of task complexity and task condition in learner gains from WCF. They found that task condition played a greater role than task complexity in the writing improvements exhibited by the learners involved in their study.

The fifth paper, titled The Effectiveness of Direct and Metalinguistic Written Corrective Feedback to Deal with Errors in the Use of Information-Structuring Connectors, by Steffanie Kloss and Angie Quintanilla, aimed to determine the effectiveness of direct and metalinguistic focused written corrective feedback on information-structuring connectors. They found that both the direct WCF and metalinguistic feedback groups improved, but only the improvements of the latter were statistically significant.

The sixth article, titled Accuracy Gains from Unfocused Feedback: Dynamic Written Corrective Feedback as Meaningful Pedagogy, by Kendon Kurzer, explored the impact of unfocused direct WCF on students' writing. Statistically significant improvements in a number of different error types were shown despite the unfocused nature of the feedback.

The seventh article, titled Moroccan EFL Public University Instructors' Perceptions and Self-Reported Practices of Written Feedback, by Abderrahim Mamad and Tibor Vigh, aimed to explore EFL instructors' perceptions and their self-reported practices of product- and process-based written feedback. They found that Moroccan university instructors considered written corrective feedback and written feedback important for product-oriented teaching of writing, and written feedback important for process-oriented teaching of writing. However, some mismatches between teachers' reported practices and their actual teaching were found; for example, teachers tended to apply written feedback less often than they claimed.

The eighth paper, titled Unfocused Written Corrective Feedback and L2 Learners' Writing Accuracy: Relationship between Feedback Type and Learner Belief, by Syed Muhammad Mujtaba, Manjet Kaur Mehar Singh, Tiefu Zhang, Nisar Ahmed, and Rakesh Parkash, found both direct and indirect feedback effective at increasing the accuracy of student writing. Furthermore, no relationship was found between the effectiveness of written corrective feedback and learners' beliefs about its effectiveness.

In the ninth paper, Experienced and Novice L2 Raters' Cognitive Processes while Rating Integrated and Independent Writing Tasks, Kobra Tavassoli, Leila Bashiri, and Natasha Pourdana explored how the rating experience of L2 raters might affect their rating of integrated and independent writing tasks. Experience mattered when rating language use, mechanics of writing, organization, and the total. They also found that referencing of the writing rubric was mediated by the type of writing being rated.

The tenth paper, titled EFL University Students' Self-Regulated Writing Strategies: The Role of Individual Differences, by Atik Umamah, Niamika El Khoiri, Utami Widiati, and Anik Nunuk Wulyani, aimed to investigate EFL university students' preferences regarding self-regulated writing strategies. Their results indicated that students' self-regulated writing strategies served as a significant predictor of their writing performance and that they used help-seeking strategies the most frequently. The authors suggested that peer feedback should be able to promote self-regulated learning.
The issue ends with 1 opinion paper and 2 book reviews. The opinion paper, titled Unfocused Written Corrective Feedback for Academic Discourse: The Sociomaterial Potential for Writing Development and Socialization in Higher Education, by Daron Benjamin Loo, discusses the practice of administering unfocused written corrective feedback by adopting the principles of sociomateriality. Loo suggests that unfocused written corrective feedback in real classrooms should not aim to correct linguistic errors but to support language learners' academic discourse socialization. Next, in their review of Reconciling Translingualism and Second Language Writing (Silva & Wang, 2020), Chunhong Liu and Taiji Huang provide a succinct summary of all the chapters and discuss the merits of the book, particularly with regard to how the book's authors deal with translingualism and second language writing. Finally, Xiaowen (Serina) Xie reviews the book Innovative Approaches in Teaching English Writing to Chinese Speakers (Reynolds & Teng, 2021). Besides providing a summary of each chapter, the review offers a critical discussion of three key issues raised in the book and ends with an evaluation of the book's overall contribution to the field of second language writing instruction.
ACKNOWLEDGMENTS
This study was primarily funded by the University of Macau under grant MYRG2022-00091-FED.
DECLARATION OF COMPETING INTEREST
None declared.
REFERENCES
Ammar, A., & Spada, N. (2006). One size fits all? Recasts, prompts, and L2 learning. Studies in Second Language Acquisition, 28(4), 543-574.
Bitchener, J. (2008). Evidence in support of written corrective feedback. Journal of Second Language Writing, 17, 102-118. doi:10.1016/j.jslw.2007.11.004
Bitchener, J. (2009). Measuring the effectiveness of written corrective feedback: A response to "Overgeneralization from a narrow focus: A response to Bitchener (2008)". Journal of Second Language Writing, 18, 276-279.
Bitchener, J., & Knoch, U. (2008). The value of written corrective feedback for migrant and international students. Language Teaching Research, 12, 406-431. doi:10.1177/1362168808089924
Bitchener, J., & Knoch, U. (2009). The contribution of written corrective feedback to language development: A ten month investigation. Applied Linguistics. doi:10.1093/applin/amp016
Bitchener, J., Young, S., & Cameron, D. (2005). The effect of different types of feedback on ESL student writing. Journal of Second Language Writing, 14, 191-205. doi:10.1016/j.jslw.2005.08.001
Borenstein, M., Hedges, L.V., Higgins, J.P.T., & Rothstein, H.R. (2009). Introduction to meta-analysis. U.K.: John Wiley & Sons, Ltd.
Borenstein, M., Hedges, L.V., Higgins, J.P.T., & Rothstein, H.R. (2005). Comprehensive Meta-Analysis [Computer software]. Englewood, NJ: Biostat.
Bruton, A. (2009). Designing research into the effects of grammar correction in L2 writing: Not so straightforward. Journal of Second Language Writing, 18, 136-140.
Chandler, J. (2004). A response to Truscott. Journal of Second Language Writing, 13, 345-348.
Chandler, J. (2009). Response to Truscott. Journal of Second Language Writing, 18, 57-58.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.
Doughty, C., & Varela, E. (1998). Communicative Focus on Form. In C. Doughty and J. Williams (Eds.), Focus on form in classroom second language acquisition (pp. 114-138). Cambridge: Cambridge University Press.
Ellis, R. (2009). A typology of written corrective feedback types. ELT Journal, 63, 97-107. doi:10.1093/elt/ccn023
Ellis, R., Sheen, Y., Murakami, M., & Takashima, H. (2008). The effects of focused and unfocused written corrective feedback in an English as a foreign language context. System, 36, 353-371. doi:10.1016/j.system.2008.02.001
Fazio, L.L. (2001). The effect of corrections and commentaries on the journal writing accuracy of minority- and majority-language students. Journal of Second Language Writing, 10, 235-249.
Ferris, D. (1999). The case for grammar correction in L2 writing classes: A response to Truscott. Journal of Second Language Writing, 8, 1-11.
Ferris, D. (2004). The "grammar correction" debate in L2 writing: Where are we, and where do we go from here? (and what do we do in the mean time...?), Journal of Second Language Writing, 13, 49-62. doi:10.1016/j.jslw.2004.04.005
Ferris, D. (2010). Second language writing research and written corrective feedback in SLA. Studies in Second Language Acquisition, 32, 181-210. doi:10.1017/S0272263109990490
Ferris, D., & Roberts, B. (2001). Error feedback in L2 writing classes: How explicit does it need to be? Journal of Second Language Writing, 10, 161-184.
Guenette, D. (2007). Is feedback pedagogically correct? Research design issues in studies of feedback on writing. Journal of Second Language Writing, 16, 40-53. doi:10.1016/j.jslw.2007.01.001
Hunter, J., & Schmidt, F. (2004). Methods of meta-analysis. London: SAGE Publications.
Hyland, K., & Hyland, F. (2006). Feedback on second language students' writing. Language Teaching, 39, 83-101.
Iwashita, N. (2001). The effect of learner proficiency on corrective feedback and modified output in NN-NN interaction. System, 29, 267-287.
Kang, E., & Han, Z. (2015). The efficacy of written corrective feedback in improving L2 written accuracy: A meta-analysis. The Modern Language Journal, 99, 1-18.
Kao, C.W. (2022). Does one size fit all? The scope and type of error in direct feedback effectiveness. Applied Linguistics Review. Advance online publication. https://doi.org/10.1515/applirev-2021-0143
Kao, C.W., & Wible, D. (2014). A meta-analysis on the effectiveness of grammar correction in second language writing. English Teaching & Learning, 38, 29-69.
Larson-Hall, J. (2010). A guide to doing statistics in second language research using SPSS. New York: Routledge.
Lee, I. (2013). Research into practice: Written corrective feedback. Language Teaching, 46, 108-119. doi:10.1017/ s0261444812000390
Li, S. (2010). The effectiveness of corrective feedback in SLA: A meta-analysis. Language Learning, 60, 309-365. doi: 10.1111/j.1467-9922.2010.00561.x
Lim, S.C., & Renandya, W.A. (2020). Efficacy of written corrective feedback in writing instruction: A meta-analysis. TESL-EJ: Teaching English as a Second or Foreign Language, 24, 1-26.
Lipsey, M.W., & Wilson, D.B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Lyster, R., & Saito, K. (2010). Oral feedback in classroom SLA: A meta-analysis. Studies in Second Language Acquisition, 32, 265-302. doi:10.1017/S0272263109990520
Mackey, A. & Goo, J. (2007). Interaction research in SLA: A meta-analysis and research synthesis. In A. Mackey (Ed.), Conversational interaction in second language acquisition: A collection of empirical studies (pp. 407-452). Oxford: Oxford University Press.
Muranoi, H. (2000). Focus on form through interaction enhancement: Integrating formal instruction into a communicative task in EFL classrooms. Language Learning, 50, 617-673.
Norris, J.M. & Ortega, L. (2000). Effectiveness of L2 instruction: A research synthesis and quantitative meta-analysis. Language Learning, 50, 417-528. doi: 10.1111/0023-8333.00136
Polio, C., Fleck, C., & Leder, N. (1998). "If I only had more time": ESL learners' changes in linguistic accuracy on essay revisions. Journal of Second Language Writing, 7, 43-68.
Reynolds, B.L., & Teng, M.F. (2021). Innovative Approaches in Teaching English Writing to Chinese Speakers. Berlin: De Gruyter Mouton.
Russell, J., & Spada, N. (2006). The Effectiveness of Corrective Feedback for the Acquisition of L2 Grammar: A Meta-analysis of the Research. In J.M. Norris and L. Ortega (Eds.), Synthesizing research on language learning and teaching (pp. 133-164). Amsterdam: John Benjamins Publishing Company.
Shao, J., & Liu, Y. (2022). Written corrective feedback, learner-internal cognitive processes, and the acquisition of regular past tense by Chinese L2 learners of English. Applied Linguistics Review, 13, 1005-1028. https://doi.org/10.1515/applirev-2019-0131
Sheen, Y. (2004). Corrective feedback and learner uptake in communicative classrooms across instructional settings. Language Teaching Research, 8, 263-300.
Sheen, Y. (2007). The effect of focused written corrective feedback and language aptitude on ESL learners' acquisition of articles. TESOL Quarterly, 41, 255-283. doi:10.1002/j.1545-7249.2007.tb00059.x
Sheen, Y., Wright, D., & Moldawa, A. (2009). Differential effects of focused and unfocused written correction on the accurate use of grammatical forms by adult ESL learners. System, 37, 556-569. doi:10.1016/j.system.2009.09.002
Silva, T., & Wang, Z. (Eds.). (2020). Reconciling translingualism and second language writing. Routledge.
Truscott, J. (1996). The case against grammar correction in L2 writing classes. Language Learning, 46, 327-369. doi:10.1111/j.1467-1770.1996.tb01238.x
Truscott, J. (1999). The case for "The case against grammar correction in L2 writing classes": A response to Ferris. Journal of Second Language Writing, 8, 111-122.
Truscott, J. (2004a). Evidence and conjecture on the effects of correction: A response to Chandler. Journal of Second Language Writing, 13, 337-343. doi:10.1016/j.jslw.2004.05.002
Truscott, J. (2004b). The effectiveness of grammar instruction: Analysis of a meta-analysis. English Teaching & Learning, 28, 17-29.
Truscott, J. (2007). The effect of error correction on learners' ability to write accurately. Journal of Second Language Writing, 16, 255-272. doi:10.1016/j.jslw.2007.06.003
Truscott, J. (2009). Arguments and appearances: A response to Chandler. Journal of Second Language Writing, 18, 59-60.
Van Beuningen, C. G., De Jong, N. H., & Kuiken, F. (2012). Evidence on the effectiveness of comprehensive error correction in second language writing. Language Learning, 62, 1-41. doi:10.1111/j.1467-9922.2011.00674.x
Xu, C. (2009). Overgeneralization from a narrow focus: A response to Ellis et al. (2008) and Bitchener (2008). Journal of Second Language Writing, 18, 270-275.