
https://doi.org/10.17323/jle.2022.16377

Writing Feedback from a Research Perspective

Lilia Raitskaya 1, Elena Tikhonova 2, 3

1 Moscow State Institute of International Relations (MGIMO University)

2 National Research University Higher School of Economics

3 Peoples' Friendship University of Russia (RUDN University)

ABSTRACT

Introduction. Being an essential part of teaching and learning, feedback in close connection with evaluation is the focus of many researchers. Their interest lies mainly in automated systems, learners' and teachers' perceptions of writing feedback and feedback on feedback, new forms of feedback and their efficacy for motivation and writing performance. The review aims to identify the prevailing directions of research in the field.

Methods. The review is based on 194 documents extracted from the Scopus database. The ultimate results of the search for "writing feedback" were limited to a field filter (social sciences, arts & humanities), a language filter (English), a document type (article, review, book chapter, conference paper), as well as to manual screening in accordance with the inclusion criteria and relevance to the theme.

Results and Discussion. Seven directions of research were identified: automated and non-automated evaluation; feedback on writing: general issues; automated feedback; peer review and teacher feedback on writing; perceptions and emotions relating to writing feedback; feedback on scholarly writing; evaluation and improvement in Chinese calligraphy. The reviewed documents proved the prominence of the topic and greater interest in new computer-mediated forms of feedback on writing.

Conclusion. The results of the review may serve as guidance for researchers at large and potential JLE authors focused on teaching and learning writing. The limitations of the review are linked to the scope and methods applied.

KEYWORDS

feedback, evaluation, writing, automated feedback, automated evaluation, peer review, teacher feedback, faculty feedback, feedback on feedback, feedback tolerance

INTRODUCTION

Citation: Raitskaya, L. & Tikhonova, E. (2022). Writing Feedback from a Research Perspective. Journal of Language and Education, 8(4), 14-21. https://doi.org/10.17323/jle.2022.16377

Correspondence:

Elena Tikhonova, etihonova@hse.ru

Received: November 01, 2022 Accepted: December 02, 2022 Published: December 26, 2022

Writing is thoroughly studied within the linguistics, education, and communication domains. The thematic scope is rather wide, ranging from language aspects to teaching and learning writing. Feedback is an essential component of teaching and learning at all levels of education and a critical side of pedagogical communication. If wisely and efficiently worded, it encourages learners to improve their skills and reinforces self-regulated learning. Feedback is integral to evaluation, but in addition to assessment, it includes commentary on progress, errors, and strong and weak points. It is defined as "learning-oriented processes by which learners make sense of, evaluate, and use the information to improve their current and/or future performance" (Yu, Geng, Liu, & Zheng, 2021).

Researchers of feedback also concentrate on personal traits that either help or hinder students in recognising feedback. They analyse error and feedback tolerance (Aben, Timmermans, Dingyloudi, Lara, & Strijbos, 2022). As any evaluation is more or less biased, errors may be differently defined and perceived. The subjectivity of errors coupled with students' levels of tolerance both to errors and feedback have recently come to the fore (Aben, Timmermans, Dingyloudi, Lara, & Strijbos, 2022; Zhang, & McEneaney, 2020).

The taxonomy of feedback on writing is often based on the identity of the feedback giver (teacher, peer, self, and automated, or computer-mediated, feedback) or the mode of its delivery (face-to-face, written, oral, audio, video). Students' involvement in the feedback process also varies. Traditionally, the prevailing forms of teachers' feedback are oral (face-to-face or computer-mediated) or written. At present, technology offers other modalities for feedback, including text, video, and audio. The emotions that feedback variations may arouse in students vary, as interaction modes influence the language choices in feedback and the engagement of both sides of the feedback process (Cunningham, 2019; Cunningham, & Link, 2021). Cunningham and Link (2021) underline that "responding to student writing can be a complex interpersonal process with multifaceted effects on students' emotional states".

Doctoral writing occupies a separate place in the field. Being an integral part of academic writing and aligned with scholarly communication, PhD writing essentially differs from other courses. It may encompass many writing-related activities, including working in pairs and groups and peer review, many of which combine writing with a contribution to science. Though supervisory feedback primarily aims for a strong thesis, some researchers highlight supervisory feedback and feedforward on doctoral writing as essential for training fully-fledged researchers aspiring to a PhD degree (Carter, & Kumar, 2017).

Emotions aroused by feedback range from positive (satisfaction, pleasure, joy, etc.) to negative (dissatisfaction, frustration, sadness, discontent). Researchers have been studying this aspect of feedback, with papers on various feedback variations and environments (Lipnevich, Murano, Krannich, & Goetz, 2021; Mazzotta, & Belcher, 2018; Yu, Geng, Liu, & Zheng, 2021; Zhang, He, Du, Liu, & Huang, 2022; Yu, Di Zhang, & Liu, 2022).

The use of web-based platforms for feedback brought new possibilities for peer review in writing (Lam, 2021). Computer-mediated feedback as compared with face-to-face evaluation is more distant, time and place independent, written and perceived as anonymous (Tuzi, 2004).

Feedback tends to be teacher-centered. But to be productive, feedback must reach students whose feedback literacy is formed and be perceived as a stimulus for improvement. On the whole, students' feedback literacy is focused on "how learners approach, use, and evaluate feedback and manage their feelings in the process" (Yu, Di Zhang, & Liu, 2022). It is defined as "students' ability to understand, utilise and benefit from feedback processes" (Molloy, Boud, & Henderson, 2020). The feedback literacy structure covers "understanding feedback purposes and roles, seeking information, making judgements about work quality, working with emotions, and processing and using information for the benefit of their future work" (Molloy, Boud, & Henderson, 2020, p. 527).

With all the advantages feedback on writing entails, it may occasionally have a negative side. "Lack of specification, low quality, superficial feedback, unclear feedback criteria, inconsistent feedback, one-way communication, and unclosed loop" may negatively affect students' development and performance, and occasionally may lead to their frustration (Yu, Geng, Liu, & Zheng, 2021).

The editorial review aims to determine the scope of research on writing feedback published in international journals. Thus, the review question we are to answer in this paper is the following:

• What are the major thematic clusters in the writing feedback domain?

METHODS

To estimate how deeply writing feedback has been researched, we searched for the keywords "teaching writing", "writing feedback", and "feedback on writing" in the field covering titles, abstracts, and keywords in the Scopus database as of November 21, 2022. Initially, with the applied limitations, the search brought 1,147 publications for all years. The limitations included a field filter (Social Sciences; Arts & Humanities), language (English), and types of publications (articles, reviews, book chapters, conference papers).
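A query of roughly the following form reproduces these limitations in Scopus Advanced Search syntax. The Python sketch below is an illustrative reconstruction rather than the exact string used for this review; the subject-area and document-type codes (SOCI, ARTS; ar, re, ch, cp) follow standard Scopus conventions.

def build_scopus_query(keywords, subject_areas, language, doc_types):
    """Assemble a Scopus Advanced Search string from the review's filters."""
    kw = " OR ".join(f'"{k}"' for k in keywords)
    subj = " OR ".join(f'LIMIT-TO(SUBJAREA, "{s}")' for s in subject_areas)
    docs = " OR ".join(f'LIMIT-TO(DOCTYPE, "{d}")' for d in doc_types)
    return (
        f"TITLE-ABS-KEY({kw}) AND ({subj}) "
        f'AND LIMIT-TO(LANGUAGE, "{language}") AND ({docs})'
    )

# The filters described above: keywords, subject areas, language, and document types.
query = build_scopus_query(
    keywords=["teaching writing", "writing feedback", "feedback on writing"],
    subject_areas=["SOCI", "ARTS"],      # Social Sciences; Arts & Humanities
    language="English",
    doc_types=["ar", "re", "ch", "cp"],  # article, review, book chapter, conference paper
)
print(query)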

Then, 208 publications on writing feedback were screened and manually processed on the basis of the inclusion and exclusion criteria (see Table 1). While filtering the publications, we found that fourteen articles were irrelevant to the subject and eliminated them from the list. Thus, the final selection included 194 publications for further analysis.

Ultimately, the 194 publications included 157 articles, 28 conference papers, 5 book chapters, and 4 reviews.

The 194 documents are distributed unevenly across years, with an upward tendency from 2012 (see Figure 1). The reviewed documents were published in the following sources: Assessing Writing (n=12); Journal of Second Language Writing (n=10); Lecture Notes in Computer Science (n=8); Calico Journal (n=7); System (n=6). The remaining journals each contributed from one to five publications.

The most prolific authors include J. Wilson (n=13), S. Yu (n=8), D.S. McNamara (n=7), R.D. Roscoe (n=7), and L.K. Allen (n=6).

The University of Delaware (n=13), Iowa State University (n=12), the University of Macau (n=9), Arizona State University (n=8), and Georgia State University (n=6) top the list of the affiliations. The geographical breakdown is shown in Figure 2.

Table 1

Review Criteria of Inclusion and Exclusion

Criteria | Inclusion | Exclusion
Database | Scopus Database | Databases other than Scopus
Language | English | Other languages
Period | All years | No criterion applicable
Subject Area | Social Sciences; Arts & Humanities | Other areas
Type of Publications | Articles; Reviews; Book Chapters; Conference Papers | Other types
Citations | All readings | No criterion applicable
Level of Education | All levels | No criterion applicable

Figure 1

Research on Writing Feedback: Breakdown by Year

Note. Source: Scopus Database as of November 21, 2022

The top ten countries (territories) include the USA (n=69), China (n=42), the UK (n=14), Australia (n=11), Taiwan (n=11), Canada (n=9), Turkey (n=6), Japan (n=5), Singapore (n=5), and Hong Kong (n=4).

The initial topical analysis of the selected publications led to the following segmentation:

(1) writing feedback types;

(2) writing evaluation;

(3) teacher feedback on writing;

(4) automated feedback and evaluation;

(5) emotions, perceptions, and motivation relating to writing feedback.

We categorized the 194 publications individually, each on our own, and then compared the results and refined the clusters. The ultimate breakdown includes seven clusters: automated and non-automated evaluation; feedback on writing: general issues; automated feedback; peer review and teacher feedback on writing; perceptions and emotions relating to writing feedback; feedback on scholarly writing; and evaluation and improvement in Chinese calligraphy.
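As a minimal illustration of how such independent double categorisation can be reconciled, the sketch below compares two coders' cluster assignments and flags the documents that need joint discussion. The helper name and toy data are hypothetical; this is not the procedure actually applied in the review.

from collections import Counter

def reconcile(coder_a, coder_b):
    """Compare two independent cluster assignments (doc_id -> cluster label):
    return the labels both coders agree on and the documents still in dispute."""
    agreed = {doc: label for doc, label in coder_a.items() if coder_b.get(doc) == label}
    disputed = sorted(doc for doc in coder_a if coder_b.get(doc) != coder_a[doc])
    return agreed, disputed

# Toy example with three documents and two of the seven clusters named above.
coder_a = {"doc1": "Automated Feedback", "doc2": "Feedback on Scholarly Writing", "doc3": "Automated Feedback"}
coder_b = {"doc1": "Automated Feedback", "doc2": "Automated Feedback", "doc3": "Automated Feedback"}
agreed, disputed = reconcile(coder_a, coder_b)
print(Counter(agreed.values()))  # provisional cluster sizes
print(disputed)                  # documents to resolve in discussion, e.g. ['doc2']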

RESULTS AND DISCUSSION

The 194 publications were distributed among the seven thematic clusters, with overlapping excluded in favour of the prevailing topic (see Table 2).

Figure 2

Research on Writing Feedback: Breakdown by Country (Territory)

Note. Source: Scopus Database as of November 21, 2022

Table 2

Thematic Clusters of Scopus-Indexed Publications on Writing Feedback (all years)

Thematic Cluster (Number of Publications, n): Brief Cluster Description

Automated and Non-Automated Evaluation (n=79): Automated writing evaluation; testing services for analysing papers; tools for automated evaluation, including computer-based checkers; evidence for evaluation inference; computer-generated quantitative assessments and qualitative diagnostic feedback on writing; effectiveness of automated writing evaluation systems

Feedback on Writing: General Issues (n=44): Types of writing feedback; cognitive effects of intervention on the revision process and text improvement; quality of feedback; feedback practices; audio feedback; improving text revision with self-regulated strategies; collaboration in writing

Automated Feedback (n=21): Automated formative feedback; intelligent tutoring systems, including the Writing Pal, Automated Essay Scoring, Research Writing Tutor, Automated Causal Discourse Analyzer, Formative Writing Systems (with automated scoring), Grammarly, MI Write, CyWrite, Peerceptiv

Peer Review and Teacher Feedback on Writing (n=18): Feedback on feedback; formative feedback to peers; vis-à-vis teacher feedback; peer critique of writing; learning from giving and receiving feedback on writing

Perceptions and Emotions Relating to Writing Feedback (n=15): Students' and academics' satisfaction or dissatisfaction with feedback on their writing; timeliness of feedback; students' engagement and interest in feedback on writing; motivation relationship with indicators of academic performance in writing; a sociocultural framework of the perception of writing feedback

Feedback on Scholarly Writing (n=12): Writing in disciplinary and academic contexts; supervisory feedback on doctoral writing; feedback for academic writing development; supervisory feedback literacy; automated evaluation in improving academic writing

Evaluation and Improvement in Chinese Calligraphy (n=5): Computer-aided calligraphic learning systems in supporting beginners of Chinese; manual assessment vs computer-aided systems; digital ink technology in Chinese calligraphy

Some of the clusters (Automated and Non-Automated Evaluation; Feedback on Writing: General Issues) were enlarged by combining several sub-themes.

Automated and Non-Automated Evaluation (n=82)

Evaluation is an essential component of any feedback. With computer-mediated evaluation systems on the rise, automated evaluation "has been applied with the significant frequency to the evaluation and assessment" of writing (Wang, Shang, & Briody, 2013). Automated evaluation has a potential for formative assessment (Ranalli, Link, & Chukharev-Hudilainen, 2017). Automated writing evaluation systems provide "immediate computer-generated quantitative assessments" (Bai, & Hu, 2017). Automated writing evaluation complements "instructor input with immediate scoring and qualitative feedback" (Li, Link, Ma, Yang, & Hegelheimer, 2014). Controversy arises when automated writing evaluation is applied "in high-stakes tests like TOEFL" (Stevenson, 2016). Students' motivation to use automated writing evaluation is "determined by perceived usefulness, attitude towards using and computer self-efficacy" (Li, Meng, Tian, Zhang, Ni, & Xiao, 2019). In addition to better writing performance, automated writing evaluation systems facilitate grammatical development (Crossman, & Kite, 2012). As research shows, automated evaluation systems still face a challenge in "evaluating content and discourse-specific feedback" (Saricaoglu, 2019).

Feedback on Writing: General Issues (n=44)

The approach to writing feedback is not unanimous. Many teachers strongly believe that corrective feedback to students' writing improves their accuracy, but others disagree (Guenette, 2007). Researchers study "the influences of different writing feedback practices on learner affective factors" (Yu, Jiang, & Zhou, 2020), including motivation. Some papers research feedback on writing in the context of higher education (Seror, 2009), including feedback on academic writing (Chang, 2014) and university students' feedback on feedback through student-generated screencasts (Fernandez-Toro, & Furnborough, 2014). Feedback literacy has attracted attention as it substantially increases the efficacy of feedback (Parker, & Baughan, 2009; Yu, Di Zhang, & Liu, 2022). The communication between a student and a teacher in a writing class creates the so-called "instructor-student loop" (Knight, Greenberger, & McNaughton, 2021).

Automated Feedback (n=21)

Computer-mediated systems providing feedback on writing are getting popular, and their characteristics are constantly being improved. Most systems are designed to improve students' writing proficiency. Such systems integrate a combination of "explicit strategy instruction, game-based practice, essay writing practice, and automated formative feedback" (Roscoe, Allen, Weston, Crossley, & McNamara, 2014). Systems of automated essay scoring analyse writing quantitatively and qualitatively across the feedback categories of grammar, usage, and mechanics (Dikli, & Bleyle, 2014). Automated systems are working "towards providing timely and appropriate feedback" (Calvo, & Ellis, 2010). Some studies proved that corrective feedback generated by systems may be similar to the direct comments made by teachers "in terms of improving the quality of the content by criteria of structure, organisation, supporting ideas and others" (Liu, Li, Xu, & Liu, 2017). Some systems (for instance, Research Writing Tutor) maintain "genre and discipline-specific feedback on the functional units of research article discourse" (Cotos, & Pendar, 2016). Roscoe, Allen, Johnson, and McNamara (2018) established that students' "perceptions of automated feedback accuracy, ease of use, relevance, and understanding" and attitudes over regular sessions led them to revising. The research on popular systems like Grammarly analysed students' acceptance of the new technology in editing and revising their essays and found outperformance of those who regularly applied Grammarly (Chang, Huang, & Whitfield, 2021; Tambunan, Andayani, Sari, & Lubis, 2022).

Peer Review and Teacher Feedback on Writing (n=18)

Teacher feedback is a traditional form prevailing in writing across all levels of education. It has been in the spotlight of researchers for many years. The quality of teachers' feedback determines learners' efficacy in writing. Students tend to react positively to teachers' feedback relating to both content and language errors (Elwood, & Bode, 2014), but most learners see teachers' feedback as prescriptive directions to be followed without fail (Still, & Koerber, 2010). At the same time, giving feedback on student writing "can be a learning experience for most L2 writing teachers" (Yu, 2021). With the introduction of automated writing feedback systems, researchers compare them with feedback provided by teachers (Howard Chen, Sarah Cheng, & Christine Yang, 2017).

The provision of teachers' feedback on writing is limited in some environments and may be offset by peer feedback. The latter is studied as a resource leading to greater improvements in writing (Yang, & Badger, 2006). Another study focuses on the timing of the peer review and further student writers' revisions (Baker, 2016). Facilitating writing may be realized through directed peer review (Crossman, & Kite, 2012). Peer review is researched in various environments, e.g. institutionally integrated teletandem sessions (Aranha, & Cavalari, 2015) and online peer review platforms (Kumaran, McDonagh, & Bailey, 2017). The meta-analysis conducted by Thirakunkovit and Chamcharatsri (2019) found a noticeable difference in effect sizes between untrained peer feedback and peer feedback with prior training.

Perceptions and Emotions Relating to Writing Feedback (n=15)

As efficient feedback on writing results in improved texts, it must influence students' emotions in a positive way, strengthening their motivation and engagement. Students' and academics' approaches to feedback and its evaluation often differ. The research findings proved "a significant discord between staff and students to certain aspects of feedback practice" (Mulliner, & Tucker, 2017; Liu, & Wu, 2019). Writing feedback is also considered from the perspective of self-efficacy and self-regulation aptitude (Ekholm, Zumbrunn, & Conklin, 2015). Some papers dwell upon teachers' emotions in giving feedback and their attitudes to automated writing evaluation and feedback (Wilson et al., 2021).

Feedback on Scholarly Writing (n=12)

These papers dwell on feedback on students' and undergraduates' writing in the disciplines, including faculty feedback (Hyland, 2013); feedback for academic writing development in postgraduate research (Hey-Cunningham, Ward, & Miller, 2021); students' engagement with automated feedback on academic writing (Zhang, & Xu, 2022); effective computer-based writing tools for the support of composing scholarly texts by non-native speakers (Lee, Wang, Chen, & Yu, 2021); doctoral writing feedback across cultures (Carter, Sun, & Jabeen, 2021); and supervisory feedback on doctoral writing (Carter, & Kumar, 2017; Wei, Carter, & Laurs, 2019).



Evaluation and Improvement in Chinese Calligraphy (n=5)

Chinese calligraphy is gaining popularity worldwide. Beginners of Chinese tend to face difficulties in Chinese calligraphy and suffer from unstable character writing. There are five papers on Chinese calligraphy writing on the reviewed list. They focus on computer-mediated digital-ink writing and methods of evaluation and improvement (Lai, & Zhang, 2021; Wu, Zhou, & Cai, 2013).

CONCLUSION

The identified directions in the research on writing feedback cover automated and non-automated evaluation; feedback on writing: general issues; automated feedback; peer review and teacher feedback on writing; perceptions and emotions relating to writing feedback; feedback on scholarly writing; and evaluation and improvement in Chinese calligraphy. They may provide reliable guidance for researchers. The review results are likely to serve as a landmark for potential JLE authors working on relevant topics. However, this review has some limitations. First, it is a probing study of the topical area. Second, the review has a simplified document-selection method that does not allow an in-depth analysis of the field. Future researchers may apply more complex review methods, e.g. a scoping review methodology. A further study of the field needs to spot the gaps in a wider database covering more publications.

DECLARATION OF COMPETING INTEREST

None declared.

REFERENCES

Aben, J. E. J., Timmermans, A. C., Dingyloudi, F., Lara, M. M., & Strijbos, J. (2022). What influences students' peer-feedback uptake? Relations between error tolerance, feedback tolerance, writing self-efficacy, perceived language skills and peer-feedback processing. Learning and Individual Differences, 97. https://doi.org/10.1016/j.lindif.2022.102175

Aranha, S., & Cavalari, S. M. S. (2015). Institutional integrated teletandem: What have we been learning about writing and peer feedback? DELTA Documentacao De Estudos Em Linguistica Teorica e Aplicada, 31(3), 763-780. https://doi.org/10.1590/0102-445039175922916369

Bai, L., & Hu, G. (2017). In the face of fallible AWE feedback: How do students respond? Educational Psychology, 37(1), 67-81. https://doi.org/10.1080/01443410.2016.1223275

Baker, K. M. (2016). Peer review as a strategy for improving students' writing process. Active Learning in Higher Education, 17(3), 179-192. https://doi.org/10.1177/1469787416654794

Carter, S., & Kumar, V. (2017). 'Ignoring me is part of learning': Supervisory feedback on doctoral writing. Innovations in Education and Teaching International, 54(1), 68-75. https://doi.org/10.1080/14703297.2015.1123104

Carter, S., Sun, Q., & Jabeen, F. (2021). Doctoral writing: Learning to write and give feedback across cultures. Studies in Graduate and Postdoctoral Education, 12(3), 371-383. https://doi.org/10.1108/SGPE-07-2020-0054

Chang, G. C. L. (2014). Writing feedback as an exclusionary practice in higher education. Australian Review of Applied Linguistics, 37(3), 262-275. https://doi.org/10.1075/aral.37.3.05cha

Chang, T., Li, Y., Huang, H., & Whitfield, B. (2021). Exploring EFL students' writing performance and their acceptance of AI-based automated writing feedback. Paper presented at the ACM International Conference Proceeding Series, 31-35. https://doi.org/10.1145/3459043.3459065

Cotos, E., & Pendar, N. (2016). Discourse classification into rhetorical functions for AWE feedback. CALICO Journal, 33(1), 92-116. https://doi.org/10.1558/cj.v33i1.27047

Crossman, J. M., & Kite, S. L. (2012). Facilitating improved writing among students through directed peer review. Active Learning in Higher Education, 13(3), 219-229. https://doi.org/10.1177/1469787412452980

Cunningham, K. J. (2019). How language choices in feedback change with technology: Engagement in text and screencast feedback on ESL writing. Computers and Education, 135, 91-99. https://doi.org/10.1016/j.compedu.2019.03.002

Cunningham, K. J., & Link, S. (2021). Video and text feedback on ESL writing: Understanding ATTITUDE and negotiating relationships. Journal of Second Language Writing, 52. https://doi.org/10.1016/j.jslw.2021.100797

Dikli, S., & Bleyle, S. (2014). Automated essay scoring feedback for second language writers: How does it compare to instructor feedback? Assessing Writing, 22, 1-17. https://doi.org/10.1016/j.asw.2014.03.006

Ekholm, E., Zumbrunn, S., & Conklin, S. (2015). The relation of college student self-efficacy toward writing and writing self-regulation aptitude: Writing feedback perceptions as a mediating variable. Teaching in Higher Education, 20(2), 197-207. https://doi.org/10.1080/13562517.2014.974026

Elwood, J. A., & Bode, J. (2014). Student preferences vis-à-vis teacher feedback in university EFL writing classes in Japan. System, 42(1), 333-343. https://doi.org/10.1016/j.system.2013.12.023

Howard Chen, H., Sarah Cheng, H., & Christine Yang, T. (2017). Comparing grammar feedback provided by teachers with an automated writing evaluation system. English Teaching and Learning, 41(4), 99-131. https://doi.org/10.6330/ETL.2017.41.4.04

Hyland, K. (2013). Faculty feedback: Perceptions and practices in L2 disciplinary writing. Journal of Second Language Writing, 22(3), 240-253. https://doi.org/10.1016/j.jslw.2013.03.003

Knight, S. K., Greenberger, S. W., & McNaughton, M. E. (2021). An interdisciplinary perspective: The value that instructors place on giving written feedback. Active Learning in Higher Education, 22(2), 115-128. https://doi.org/10.1177/1469787418810127

Kumaran, S. R. K., McDonagh, D. C., & Bailey, B. P. (2017). Increasing quality and involvement in online peer feedback exchange. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW). https://doi.org/10.1145/3134698

Lai, Y., & Zhang, X. (2021). Evaluating the stability of digital ink Chinese characters from CFL beginners based on center of gravity guided by calligraphy. ACM International Conference Proceeding Series, 19-25. https://doi.org/10.1145/3451400.3451404

Lam, S. T. E. (2021). A web-based feedback platform for peer and teacher feedback on writing: An activity theory perspective. Computers and Composition, 62. https://doi.org/10.1016/j.compcom.2021.102666

Lee, L., Wang, Y., Chen, C., & Yu, L. (2021). Ensemble multi-channel neural networks for scientific language editing evaluation. IEEE Access, 9, 158540-158547. https://doi.org/10.1109/ACCESS.2021.3130042

Lipnevich, A. A., Murano, D., Krannich, M., & Goetz, T. (2021). Should I grade or should I comment: Links among feedback, emotions, and performance. Learning and Individual Differences, 89. https://doi.org/10.1016/j.lindif.2021.102020

Li, R., Meng, Z., Tian, M., Zhang, Z., Ni, C., & Xiao, W. (2019). Examining EFL learners' individual antecedents on the adoption of automated writing evaluation in China. Computer Assisted Language Learning, 32(7), 784-804. https://doi.org/10.1080/09588221.2018.1540433

Liu, M., Li, Y., Xu, W., & Liu, L. (2017). Automated essay feedback generation and its impact on revision. IEEE Transactions on Learning Technologies, 10(4), 502-513. https://doi.org/10.1109/TLT.2016.2612659

Liu, Q., & Wu, S. (2019). Same goal, varying beliefs: How students and teachers see the effectiveness of feedback on second language writing. Journal of Writing Research, 11(2), 299-330. https://doi.org/10.17239/jowr-2019.11.02.03

Mazzotta, M., & Belcher, D. (2018). Social-emotional outcomes of corrective feedback as mediation on second language Japanese writing. Journal of Cognitive Education and Psychology, 17(1), 47-69. https://doi.org/10.1891/1945-8959.17.1.47

Molloy, E., Boud, D., & Henderson, M. (2020). Developing a learning-centred framework for feedback literacy. Assessment & Evaluation in Higher Education, 45(4), 527-540. https://doi.org/10.1080/02602938.2019.1667955

Mulliner, E., & Tucker, M. (2017). Feedback on feedback practice: Perceptions of students and academics. Assessment and Evaluation in Higher Education, 42(2), 266-288. https://doi.org/10.1080/02602938.2015.1103365

Parker, P., & Baughan, P. (2009). Providing written assessment feedback that students will value and read. International Journal of Learning, 16(11), 253-262. https://doi.org/10.18848/1447-9494/cgp/v16i11/46715

Ranalli, J., Link, S., & Chukharev-Hudilainen, E. (2017). Automated writing evaluation for formative assessment of second language writing: Investigating the accuracy and usefulness of feedback as part of argument-based validation. Educational Psychology, 37(1), 8-25. https://doi.org/10.1080/01443410.2015.1136407

Roscoe, R. D., Allen, L. K., Johnson, A. C., & McNamara, D. S. (2018). Automated writing instruction and feedback: Instructional mode, attitudes, and revising. The Proceedings of the Human Factors and Ergonomics Society, 3, 2089-2093. https://doi.org/10.1177/1541931218621471

Roscoe, R. D., Allen, L. K., Weston, J. L., Crossley, S. A., & McNamara, D. S. (2014). The writing pal intelligent tutoring system: Usability testing and development. Computers and Composition, 34, 39-59. https://doi.org/10.1016/j.compcom.2014.09.002

Saricaoglu, A. (2019). The impact of automated feedback on L2 learners' written causal explanations. ReCALL, 31(2), 189-203. https://doi.org/10.1017/S095834401800006X

Stevenson, M. (2016). A critical interpretative synthesis: The integration of automated writing evaluation into classroom writing instruction. Computers and Composition, 42, 1-16. https://doi.org/10.1016/j.compcom.2016.05.001

Still, B., & Koerber, A. (2010). Listening to students: A usability evaluation of instructor commentary. Journal of Business and Technical Communication, 24(2), 206-233. https://doi.org/10.1177/1050651909353304

Tambunan, A. R. S., Andayani, W., Sari, W. S., & Lubis, F. K. (2022). Investigating EFL students' linguistic problems using Grammarly as automated writing evaluation feedback. Indonesian Journal of Applied Linguistics, 12(1), 16-27. https://doi.org/10.17509/IJAL.V12I1.46428

Thirakunkovit, S., & Chamcharatsri, B. (2019). A meta-analysis of effectiveness of teacher and peer feedback: Implications for writing instructions and research. Asian EFL Journal, 21(1), 140-170.

Tuzi, F. (2004). The impact of feedback on the revisions of L2 writers in an academic writing course. Computers and Composition, 21(2), 217-235. https://doi.org/10.1016/j.compcom.2004.02.003

Yang, M., Badger, R., & Yu, Z. (2006). A comparative study of peer and teacher feedback in a Chinese EFL writing class. Journal of Second Language Writing, 15(3), 179-200. https://doi.org/10.1016/j.jslw.2006.09.004

Yu, S. (2021). Feedback-giving practice for L2 writing teachers: Friend or foe? Journal of Second Language Writing, 52. https://doi.org/10.1016/j.jslw.2021.100798

Yu, S., Di Zhang, E., & Liu, C. (2022). Assessing L2 student writing feedback literacy: A scale development and validation study. Assessing Writing, 53. https://doi.org/10.1016/j.asw.2022.100643

Yu, S., Geng, F., Liu, C., & Zheng, Y. (2021). What works may hurt: The negative side of feedback in second language writing. Journal of Second Language Writing, 54. https://doi.org/10.1016/j.jslw.2021.100850

Wang, Y., Shang, H., & Briody, P. (2013). Exploring the impact of using automated writing evaluation in English as a foreign language university students' writing. Computer Assisted Language Learning, 26(3), 234-257. https://doi.org/10.1080/09588221.2012.655300

Wei, J., Carter, S., & Laurs, D. (2019). Handling the loss of innocence: First-time exchange of writing and feedback in doctoral supervision. Higher Education Research and Development, 38(1), 157-169. https://doi.org/10.1080/07294360.2018.1541074

Wilson, J., Ahrendt, C., Fudge, E. A., Raiche, A., Beard, G., & MacArthur, C. (2021). Elementary teachers' perceptions of automated feedback and automated scoring: Transforming the teaching and learning of writing using automated writing evaluation. Computers and Education, 168. https://doi.org/10.1016/j.compedu.2021.104208

Wu, Y., Lu, X., Zhou, D., & Cai, Y. (2013). Virtual calligraphic learning and writing evaluation. Proceedings - 6th International Symposium on Computational Intelligence and Design, ISCID 2013, 2, 108-111. https://doi.org/10.1109/ISCID.2013.141

Zhang, M., He, Q., Du, J., Liu, F., & Huang, B. (2022). Learners' perceived advantages and social-affective dispositions toward online peer feedback in academic writing. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.973478

Zhang, X., & McEneaney, J. E. (2020). What is the influence of peer-feedback and author response on Chinese university students' English writing performance? Reading Research Quarterly, 55(1), 123-146. https://doi.org/10.1002/rrq.259

Zhang, Z., & Xu, L. (2022). Student engagement with automated feedback on academic writing: A study on Uyghur ethnic minority students in China. Journal of Multilingual and Multicultural Development. https://doi.org/10.1080/01434632.2022.2102175
