
National Research University Higher School of Economics
Journal of Language & Education, Volume 6, Issue 1, 2020

Delvand, S. A., & Heidar, D. M. (2020). Computerized Group Dynamic Assessment and Listening Comprehension Ability: Does Self-Efficacy Matter? Journal of Language and Education, 6(1), 157-172. https://doi.org/10.17323/jle.2020.9834

This article is published under the Creative Commons Attribution 4.0 International License.

Computerized Group Dynamic Assessment and Listening Comprehension Ability: Does Self-Efficacy Matter?

Shahin Abassy Delvand, Davood Mashhadi Heidar

Islamic Azad University

Correspondence concerning this article should be addressed to Davood Mashhadi Heidar, Department of English, Tonekabon Branch, Islamic Azad University, Tonekabon, Iran.

E-mail: [email protected]

The present study investigated the effect of group dynamic assessment (DA) through software on Iranian intermediate EFL learners' listening comprehension ability. The main question of the study was whether dynamic assessment via CoolSpeech software had any effect on the listening comprehension ability of learners with high and low self-efficacy. To find the answer, 80 Iranian intermediate learners were selected from a population of 120 based on their scores on a placement test. A self-efficacy questionnaire was then used to assign the selected participants to two experimental groups, a low self-efficacious experimental group (n=20) and a high self-efficacious experimental group (n=20), as well as two control groups, each containing 20 participants. Next, a pretest of listening comprehension ability was administered to all groups, and no significant difference between their mean scores was observed. After a period of two months, during which the experimental groups received the dynamic assessment treatment through CoolSpeech software and the control groups received a placebo, a posttest of listening comprehension was administered to all groups. The data analysis revealed that the participants in the high self-efficacious experimental group achieved significantly better scores than the other groups. In the low self-efficacious experimental group, however, no significant change was observed, and its participants did not significantly outperform the control group. It was concluded that the group dynamic assessment method via software could have a significant effect on the listening comprehension ability of EFL learners with high self-efficacy.

Keywords: computer-assisted language learning (CALL), group dynamic assessment, listening comprehension ability, self-efficacy, zone of proximal development (ZPD)

Introduction

Computers play an increasingly significant role in our business, recreational, and educational activities. It is quite clear that in our daily lives computers are here to stay. Schools across Iran are helping students become computer literate at earlier ages than ever before (Mohammadi & Mirdehghan, 2014). Computer programs may best serve students who need an additional challenge apart from the classroom. Computers may also be helpful to students who are behind their peers academically (Nutta, 2013), serving as an economical tutor that brings these students up to levels of mastery in line with other study skills. In fact, using computers in the classroom creates an engaging atmosphere in which learners interactively take part in classroom discussions with their peers and the teacher (Rahimi & Hosseini, 2011). Computer-based instruction, most commonly referred to as CALL, has a long history since it was first introduced (Kulik, Bangert, & Williams, 1983). Much of the research on the effectiveness of CALL has occurred over the past twenty years, highlighting the need for both teachers and learners to be computer literate. Early supporters of CALL programs (Gerard, 1967; Oliver, 1999; Warschauer & Healey, 1998) believed that these educational devices might pave the way for both teachers and learners to benefit from them in the language classroom and create variety in the learning atmosphere, which probably leads to a more cooperative learning environment. Some of the expected benefits included self-paced instruction for students, which would result in faster learning, the availability of richer and more sophisticated materials, expert system-based instruction, and dynamic assessment (Corbeil, 2007; Wang, 2016; Zhao, 2003). The benefits for teachers were found to be the ease of modifying instructional materials and better time management, which allowed them to allocate more time to assist individual learners who required additional contact (Apperson, Laws, & Scepansky, 2006; Coleman, 2005). The case for CALL rests on the fact that, at the very least, it seems to be more effective and productive than traditional methods of instruction (Alodail, 2014; Lau, Yen, Li, & Wah, 2014), paving the way for language learners to develop their language skills more efficiently; one of the most important of these is listening comprehension.

Listening comprehension is of great importance in second and foreign language research and should be acquired at "the early stages of L2 learning" (Nation & Newton, 2009, p. 37). It is through listening comprehension that the learners' input can be shaped, facilitating the learning of other language skills as well (Vandergrift & Goh, 2012). As stated by Rubin (1995), listening refers to "an active process in which listeners select and interpret information which comes from auditory and visual clues in order to define what is going on and what the speakers are trying to express" (p. 7). It has also been considered "the least understood and most overlooked of the four skills (Listening, Speaking, Reading, and Writing)" (Nation & Newton, 2009, p. 37). It is worth mentioning that listening comprehension seems to be overlooked in the foreign language context, as several Iranian researchers, such as Razmjoo and Riazi (2006), Hosseini (2007), and Jahangard (2007), have argued that little attention is paid to aural/oral skills in the Iranian EFL curriculum. These skills are not appropriately taught or evaluated (Razmjoo & Riazi, 2006); in fact, Razmjoo and Riazi argued that listening tasks have received insufficient time and emphasis. Thus, innovative methodologies such as CALL aligned with assessment can be taken into account when teaching listening comprehension.

Assessment and language teaching have been found to be interrelated, as Poehner (2005, 2009) believes that dynamic assessment (DA) can pave the way for language teachers to create meaningful interactions by providing appropriate feedback that does not stop the flow of interaction. As Poehner and Lantolf (2005) stated, DA "is a procedure for simultaneously assessing and promoting development that takes account of the individual's (or group's) zone of proximal development" (p. 240). They also stated that group DA takes place when students actively interact and try to encourage their peers to take part in the learning atmosphere while the teacher monitors their interaction. Inspired by CALL, computerized DA can be an effective tool for teaching language skills. Even though the capabilities of computers do not excite everyone, their potential for innovative computerized assessment can be of interest to multimedia test designers, instructional designers, and courseware developers (Drasgow & Olson-Buchanan, 1999). Because of this potential, numerous scholars have recommended the use of computers for evaluation (Hambleton, 1996). In addition, learners' characteristics, such as their self-efficacy, can also influence their performance.

According to Bernhardt (1997), self-efficacy can be represented as the learners' perception of their own capabilities when carrying out a task. According to Pajares (2000), it is formed when learners begin to judge their own competence. Ehrman (1996) described self-efficacy as "the degree to which the learners think they have the capacity to cope with the learning challenge" (cited in Arnold & Brown, 1999, p. 16). According to Bernhardt (1997), higher levels of self-efficacy can pave the way for language learners to be more successful in their learning, since their learning behaviors are goal-oriented. Conversely, less confident learners possess lower levels of self-efficacy and thus assume failure from the beginning (Bernhardt, 1997, cited in Rahimi & Abedini, 2009). Therefore, the degree of learners' self-efficacy can directly or indirectly influence their language learning progress (Gorban Doordinejad & Afshar, 2014).

In recent years, computer-based and web-based materials have been broadly applied by EFL instructors to compensate for the deficiencies of traditional listening classes. Digital technology can contribute to listening instruction by providing learners and teachers with intelligible input and output and by providing opportunities for the negotiation of meaning (Chapelle, 1999, cited in Puakpong, 2008). Using digital technology in listening instruction has significant constructive effects on EFL learners' listening skills (Bingham & Larson, 2006, cited in Puakpong, 2008). Hence, computerized DA can be applied as an effective methodology to help EFL learners develop their listening comprehension ability through CoolSpeech software.

Statement of the Problem

Educators are constantly seeking new and better ways to improve instruction (Kennedy, 2005). Discovering a method of instruction that meets or exceeds conventional methods would be an appropriate justification for its implementation. Computer-assisted instruction may be the answer to problems that have attracted educators' attention for years.

Computers have proved to be influential in the classroom, and studies related to the use of computers (e.g., Coiro & Dobler, 2007) vary in their findings. Some studies (e.g., Gersten, Fuchs, Williams, & Baker, 2001) show the benefits of computers as supplements over varying lengths of time with various populations of students at different levels of proficiency. Many dependent measures have also been used to gauge the quality of the teaching techniques, and some studies (e.g., Kennedy, 2005) have used statistical analyses to probe the effectiveness of computer programs in language learning. However, the abundance of research on CALL does not necessarily address the teaching of listening comprehension. Although the literature provides some evidence regarding the role of computerized instruction in vocabulary learning (e.g., Wang, 2016) or grammar (Corbeil, 2007; Yusof & Saadon, 2012), the use of computer software in teaching listening comprehension seems insufficiently examined, particularly in the Iranian context and in integration with group DA.

Apart from computer software, traditional types of assessment, including paper-and-pencil tests, are time-consuming, and the lack of concurrent assessment can be problematic for examiners as well as examinees. In fact, administration time is not a concern when utilizing computerized assessment (Thiagarajan, 1999). Moreover, scoring is conducted immediately when tests are computerized.

Last but not least, the integration of DA into computerized assessment might not have been well recognized in the literature, since each feature has been considered individually and the advantages of each have been presented separately. However, it seems that providing computerized dynamic assessment through instructional software can facilitate the teaching and learning of listening comprehension, allowing both teachers and learners to benefit from meaningful interactions while working on listening tasks. Therefore, the present study examined the effect of computerized DA on Iranian EFL learners' development of listening comprehension while also considering their self-efficacy.

In sum, it seems that DA and its application in technology-assisted language learning environments have been of interest to language scholars. However, the learners' performance through exposure to CALL integrated with DA while considering the learners' high and low self-efficacy might not have been examined deeply, particularly in a foreign language context, such as Iran. Hence, the current study addressed the following research questions:

Research Questions

RQ1. Does dynamic assessment via CoolSpeech software have any statistically significant effect on the listening comprehension ability of learners with high and low self-efficacy?

RQ2. Is there any significant difference in the listening comprehension ability of learners with high and low self-efficacy?

Literature Review

Theoretical Background: DA and its Components

DA has been found to be an effective methodology for evaluating the learners' performance while they are interacting with their peers. Poehner (2005) argued that teachers play the role of mediators since they attempt to take control of the learners' communication by providing the most beneficial type of feedback to keep track of the learners' engagement. In this regard, the notion of the Zone of Proximal Development (ZPD) is highlighted. The ZPD is considered to be the distance between two levels of assistance; one in which the learners need to be provided with educational support or scaffolding, and the other in which they can manage to carry out tasks autonomously (Vygotsky, 1978). It is assumed that DA should be directed to the learners' ZPD in order to help language learners feel more comfortable in their learning environment and enjoy peer communication monitored by the teacher (Poehner, 2009).

As stated by Vygotsky (1998), while traditional assessment only measures fully matured abilities, dynamic assessment measures both fully matured abilities and abilities that are still in the process of growing; therefore, dynamic assessment can reveal much more about the process of acquiring that information. Traditional psychological assessments are descriptive and do not explain developmental processes (Shabani et al., 2010). Vygotsky (1984) argued that by putting a learner's ZPD at the center of the assessment procedure, the teacher is able to monitor the learners' gradual development and examine their potential to initiate interaction with their peers (Minick, 1987).

As Sternberg and Grigorenko (2002) argued, in the context of DA the teacher-learner relationship is flexible, as the tester mediates throughout the evaluation process. In the DA context, an atmosphere of teaching and helping replaces the neutral stance of the traditional assessment context (Shabani et al., 2010). As claimed by Vygotsky (1998), independent problem solving shows only a small part of the cognitive ability of learners, that is, the actual level of cognitive development. Vygotsky believed that non-dynamic evaluation determines only a small fraction of the overall picture of development. He also argued that responsiveness to assistance is a central feature for understanding a learner's cognitive ability, since it can give instructors good insight into the future progress of the individual. In the DA approach, the learners' future performance can be envisaged through their development process (Yildirim, 2008).

Vygotsky (1998) argued that teaching and evaluation should not be separated from each other in DA research. Thus, the real focus ought to be on the learners' learning, which can be shaped when learners are involved in activities that foster peer interactions and the teacher's mediation. It can be pointed out that, with the help of mediators, the learners' independence in doing the tasks can be achieved (Yildirim, 2008). Dynamic testing is essentially a procedure that takes into account personal characteristics and their consequences for education and incorporates mediation within the evaluation procedure. In these procedures, the learning process is accentuated rather than the learning outcomes.

As stated by Sternberg and Grigorenko (2002), in traditional evaluation, questions are provided by an examiner and are all expected to be answered by the examinee consecutively, without any kind of feedback or intervention, unlike DA, which considers teaching and evaluation inseparable, as two sides of the same coin. DA is fundamentally based on Vygotsky's revolutionary insight that teaching leads to development in the ZPD. Before Vygotsky, it was generally believed that the only reliable marker of mental function was independent problem solving, but Vygotsky opposed this idea and proposed that independent problem solving could merely reveal a fraction of the learners' cognitive ability (Yildirim, 2008). Vygotsky claimed that receptiveness to assistance is as important as the actual developmental level and, since it offers insight into the future development of an individual, is an essential feature for understanding cognitive abilities (Yildirim, 2008).

It is worth mentioning that there are two models of DA, the interventionist and interactionist models. The former is concerned with evaluating the learners' performance before the intervention or instructional program, after which the target instruction takes place; this is followed by a second assessment in order to evaluate the effectiveness of the intervention (Poehner, 2009; Poehner & Lantolf, 2005). As the name suggests, the interventionist approach does not take into account the how of learning; only the learners' performance at the beginning and at the end of the instruction is of importance. In the interactionist approach, however, the learners' process of learning and the quality of their participation are taken into account. Poehner (2009) argued that the interactionist model justifies the need for ZPD-directed feedback in which a knowledgeable peer provides dynamic assessment in order to assist the learner in solving a task independently. It is here that the teacher tries to mediate the learners' interactions with their peers in order to help them benefit from each interaction and improve their autonomy (Poehner & Lantolf, 2005). Hence, group DA can be accommodated within the interactionist model, since the learners' interactions are mediated by the teacher, who tries to facilitate the learners' participation in a problem-solving task (Poehner, 2005). Group DA can be integrated with other instructional methods, such as CALL, in which learners have more opportunities to interact in a more attractive learning environment.

Previous Related Studies

Researchers have been interested in applying DA to improve the learners' development of language skills. Hill and Sabet (2009) conducted a research study on four possible dynamic speaking assessment approaches in a classroom setting: mediated assistance, transfer of learning, ZPD, and collaborative engagement. Mediated assistance took the form of teacher-learner interaction in order to check the learners' speaking ability. The ZPD approach was applied to a problem-solving group of students whose attention was directed at the social and cultural dimensions of the ZPD, called group-ZPD, since comparisons revealed that DA provided to individuals was not as effective as group DA combined with cooperative activity. The last DA approach was collaborative engagement, which was carried out for the purpose of identifying common speaking problems as well as difficulties in assessing the learners' speaking performance. The study involved four speaking assessments for freshman Japanese university students. Their study revealed that explicit and implicit prompts were effective in conditions where learners might face difficulties in comprehending or, sometimes, completing the target tasks. However, the nature of the prompts and the activities that stimulated them remained unclear, and the researchers recommended more research in this area. The study did not give additional consideration to manipulating classroom activities or to the participation of adult university students. The linguistic level used for pairing students was not thoroughly justified by the researchers. Moreover, the study was limited to pairing as the only grouping technique, which left the reader doubtful about the suitability of the grouping approach for dynamic speaking assessment.

Similarly, in a foreign language context, a study conducted by Shabani, Alavi, and Kaivanpanah (2012) aimed to probe whether mediation strategies could be identified through group DA as an instructional treatment, guided by a mediator of the learners' interactions in the listening classroom. Furthermore, the research attempted to highlight the impact of G-DA on knowledge construction among L2 learners. The researchers formulated a list of mediational tactics embracing various forms of implicit and explicit feedback. They also revealed how beneficial G-DA interaction could be in establishing a community of practice in which learners could greatly benefit from their classmates and instructors through cooperative scaffolding.

As listening comprehension can be affected by DA, Alavi and Taheri (2014) also looked into the impact of applying DA on the learners' ability to boost their listening comprehension. The findings demonstrated the learners' significant improvement in listening comprehension when DA was implemented in the classroom. The findings contributed to teachers' awareness of applying DA as an instructional approach to foster more peer interactions, which can pave the way for language learners to improve their listening comprehension through communication.

Not only language skills, but also the learners' pragmatic awareness can be affected by group DA. Hashemi Shahraki, Ketabi, and Barati (2015) investigated whether group DA could be effective in evaluating EFL learners' pragmatic awareness of conversational implicatures, while detecting the mediational tactics that could contribute to improving this knowledge. The results showed that through enhancing EFL learners' pragmatic comprehension of conversational implicatures, G-DA could dramatically improve the learners' listening comprehension ability. Their results supported G-DA and its usefulness for L2 listening comprehension and pragmatic instruction.

Implementing DA in a technology-mediated learning environment appears to have rarely been considered. In a recent study, Mashhadi Heidar and Afghari (2015) examined the learners' listening comprehension ability when they were exposed to dynamic assessment in synchronous computer-mediated communication via the talking and writing technologies of Web 2.0 and Skype. In this study, the socio-cognitive progression of EFL learners was studied through DA grounded in Vygotsky's notion of instructional scaffolding in the ZPD. The results showed that DA in synchronous computer-mediated communication through interaction in the ZPD enabled the educators to explore both the actual and the potential level of the learners' listening ability.

By integrating DA into computer software, Mashhadi Heidar (2016) studied the role of DA in enhancing the listening skill of EFL students via Web 2.0. His findings proved that online DA via Web 2.0 improves listening comprehension ability in Iranian EFL learners. Findings also revealed that when DA is applied in an online learning platform, it can be more flexibly ZPD-directed, which can help language learners to feel more independent in doing the related tasks.

Finally, Ashraf, Motallebzadeh, and Ghazizadeh (2016) attempted to identify whether electronic-based DA affected the listening comprehension ability of L2 learners. The experimental group received a treatment in which an online application of DA was practiced in order to establish a rather different learning atmosphere for EFL learners. The results revealed that electronic-based DA significantly improved the listening skill of the EFL learners. In fact, technology appears to be a helpful way to benefit from DA more efficiently.

In the aforementioned studies, the impact of self-efficacy on EFL learners' achievement under conventional methods and the effects of computerized and non-computerized DA on L2 learners' listening skill were investigated, but none of the previous studies examined the role of EFL learners' self-efficacy in computerized group dynamic assessment. The present study builds on previous investigations of these variables by examining the effect of self-efficacy on computerized G-DA, focusing on dimensions that had not been addressed previously.

Method

Design

The present study adopted a quasi-experimental design in which a homogeneous sample of language learners was first established. The participants were then divided into four groups, two experimental and two control. The study employed a pretest, treatment, and posttest design.

Participants

The study was conducted with 80 Iranian EFL learners at the intermediate level from Kish Language Institute. The participants were all adult female learners attending the institute, because the researcher aimed to keep age and gender fixed; teenage and middle-aged participants would have required different treatment approaches, so the age range was kept limited. To ensure homogeneity, the participants were chosen from among 120 students based on their scores on a PET (Preliminary English Test): having calculated the mean and the SD, the researcher selected the 80 students whose scores fell within one SD above and below the mean, all of whom were at an intermediate proficiency level. Afterwards, the selected participants were given a self-efficacy questionnaire (MSEQ), and low and high self-efficacious participants were identified. Then, 20 high self-efficacious participants were assigned to the first experimental group, 20 low self-efficacious participants were assigned to the second experimental group, and 20 participants were assigned to each of the two control groups.

Instruments and Materials

CoolSpeech Software (version 5.0)

CoolSpeech, produced by ByteCool Software Inc. (2001), is a text-to-speech program compatible with the Microsoft Speech API. It retrieves and reads aloud texts from various sources, enabling learners to listen to online news from any URL, convert texts into spoken wave files (.wav), have text and HTML files read aloud, listen to new messages from email accounts, listen to any sentences typed anywhere in Microsoft Windows, listen to texts copied to the Windows clipboard, and schedule files, URLs, and emails to be read aloud. These capabilities were used in the treatment of the Iranian EFL learners' English listening comprehension skill.1
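CoolSpeech itself is a proprietary Windows application, so its exact interface is not reproduced here. As a rough, hypothetical analogue of the two capabilities that mattered for the treatment (reading a passage aloud and saving it as a .wav file), a minimal sketch using the cross-platform pyttsx3 library, which was not used in the study, might look as follows; the passage text, speaking rate, and file name are illustrative only.

```python
# A hypothetical stand-in for CoolSpeech's read-aloud and text-to-.wav features,
# sketched with the pyttsx3 text-to-speech library; not the software used in the study.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slow the reading pace for intermediate listeners

passage = "Open your books and listen to the dialogue about weekend plans."  # illustrative text
engine.say(passage)                           # read the passage aloud
engine.save_to_file(passage, "passage.wav")   # also convert it into a wave file
engine.runAndWait()                           # run the queued speech commands
```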

Proficiency Test

To ensure the homogeneity of the participants, a PET was administered in order to select intermediate language learners. The scores obtained from the test indicated proficiency level: learners who passed the exam with scores higher than 65 were considered suitable for the intermediate level. It should be mentioned that, among those who passed the exam, only the participants whose scores fell within one SD above and below the mean were selected for the study.

1 "Purchase CoolSpeech 5.0", (2019, April 29). Retrieved from http://www.bytecool.com/company.htm. Retrieved 2019-04-29.

Listening Comprehension Test

A listening comprehension test was designed and developed by the researcher for the intermediate level of proficiency. In order to make sure that the listening texts in the test were of the right level, they were selected from different listening tasks of the textbook that the participants studied at Kish Language Institute. The reliability of the test was examined in a pilot study: the researcher gave the listening test to 15 participants who were representative of the participants of the study, and the collected data were analyzed through Cronbach's alpha. The test reliability was estimated at .71. As the test showed an acceptable level of reliability, it was utilized as both the pretest and the posttest. Designing two different tests increases the risk of differing levels of difficulty across versions; to avoid this problem, one test was used. Since the treatment took almost two months, the same test could serve as the pretest and posttest, with almost no chance of the participants remembering the questions.
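A minimal sketch of how such a Cronbach's alpha estimate could be computed is given below; it assumes a hypothetical file pilot_listening_scores.csv holding one row per pilot participant and one scored column per item, and is an illustration rather than the authors' analysis script.

```python
# Illustrative Cronbach's alpha computation (not the study's own script).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Alpha for a participants-by-items score matrix."""
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

pilot = pd.read_csv("pilot_listening_scores.csv")    # hypothetical 15 x 20 item matrix
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")  # the study reports .71
```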

Table 1
Reliability of the Listening and Reading Sections

Section      N of Items    Cronbach's Alpha
Listening    25            .709
Reading      35            .771

As Table 1 shows, the reliability of the listening and reading sections of the PET used in the study was .709 and .771, respectively, which is acceptable.

In order to check the writing and speaking sections, a Pearson correlation was computed to check the inter-rater reliability.
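As a minimal sketch of this inter-rater check, one might proceed as follows, assuming a hypothetical file pet_ratings.csv with one row per test taker and one column of scores per rater per skill; the column names are invented for illustration.

```python
# Illustrative inter-rater reliability check via Pearson correlation (not the study's own script).
import pandas as pd
from scipy.stats import pearsonr

ratings = pd.read_csv("pet_ratings.csv")   # hypothetical: 120 rows, one column per rater per skill

for skill in ("writing", "speaking"):
    r, p = pearsonr(ratings[f"{skill}_rater1"], ratings[f"{skill}_rater2"])
    print(f"{skill}: r = {r:.3f}, p = {p:.3f}")   # the study reports r = .923 and .949
```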

Table 2
Pearson Correlation Results for the Inter-Rater Reliability of the Writing and Speaking Sections

                                          Writing Rater 1     Writing Rater 2
Writing Rater 1    Pearson Correlation    1                   .923
                   Sig. (2-tailed)                            .000
                   N                      120                 120
Writing Rater 2    Pearson Correlation    .923                1
                   Sig. (2-tailed)        .000
                   N                      120                 120

                                          Speaking Rater 1    Speaking Rater 2
Speaking Rater 1   Pearson Correlation    1                   .949
                   Sig. (2-tailed)                            .000
                   N                      120                 120
Speaking Rater 2   Pearson Correlation    .949                1
                   Sig. (2-tailed)        .000
                   N                      120                 120

The Pearson correlation results in Table 2 revealed that the inter-rater reliability for the writing and speaking sections was .923 and .949, respectively, which is very high.

Self-Efficacy Questionnaire

In the present study, the MSEQ (Memory Self-Efficacy Questionnaire) was used. The MSEQ is a self-rating questionnaire that evaluates individuals' self-efficacy in two respects: first, Self-Efficacy Level, which evaluates individuals' memory ability; second, Self-Efficacy Strength, which evaluates individuals' confidence (Berry, West, & Dennehey, 1989). As the MSEQ enjoys high reliability and validity, it can be utilized for studies on memory self-efficacy (Berry et al., 1989). The questionnaire was translated into Persian to be more comprehensible to the students.

Data Collection Procedure

To collect the data of the current study, the PET was first administered to the participants. The pretest and the posttest (a retest of the pretest) of the study consisted of 20 multiple-choice items in paper-and-pencil form: four paragraphs, each with five multiple-choice questions. The paragraphs were played back to the participants three times, and the learners were asked to answer the questions in 15 minutes on the provided answer sheet.

In both experimental groups, the participants received the same form of treatment, based on their level of proficiency (B1), and were taught using the group dynamic assessment method through CoolSpeech software. Each session lasted twenty minutes. In these groups, the teacher selected level-appropriate listening items from the listening tasks used in class. The learners were asked to work in pairs or groups of three: they used CoolSpeech software to listen to extracts and were then asked to discuss them with their peers, while the teacher monitored them and helped them correct each other.

In group dynamic assessment, the teacher's role is more like that of a mediator who facilitates learning. In the current research, the teacher applied the following mediational strategies. First, confirming learners' correct responses that they were not sure about. Second, replaying the tape, either the whole passage or parts of it; whenever necessary, the teacher allowed learners to listen to the passage again. Third, helping learners put the words together: when learners could not comprehend an utterance after replaying it several times, the teacher divided the utterance into smaller and more comprehensible parts. Fourth, whenever students guessed a sentence erroneously, the mediator repeated it with a questioning tone, which helped learners find their mistakes and correct themselves. Fifth, providing students with contextual clues, which drew on the learners' world knowledge, topical knowledge, and situational awareness. Sixth, scaffolding learners with metalinguistic clues, which were grammatical or lexical. Seventh, whenever learners did not know a word and could not guess it, allowing them to use a dictionary. Eighth, explaining the correct response when the other mediational strategies did not work well.

After two months of treatment in the experimental groups, which comprised 20 sessions of twenty minutes each, the participants in all groups took part in a listening posttest, and the results were compared and contrasted to check the hypotheses of the study.

Results

The statistical indexes analyzed were the mean and standard deviation of the pretest and posttest of students' listening comprehension in the groups. Having calculated the reliability of the test and established that the values were acceptable, the researcher analyzed the data through SPSS software (Version 22.0). The two research questions of the study are addressed below.

RQ1: Does dynamic assessment via CoolSpeech software have any statistically significant effect on the listening comprehension ability of learners with high and low self-efficacy?

The first research question of the study examined the effect of dynamic assessment via CoolSpeech software on the listening comprehension ability of learners with high and low self-efficacy. In doing so, a quantitative analysis of the pretest and posttest scores was carried out. Initially, the normality of the data distribution had to be checked. A Shapiro-Wilk test of normality revealed that the p-values of the pretests and posttests of the experimental and control groups were greater than .05, indicating a normal distribution of data (pretest p-values: EH = .114, EL = .105, CH = .091, CL = .085; posttest p-values: EH = .118, EL = .112, CH = .091, CL = .095); this screening step is sketched below. Table 3 then shows the descriptive statistics for the high self-efficacious learners' listening comprehension.
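A minimal sketch of such a normality screening is shown here; since the raw score vectors are not published, synthetic placeholder data generated from the reported group means and SDs stand in for them.

```python
# Illustrative Shapiro-Wilk screening with synthetic placeholder data (raw scores are not published).
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
reported = {"EH": (21.04, 3.05), "EL": (21.08, 2.51), "CH": (20.54, 3.23), "CL": (20.85, 3.16)}
pretests = {name: rng.normal(mean, sd, 20) for name, (mean, sd) in reported.items()}

for name, scores in pretests.items():
    w, p = shapiro(scores)
    print(f"{name} pretest: W = {w:.3f}, p = {p:.3f}")   # the study reports all p > .05
```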

Table 3
Descriptive Statistics for the Pretest and Posttest of Learners with High Self-Efficacy in the Experimental Group

            N     Mean     Std. Deviation    Std. Error Mean
Pretest     20    21.04    3.05              .472
Posttest    20    22.86    3.05              .472


Table 3 reveals an improvement in the learners' listening comprehension from the pretest (M = 21.04, SD = 3.05) to the posttest (M = 22.86, SD = 3.05), suggesting that dynamic assessment via CoolSpeech software could help learners gain mastery over listening comprehension tasks.

Also, a paired-samples t-test for the high self-efficacy experimental group showed that the level of significance was less than .05 (p < .001), highlighting a significant improvement in the listening comprehension ability of the learners with high self-efficacy. Therefore, it can be deduced that DA via CoolSpeech software was effective in paving the way for high self-efficacious learners to develop their listening comprehension.
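A minimal sketch of such a paired-samples comparison is given below; the score vectors are synthetic placeholders drawn from the reported pretest mean and SD, with an artificial gain added, since the raw data are not available.

```python
# Illustrative paired-samples t-test with synthetic placeholder data (not the study's raw scores).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
pretest = rng.normal(21.04, 3.05, 20)            # EH group pretest (reported M and SD)
posttest = pretest + rng.normal(1.8, 1.0, 20)    # artificial gain matching the reported ~1.8 rise

t, p = ttest_rel(posttest, pretest)
print(f"t({len(pretest) - 1}) = {t:.2f}, p = {p:.4f}")   # the study reports p < .05 for this group
```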

Table 4
Descriptive Statistics for the Pretest and Posttest of Learners with High Self-Efficacy in the Control Group

            N     Mean     Std. Deviation    Std. Error Mean
Pretest     20    20.54    3.23              .514
Posttest    20    20.92    3.39              .534

Table 4 indicates no noticeable improvement in the learners' listening comprehension ability from the pretest (M = 20.54, SD = 3.23) to the posttest (M = 20.92, SD = 3.39) as a result of exposure to the teaching of listening comprehension through conventional means.

Further, a paired-samples t-test between the pretest and posttest of the high self-efficacy control group indicated that the significance level was greater than .05 (p = .488), meaning that no significant improvement was made in the listening comprehension ability of the high self-efficacious learners in the control group (mean difference = -.38, SD = 3.16, std. error mean = .481, t = 11.23, df = 19). In other words, the control group, which was taught using conventional instructional means, did not significantly improve their listening comprehension ability.

The second part of the research question looked into the effect of applying DA via CoolSpeech software on the listening comprehension ability of learners with low self-efficacy. Descriptive statistics are presented in Table 5.

Table 5
Descriptive Statistics for the Pretest and Posttest of Learners with Low Self-Efficacy in the Experimental Group

            N     Mean     Std. Deviation    Std. Error Mean
Pretest     20    21.08    2.51              .472
Posttest    20    22.02    2.40              .472

Table 5 shows a very small increase in the learners' listening comprehension from the pretest (M = 21.08, SD = 2.51) to the posttest (M = 22.02, SD = 2.40), which suggests that low self-efficacious learners' listening comprehension was not greatly affected by dynamic assessment via CoolSpeech software. To examine the learners' performance on the pretest and posttest inferentially, a paired-samples t-test was conducted.

A paired-samples t-test for the low self-efficacy experimental group indicated that the significance level was greater than .05 (p = .112), from which it can be inferred that no significant difference was observed between the low self-efficacious learners' listening comprehension on the pretest and posttest (mean difference = -.94, SD = 2.91, std. error mean = .422, t = 13.46, df = 19). Therefore, the findings demonstrated that dynamic assessment via CoolSpeech software resulted in no significant improvement in the listening comprehension of the learners with low self-efficacy.

Table 6
Descriptive Statistics for the Pretest and Posttest of Learners with Low Self-Efficacy in the Control Group

            N     Mean     Std. Deviation    Std. Error Mean
Pretest     20    20.85    3.16              .521
Posttest    20    21.72    3.33              .519

Table 6 shows that only a very small increase took place from the control group's listening comprehension pretest (M = 20.85, SD = 3.16) to their posttest (M = 21.72, SD = 3.33). To examine the learners' performance on the pretest and posttest inferentially, a paired-samples t-test was conducted.

A paired-samples t-test for the low self-efficacy control group revealed that no significant difference could be observed in the low self-efficacious learners' listening comprehension ability, since the significance level was greater than .05 (p = .310). In fact, the learners in the control group did not significantly benefit from the conventional instruction for listening comprehension (mean difference = -.87, SD = 2.78, std. error mean = .493, t = 10.98, df = 19).

RQ2: Is there any significant difference in the listening comprehension ability of learners with high and low self-efficacy?

The second aim of the study was to explore the difference between the high and low self-efficacious learners' listening comprehension ability affected by dynamic assessment via CoolSpeech software. To do so, descriptive as well as inferential analyses were conducted. Descriptive statistics for the experimental and control groups are presented in Table 7.

Table 7
Descriptive Statistics for the Pretest of Learners with High and Low Self-Efficacy in the Experimental and Control Groups

       N     Mean     Std. Deviation    Std. Error
EH     20    21.04    3.05              .472
EL     20    21.08    2.51              .472
CH     20    20.54    3.23              .514
CL     20    20.85    3.16              .521

Table 7 reveals that the four groups performed similarly at the outset, from which it can be inferred that there was only a small difference among the four groups' mean scores on the listening comprehension pretest. To compare the four groups' mean pretest scores, a two-way analysis of variance (two-way ANOVA) was run.

Table 8
Two-Way ANOVA Statistics for the Pretest of Learners with High and Low Self-Efficacy in the Experimental and Control Groups

                  Sum of Squares    df    Mean Square    F        Sig.
Between Groups    142.431            3    71.215         2.986    .205
Within Groups     1241.120          76    23.315
Total             1383.551          79

Table 8 shows no significant difference among the mean scores of the four groups' listening comprehension (F(3, 76) = 2.98, p = .205), which reveals the similarity in the intermediate learners' listening comprehension before the treatment sessions; the students were all at the intermediate level (B1). A sketch of this between-groups comparison is given below. Table 9 then provides descriptive data for the posttest scores of the learners' listening comprehension in the experimental and control groups.
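Since Table 8 summarizes a single between-groups factor (the four groups), the comparison can be sketched as a between-groups F test; the score vectors below are synthetic placeholders generated from the reported pretest means and SDs, not the study's raw data.

```python
# Illustrative between-groups F test across the four groups, using synthetic placeholder data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
reported_pretest = {"EH": (21.04, 3.05), "EL": (21.08, 2.51), "CH": (20.54, 3.23), "CL": (20.85, 3.16)}
groups = [rng.normal(mean, sd, 20) for mean, sd in reported_pretest.values()]

f, p = f_oneway(*groups)
print(f"F(3, 76) = {f:.2f}, p = {p:.3f}")   # the study reports F = 2.98, p = .205 for the pretest
```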

Table 9
Descriptive Statistics for the Posttest of Learners with High and Low Self-Efficacy in the Experimental and Control Groups

       N     Mean     Std. Deviation    Std. Error
EH     20    22.86    3.05              .472
EL     20    22.02    2.40              .472
CH     20    20.92    3.39              .534
CL     20    21.72    3.33              .519

Table 9 indicates that there were differences in the four groups' listening comprehension ability (EH: M = 22.86, SD = 3.05; EL: M = 22.02, SD = 2.40; CH: M = 20.92, SD = 3.39; CL: M = 21.72, SD = 3.33). In order to compare the mean scores of the learners' listening comprehension in the posttest, a two-way ANOVA was run.

Table 10
Two-Way ANOVA Statistics for the Posttest of Learners with High and Low Self-Efficacy in the Experimental and Control Groups

                  Sum of Squares    df    Mean Square    F        Sig.
Between Groups    421.198            3    210.599        6.811    .001
Within Groups     1318.413          76    26.619
Total             1739.611          79

The two-way ANOVA results for the posttest scores of the high and low self-efficacy learners in the experimental and control groups in Table 10 showed a significant difference among the posttests of the four groups (F(3, 76) = 6.81, p = .001 < .05). Thus, it can be inferred that the four groups differed in their listening comprehension ability.

To pinpoint where the differences lay, a post-hoc Tukey HSD test was used (see Appendix). The results of the Tukey HSD test showed a significant difference between the two experimental groups (p = .014 < .05), as high self-efficacious learners outperformed the low self-efficacious ones in terms of their listening comprehension ability. Similarly, the learners with high self-efficacy in the experimental group performed better than those in the control group (p = .003 < .05). However, the learners with low self-efficacy in the experimental group did not significantly outperform the control group (p = .112 > .05). Hence, it can be concluded that high self-efficacious learners' listening comprehension ability was significantly affected by dynamic assessment via CoolSpeech software. Moreover, they improved more in listening comprehension than the learners with low self-efficacy. Finally, the listening comprehension of the learners with low self-efficacy was not significantly affected by dynamic assessment using CoolSpeech software.
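A minimal sketch of this post-hoc comparison, with synthetic posttest vectors generated from the reported group means and SDs (the raw scores are not published), could look as follows.

```python
# Illustrative Tukey HSD post-hoc comparison on synthetic placeholder posttest data.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
reported_posttest = {"EH": (22.86, 3.05), "EL": (22.02, 2.40), "CH": (20.92, 3.39), "CL": (21.72, 3.33)}
scores = np.concatenate([rng.normal(mean, sd, 20) for mean, sd in reported_posttest.values()])
labels = np.repeat(list(reported_posttest), 20)   # group label for each of the 80 scores

print(pairwise_tukeyhsd(scores, labels, alpha=0.05).summary())
```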

Discussion

The present study aimed to find out the potential effects of group DA through CoolSpeech software on the listening comprehension ability of EFL learners with high and low self-efficacy. In this regard, four groups (two experimental and two control groups) formed the sample of the study. The findings revealed that high self-efficacious learners improved more than the other groups in terms of their listening comprehension ability. Regarding the first research question, which investigated the effect of dynamic assessment via CoolSpeech software on the listening comprehension ability of learners with high and low self-efficacy, it was found that DA via CoolSpeech software could help high self-efficacious learners gain mastery over listening comprehension tasks. However, low self-efficacious learners' listening comprehension was not greatly affected by such assessment. Considering the second research question, which explored the difference between the high and low self-efficacious learners' listening comprehension ability as affected by dynamic assessment via CoolSpeech software, the results of the two-way ANOVA revealed differences in the four groups' listening comprehension ability. There was a significant difference between the two experimental groups, as high self-efficacious learners outperformed the low self-efficacious ones regarding their listening comprehension ability. Likewise, the learners with high self-efficacy in the experimental group performed better than those in the control group. However, learners with low self-efficacy in the experimental group did not significantly outperform the control group.

The results of the current investigation can be justified on the basis that the group DA method through CoolSpeech software could influence the listening comprehension of the learners with high self-efficacy, but it could not significantly affect the listening comprehension of the students with lower self-efficacy. Self-efficacy, as Bernhardt (1997) suggests, is a set of various self-beliefs related to diverse areas of performance. These beliefs have both behavioral and emotional aspects: they influence the decision about whether to engage in a certain task, the effort an individual exerts in completing the task, and the degree of avoidance and persistence in performing it (Pajares, 2000). Thus, for inefficacious learners, tasks are perceived to be more difficult than they actually are, which ultimately leads to a decrease in persistence and effort (Ehrman, 1996; Gorban Doordinejad & Afshar, 2014); accordingly, the less self-efficacious students demonstrated lower perceived ability, less invested mental effort, and lower performance, as highlighted by Pajares (2000).

The findings of the current research are in harmony with Gorban Doordinejad and Afshar (2014), who revealed a significant relationship between foreign language learners' self-efficacy and their English achievement and suggested that learners with higher foreign language self-efficacy were more likely to attain higher English scores. This study is also congruent with Rahimi and Abedini (2009), who revealed a significant relationship between EFL learners' self-efficacy and their achievement in the listening skill.

The results of the study are also in alignment with those of other DA studies which revealed the positive effects of DA and group DA on learners' listening comprehension ability. The present study is in harmony with Shabani, Alavi, and Kaivanpanah (2012); Alavi and Taheri (2014); and Hashemi Shahraki, Ketabi, and Barati (2015) who proved that group DA could significantly improve students' listening comprehension ability, compared to nondynamic assessment methods. Furthermore, this study confirms the studies conducted by Mashhadi Heidar and Afghari (2015) and Mashhadi Heidar (2016) who revealed that online DA via Web 2.0 significantly improved listening comprehension ability in Iranian EFL learners. Finally, the present study is in accordance with Ashraf, Motallebzadeh, and Ghazizadeh (2016) who proved that electronic-based DA method could significantly improve the listening skill of the EFL learners.

Conclusion

By quantitatively gathering and scrutinizing the data, it was concluded that the G-DA method of teaching via software could have a significant effect on the listening comprehension skill of learners with high self-efficacy, although this result was not observed in learners with lower levels of self-efficacy. The pedagogical significance of this study is multifaceted and can be examined at both the micro and macro levels. Regarding the usefulness of the results at the macro level, more areas of inquiry were identified to help curriculum designers recognize the remarkable changes in learning environments and their influence on learning, teaching, and testing pedagogy. The findings of this research may assist policy makers in emphasizing the significance of using different approaches to skills evaluation. Moreover, students, teachers, and researchers can also benefit from the outcomes of the present study.

Learners are the first beneficiaries of the study findings. Many learners appear to be worried about their listening ability in the process of language learning and are usually concerned with their listening skill as well as their grades on listening exams. By being assessed through a dynamic method of assessment, learners can overcome listening difficulties since they are consciously involved in the assessment procedure. In fact, when learners are aware of their listening skill, they can take the necessary action to remedy possible deficiencies in listening as well as strengthen their listening ability. Teachers, who are always concerned with teaching language skills, can benefit from various types of assessment approaches toward listening. Different types of DA through CALL can be applied as tasks and helpful techniques in English classrooms. Based on the results of this study, both teachers and learners can apply the best assessment tools in the classroom to address any possible difficulties they may face during the listening class. Finally, the researcher hopes that this study will have far-reaching conclusions that are practical and helpful for researchers interested in DA approaches, because it provides them with current literature on the topic. The researcher believes that this study can contribute to bringing English language courses and objectives more in line with modern approaches to language assessment, particularly in the Iranian context, in which only traditional methods of instruction are used and the real-world needs of the students are almost completely neglected.

The findings of the present investigation should be generalized with care as the sample and context are not representative of the whole population of Iranian English learners in different settings. Thus, further research can be carried out to explore other variables, such as different learning environments, participants' gender, different levels of proficiency, and personality types.

References

Alavi, S. M., & Taheri, P. (2014). Examining the role of dynamic assessment in the development and assessment of listening comprehension. English Language Teaching (ELT), 1(2), 23-40.
Alodail, A. (2014). Impact of technology (PowerPoint) on students' learning. International Interdisciplinary Journal of Education, 3(4), 200-206.
Apperson, J. M., Laws, E. L., & Scepansky, J. A. (2006). An assessment of student performances for PowerPoint presentation structure in undergraduate courses. Computers and Education, 47(1), 1-6. https://doi.org/10.1016/j.compedu.2006.04.003
Arnold, J., & Brown, H. D. (1999). A map of the terrain. In J. Arnold (Ed.), Affect in language learning (pp. 1-24). Cambridge: Cambridge University Press.
Ashraf, H., Motallebzadeh, K., & Ghazizadeh, F. (2016). The impact of electronic-based dynamic assessment on the listening skill of Iranian EFL learners. International Journal of Language Testing, 6(1), 24-32. https://doi.org/10.14744/alrj.2018.36844
Bernhardt, S. (1997). Self-efficacy and second language learning. The NCLRC Language Resource, 1(5), 1-13.
Berry, J. M., West, R. L., & Dennehey, D. M. (1989). Reliability and validity of the Memory Self-Efficacy Questionnaire. Developmental Psychology, 25(5), 701-713. https://doi.org/10.1037/0012-1649.25.5.701
Bingham, S., & Larson, E. (2006). Using CALL as the major element of study for a university English class in Japan. The JALT Journal, 2(3), 39-52.
Brookhart, S. M., & DeVoge, J. G. (1999). Testing a theory about the role of classroom assessment in student motivation and achievement. Applied Measurement in Education, 12(4), 409-425. https://doi.org/10.1207/S15324818AME1204_5
Chapelle, C. A. (1999). Technology and language teaching for the 21st century. Paper presented at the Proceedings of the Eighth International Symposium on English Teaching. The Crane.
Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by sixth-grade skilled readers to search for and locate information on the Internet. Reading Research Quarterly, 42(2), 214-257. https://doi.org/10.1598/RRQ.42.2.2
Coleman, J. A. (2005). CALL from the margins: Effective dissemination of CALL research and good practices. ReCALL: The Journal of EUROCALL, 17(1), 18-31. https://doi.org/10.1017/S0958344005000315
Corbeil, G. (2007). Can PowerPoint presentations effectively replace textbooks and blackboards for teaching grammar? Do students find them an effective learning tool? CALICO Journal, 24(3), 631-656.
Drasgow, F., & Olson-Buchanan, J. B. (1999). Innovations in computerized assessment. Lawrence Erlbaum Associates.
Ehrman, M. E. (1996). An exploration of adult language learning motivation, self-efficacy and anxiety. In R. L. Oxford (Ed.), Language learning motivation: Pathways to the new century (pp. 23-46). The University of Hawaii Second Language Teaching and Curriculum Centre.
Gerard, R. W. (Ed.). (1967). Computers and education. McGraw Hill.
Gersten, R., Fuchs, L. S., Williams, J. P., & Baker, S. (2001). Teaching reading comprehension strategies to students with learning disabilities: A review of research. Review of Educational Research, 71, 279-320. https://doi.org/10.3102/00346543071002279
Gorban Doordinejad, F., & Afshar, H. (2014). On the relationship between self-efficacy and English achievement among Iranian third grade high school students. International Journal of Language Learning and Applied Linguistics World, 6(4), 461-470.
Hambleton, R. K. (1996). Advances in assessment models, methods, and practices. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 889-925). American Council on Education/Macmillan.
Hashemi Shahraki, S., Ketabi, S., & Barati, H. (2015). Group dynamic assessment of EFL listening comprehension: Conversational implicatures in focus. International Journal of Research Studies in Language Learning, 4(3), 73-89. https://doi.org/10.5861/ijrsll.2014.955
Hill, K., & Sabet, M. (2009). Dynamic speaking assessments. TESOL Quarterly, 43, 537-545. https://www.jstor.org/stable/27785036
Hosseini, S. M. H. (2007). ELT in higher education in Iran and India: A critical view. Language in India, 7, 1-11. http://www.languageinindia.com/dec2007/eltinindiaandiran.pdf
Jahangard, A. (2007). Evaluation of EFL materials taught at Iranian public high schools. The Asian EFL Journal, 9(2), 130-150. https://www.asian-efl-journal.com/main-journals/evaluation-of-efl-materials-taught-at-iranian-public-high-schools/
Kennedy, M. M. (2005). Inside teaching: How classroom life undermines reform. Harvard University Press.
Koura, A. A., & Al-Hebaishi, S. M. (2014). The relationship between multiple intelligences, self-efficacy and academic achievement of Saudi gifted and regular intermediate students. Educational Research International, 3(1), 48-70.
Kulik, J. A., Bangert, R. L., & Williams, G. W. (1983). Effects of computer-based teaching on secondary school students. Journal of Educational Psychology, 75(1), 19-26. https://doi.org/10.1037/0022-0663.75.1.19
Lau, R. W. H., Yen, N. Y., Li, F., & Wah, B. (2014). Recent development in multimedia e-learning technologies. World Wide Web, 17, 189-198. https://doi.org/10.1007/s11280-013-0206-8
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.
Lidz, C. S., & Gindis, B. (2003). Dynamic assessment of the evolving cognitive functions in children. In A. Kozulin, B. Gindis, V. S. Ageyev, & S. M. Miller (Eds.), Vygotsky's educational theory in cultural context (pp. 99-116). Cambridge University Press.
Mashhadi Heidar, D., & Afghari, A. (2015). The effect of dynamic assessment in synchronous computer-mediated communication on Iranian EFL learners' listening comprehension ability at upper-intermediate level. English Language Teaching, 8(4), 14-23. https://doi.org/10.5539/elt.v8n4p14
Mashhadi Heidar, D. (2016). ZPD-assisted introduction via web 2.0 and listening comprehension ability. English for Specific Purposes World, 49(17).
Mills, N., Pajares, F., & Herron, C. (2006). A reevaluation of the role of anxiety: Self-efficacy, anxiety, and their relation to reading and listening proficiency. Foreign Language Annals, 39(2), 276-294.
Minick, N. (1987). The development of Vygotsky's thought: An introduction. In R. W. Reiber & A. S. Carton (Eds.), The collected works of L. S. Vygotsky: Problems of general psychology (pp. 17-36). Plenum Press.
Mohammadi, E., & Mirdehghan, S. S. (2014). A CMC approach to teaching phrasal-verbs to Iranian EFL senior high school students: The case of blended learning. Procedia - Social and Behavioral Sciences, 98, 1174-1178. https://doi.org/10.1016/j.sbspro.2014.03.531
Nation, I. S. P., & Newton, J. (2009). Teaching ESL/EFL speaking and listening. Routledge.
Nutta, J. (2013). Is computer-based grammar instruction as effective as teacher directed grammar instruction for teaching L2 structures? CALICO Journal, 16(1), 49-62.
Oliver, R. (1999). Exploring strategies for online teaching and learning. Distance Education, 20(2), 240-254.
Pajares, F. (2000). Self-efficacy beliefs and current directions in self-efficacy research, http://www.emory.edu/

EDUCATION/mfp/effpage.html Poehner, M. E. (2005). Dynamic assessment of oral proficiency among advanced L2 learners of French [Unpublished

doctoral thesis]. The Pennsylvania State University. Poehner, M. E. (2008). Dynamic assessment. A Vygotskian approach to understanding and promotingL2 development.

Springer International Publishing. Poehner, M. E. (2009). Dynamic assessment as a dialectical framework for classroom activity: Evidence from second language (l2) learners. Journal of Cognitive Education and Psychology, 8(3), 252-268. https://doi. org/10.1891/1945-8959.8.3.252

Poehner, M. E., & Lantolf, J. P. (2005). Dynamic assessment in the language classroom. Language Teaching Research, 9, 233-265. https://doi.org/10.1191/1362168805lr166oa

Puakpong, N. (2008). An evaluation of a listening comprehension program. In F. Zhang & B. Barber (Eds.), Handbook of research on computer-enhanced Language acquisition and learning. IGI Global.


Rahimi, A., & Abedini, A. (2009). The interface between EFL learners' self-efficacy concerning listening comprehension and listening proficiency. Novitas-Royal, 3(1), 14-28.
Rahimi, M., & Hosseini, F. K. (2011). The impact of computer-based activities on Iranian high-school students' attitudes towards computer-assisted language learning. Procedia Computer Science, 3, 183-190. https://doi.org/10.1016/j.procs.2010.12.031
Razmjoo, S. A., & Riazi, M. (2006). Is communicative language teaching practical in the expanding circle? A case study of teachers of Shiraz high schools and institutes. Journal of Language and Learning, 4(2), 144-171.
Reeves, T. C. (2000). Alternative assessment approaches for online learning environments in higher education. Journal of Educational Computing Research, 23(1), 101-111. https://doi.org/10.2190/GYMQ-78FA-WMTX-J06C
Rubin, J. (1995). An overview to a guide for the teaching of second language listening. In D. J. Mendelsohn & J. Rubin (Eds.), A guide for the teaching of second language listening (pp. 7-11). Dominie Press.
Shabani, K., Alavi, S. M., & Kaivanpanah, S. (2012). Group dynamic assessment: An inventory of mediational strategies for teaching listening. The Journal of Teaching Language Skills (JTLS), 30(4), 27-58.
Shabani, K., Khatib, M., & Ebadi, S. (2010). Vygotsky's Zone of Proximal Development: Instructional implications and teachers' professional development. English Language Teaching Journal, 3(4), 237-248. https://doi.org/10.5539/elt.v3n4p237
Sternberg, R. J., & Grigorenko, E. L. (2002). Dynamic testing: The nature and measurement of learning potential. Cambridge University Press.
Thiagarajan, S. (1999). Formative evaluation in performance technology. Performance Improvement Quarterly, 4(2), 22-24.
Vandergrift, L., & Goh, C. C. M. (2012). Teaching and learning second language listening: Metacognition in action. Routledge.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
Vygotsky, L. S. (1984a). Sobranie sochinenii, tom chetvertyi: Detskaya psikhologiya [Collected works, Vol. 4: Child psychology]. Izdatel'stvo Pedagogika.
Vygotsky, L. S. (1998). Infancy. In R. W. Rieber (Ed.), The collected works of L. S. Vygotsky (pp. 207-241). Plenum Press.
Wang, Y. H. (2016). Promoting contextual vocabulary learning through an adaptive computer-assisted EFL reading system. Journal of Computer Assisted Learning, 32, 291-303. https://doi.org/10.1111/jcal.12132
Warschauer, M., & Healey, D. (1998). Computers and language learning: An overview. Language Teaching, 31, 57-71. https://doi.org/10.1017/S0261444800012970
Yildirim, A. G. O. (2008). Vygotsky's sociocultural theory and dynamic assessment in language learning. Anadolu University Journal of Social Sciences, 8(1), 301-308.
Yusof, N. A., & Saadon, N. (2012). The effects of web-based language learning on university students' grammar proficiency. Procedia - Social and Behavioral Sciences, 67, 402-408.
Zhao, Y. (2003). Recent developments in technology and language learning: A literature review and meta-analysis. CALICO Journal, 21(1), 7-27. https://www.jstor.org/stable/24149478

Appendix

Table 15

Multiple Comparisons: Tukey HSD Test for the Posttest of Learners with High and Low Self-Efficacy in the Experimental and Control Groups

(I) Group   (J) Group   Mean Difference (I-J)   Std. Error   Sig.    95% CI Lower Bound   95% CI Upper Bound
EH          EL          .84*                    .693         .014    -4.7543              1.9258
EH          CH          1.94*                   1.213        .003    .5831                6.9870
EH          CL          1.14*                   1.214        .011    -.0011               7.0709
EL          EH          -.84*                   .693         .014    -1.9258              4.7543
EL          CH          1.10*                   1.201        .012    2.0565               8.3410
EL          CL          .30                     1.309        .112    1.4681               8.4319
CH          EH          -1.94*                  1.213        .003    -6.9870              -.5831
CH          EL          -1.10*                  1.201        .012    -8.3410              -2.0565
CH          CL          -.80                    1.253        .051    -3.5980              3.0971
CL          EH          -1.14*                  1.214        .011    -7.0709              .0011
CL          EL          .30                     1.309        .112    -8.4319              -1.4681
CL          CH          .80                     1.263        .051    -3.0971              3.5980

*. The mean difference is significant at the 0.05 level.
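
Note. Readers who wish to reproduce this kind of post-hoc analysis outside a statistical package can do so with standard libraries. The short sketch below shows how a Tukey HSD multiple-comparison table of the same form as Table 15 could be generated with Python's statsmodels. The scores are randomly generated placeholders, not the study's data, and the group labels simply follow the abbreviations in the table (EH/EL for the experimental groups with high and low self-efficacy, CH/CL for the corresponding control groups).

# Illustrative sketch only: hypothetical posttest scores, not the study's data.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)

# Four groups of 20 learners each, mirroring the study's design.
scores = np.concatenate([
    rng.normal(16.5, 2.0, 20),  # EH: experimental, high self-efficacy
    rng.normal(14.8, 2.0, 20),  # EL: experimental, low self-efficacy
    rng.normal(14.5, 2.0, 20),  # CH: control, high self-efficacy
    rng.normal(14.3, 2.0, 20),  # CL: control, low self-efficacy
])
groups = np.repeat(["EH", "EL", "CH", "CL"], 20)

# Pairwise Tukey HSD comparisons at the 0.05 level: mean differences,
# adjusted p-values, and 95% confidence intervals for each group pair.
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())

Running the script prints one row per group pair with the mean difference, adjusted p-value, and confidence bounds, mirroring the columns of Table 15.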
