
Original scientific paper

UDK: 159.953.3.072-057.875(532)

Received: August 25, 2020. Revised: October 30, 2020. Accepted: November 9, 2020.

doi: 10.23947/2334-8496-2020-8-SI-37-47

Mental Simulation Effects on Performance: Benefits of Outcome Versus Process Simulations in Online Courses

Runna Al Ghazo1*, Ibtisam Daqqa1, Hanadi AbdelSalam1, Maura A. E. Pilotti1, and Huda Al Mulhem1

1Prince Mohammad Bin Fahd University, Al Khobar, Kingdom of Saudi Arabia
e-mail: ralghazo@pmu.edu.sa; idaqqa@pmu.edu.sa; habdelsalam@pmu.edu.sa; mpilotti@pmu.edu.sa; halmulhem@pmu.edu.sa

*Corresponding author: ralghazo@pmu.edu.sa

Abstract: The present research compares the effects of mentally recreating the experience of realizing that a desirable goal had been achieved (outcome simulation exercise) with those of mentally recreating the actions that might lead to the desirable goal (process simulation exercise). It asked whether the performance benefits of process simulations over outcome simulations, which have been reported in students enrolled in face-to-face classes, would generalize to an online environment. The process simulation exercise was expected to foster attention to the antecedents of good grades, thereby improving class performance relative to the outcome simulation exercise which was intended to be merely motivational. College students from the Middle East, who were taking classes online due to the COVID-19 pandemic, participated. Type of simulation impacted students' performance on assignments, but differently depending on the timing of the assessment. It did not influence behavioral engagement, midterm test performance, or predictions of performance before or after the test. Instead, process simulation enhanced students' confidence in their predictions. These findings suggest that process simulation exercises may be useful learning props for activities that challenge students' problem-solving skills (e.g., assignments) rather than engage well-practiced study habits (e.g., tests).

Keywords: higher education; academic performance; mental simulation; performance forecast.

Introduction

During the COVID-19 pandemic, many educational institutions around the world have opted to cancel face-to-face classes, and have mandated that faculty move their courses online. As a result, a large number of students and instructors have been catapulted into the world of distance education (Hodges et al., 2020; Laplante, 2020; Williamson et al., 2020). For many college students, prior experience with either synchronous or asynchronous virtual classrooms may have been sporadic at best, including enrollment in an online course out of necessity, and/or use of an eLearning platform (e.g., Blackboard) in face-to-face classes to access and submit course materials. If the spring 2020 semester has forced upon such students and many instructors a drastic shift into the online world (Zimmerman, 2020), it has also given them the opportunity to adjust, thereby making online summer classes feel like a familiar ecosystem (Murray and Barnes, 1998; Wang et al., 2010). In summer courses, learning is usually packed into a shorter timeframe and is more likely to suffer from distractions promoted by clement weather. As such, academic success greatly relies on the students' motivation and timely execution of course activities. Thus, it is reasonable to ask whether educational interventions whose effectiveness as propellers of academic success has been put to the test in face-to-face courses bring forth similar outcomes when the courses are taught synchronously online during the more compacted timeframe of the summer term.

In the present research, we focus on mental simulations as tools capable of enhancing students' performance in synchronous online courses (Pham and Taylor, 1999). Synchronous online learning is defined as learning that happens in real time (Richardson, 2020): classes are scheduled in a virtual classroom at specific times, during which students and instructors are given the opportunity to interact through a particular online medium (e.g., Blackboard Collaborate). Specifically, we compare the effects of mental simulation instructions that require learners to envision the actions necessary to obtain a desired outcome (a good grade) with those of instructions that require learners to imagine their reactions to having achieved that outcome. Before we discuss the purported impact of these two modes of envisioning the future on performance, the concept of mental simulation needs to be defined.


In essence, mental simulation is the "imitative representation of some event or series of events", which may pertain to the past, present, or future (Taylor et al., 1998, p. 430). If it involves the latter, the simulation offers the person who performs it a window into the future by enabling him/her to consider possibilities and the actions that are deemed suitable to realize those possibilities. Envisioning the future is consequential. For instance, running through a set of events in one's mind and imagining them in concrete and specific terms, constrained by what is plausible, has been shown to make events seem real, increasing the person's belief that such events will actually happen (Koehler, 1991). Furthermore, imagining an event has also been shown not to produce a dry cognitive representation but rather to evoke emotions with underlying physiological changes (e.g., alterations in heart rate, blood pressure, and electrodermal activity), which give the person experiencing them opportunities for self-regulation (Linden and Skottnik, 2019). Most importantly, simulation is thought of as providing information about those events (Druckman, 2004). As such, it can play an important role in how people solve problems in everyday life. Thus, it is not surprising that mental simulation has been characterized as capable of improving human performance by linking mental activity to action. References to the purported benefits of mental simulation of future events are plentiful in the literature on self-help advice (Fanning, 1994; Peale, 1982), clinical psychology, and sports (Marlatt and Donovan, 2005; Orlick and Partington, 1986).

Yet, not all mental simulations are viewed as equally effective where performance is concerned (Druckman, 2004). They may interfere with self-regulation if they are unrealistic, such as fantasies (Oettingen, 1995), or involve rumination on painful experiences (Horowitz, 1976; Silver et al., 1983). In the educational domain, even within the realm of realistic and emotionally restrained scenarios, the effectiveness of mental simulation of the future is a matter of debate. According to one viewpoint, the critical component of a successful mental simulation is its emphasis on mimicking the process needed to reach a given goal. The reason is that rehearsal of the process required to reach a desired end state forces the learner to identify and organize the steps involved in getting there, thereby inducing planning, enhancing the accessibility of required actions, and serving as a motivator. For example, students who desire to complete a college course successfully can increase their chances of doing so by mentally simulating the activities linked to each requirement of the selected course, such as reading the study guide of an upcoming test, reviewing the study materials, defining key concepts, developing examples of such concepts, and practicing answering questions. A contrasting point of view, mostly derived from the self-help literature, argues for a quite different form of mental simulation. This approach maintains that a person's attentional focus on the outcome to be achieved will help bring it about (Fanning, 1994; Peale, 1982) by motivating independent action. Accordingly, students who want to be successful in a course may be advised to envision themselves already at the end of a semester looking at an A or A+ on their transcript.

Despite the widespread popularity of outcome simulation, it remains unclear how a mere attentional focus on the desired outcome, without an articulation of the activities that lead to attainment, may enable people to achieve it. It can motivate students but does not inform them of the path to get there. Furthermore, although the stated purpose of formal education is to acquire general and specialized knowledge as well as a variety of information processing skills, a strong subtext that emphasizes the significance of grades is often present (Pollio and Beck, 2000). Thus, simulating a goal that is already relevant in students' minds may not add much of a drive to the activities that lead to the successful completion of a course.

Not surprisingly, evidence that one type of simulation is superior to the other in enhancing future performance is not decisive. Consider, for instance, the findings of experiments conducted by Pham, Taylor, and colleagues (1998; 1999) who, during the week before the administration of a midterm test, asked college students who were studying for the test to do one of two things. In the process simulation condition, students were told to visualize themselves studying for the test to obtain a high grade (or more explicitly an A). In the outcome simulation condition, they were told to imagine themselves having obtained a high grade (or more explicitly an A) on the test. Both groups of students were asked to perform the simulation for five minutes each day before the exam. Pham and colleagues found that students who had simulated the process of studying for the exam prior to their midterm merely displayed a trend towards superior performance. In a systematic replication of the experiment, however, they reported a significant difference in favor of process simulation. Interestingly, prior to the test, when asked about expected grades, students in the process simulation condition reported higher grades. Further evidence from self-reports of the activities preceding the test indicated that students who practiced outcome simulation rehearsed the delight they would experience from obtaining a good mark but failed to put more effort into achieving the good mark, thereby ultimately reducing their aspirations. Positive expectations motivate learning in students, leading to enhanced performance through behavioral engagement (Putwain et al., 2019). Behavioral engagement is an omnibus variable that includes effort, persistence, and exertion in
class activities (Appleton et al., 2006; Fredricks et al., 2004). Thus, lower aspirations may be considered counterproductive if the aim of mental simulation exercises is to motivate action.

The Present Study (in Brief)

Three predictions regarding mental simulation in the synchronous online classroom can be formulated, all stemming from the recognition that teaching and learning in the synchronous and face-to-face modes greatly resemble each other (Dennis, 2003). Like the exchanges in the face-to-face classroom, real-time interactions give students the feeling of immediate contact, as well as the opportunity to clarify uncertainties in a timely manner (Salmon, 2000; Steeples et al., 2002). Yet, the physical distance among participants inherent to synchronous technologies tends to yield slightly less interaction among students and between instructors and students inside and outside the classroom (Anderson, 2003; Ng, 2007). Faculty's self-reports at our university support this claim. Process simulation may be ideally effective in compensating for the physical distance imposed by the medium by focusing students' attention on the steps necessary to complete activities, making such steps more accessible, and thus ultimately enhancing self-regulated learning. Alternatively, the impact of process and outcome simulation instructions may not differ. Although both can be conceptualized as propellers of thought and action (i.e., sources of motivation), as they include references to performance attainment (e.g., grades), they may become redundant props in a students' environment where attainment is already emphasized. It is also possible, though, to predict that outcome simulation may reduce students' aspirations, and thus impair performance by diminishing exerted effort.

If the process simulation instructions indeed shape students' cognition and action differently from those involving outcome simulation, then not only performance but also predictions of performance may differ. Consider that simulating the process of completing the requirements of a course may motivate students as well as inform them of the steps necessary to reach the desired goal. That is, it encourages learners to articulate informed expectations. Expectancy is defined as learners' belief about the probability of success in forthcoming activities (Locke and Latham, 1990). Thus, we hypothesize that if students are asked, before a test, about their expected grades, those who have performed a process simulation exercise will exhibit higher expectations and be more confident in their ability to fulfill such expectations than those who performed an outcome simulation exercise. Yet, higher expectations may be a double-edged sword. They can reflect an optimistic forecast that motivates preparation (Ng, 2007; Taylor and Brown, 1988) or be so inflated as to encourage inertia, with dire consequences for performance (Brookings and Serratelli, 2006; Yang et al., 2009). Because expectations are informed by the process simulation exercise, students exposed to this mental exercise are likely to make estimates that are not only held with more confidence but are also more accurate than those of students who perform the outcome simulation exercise. Furthermore, it is important to recognize that grade prediction is undeniably a tricky business, especially for students who are struggling. Not surprisingly, evidence suggests that grade prediction accuracy changes with performance level (Miller and Geraci, 2011; Pilotti et al., 2019, 2020; Yang et al., 2009). Although poor performers tend to be less accurate in their grade predictions than good performers, yielding inflated estimates of future performance, they are less confident in their inflated estimates. Thus, the benefits of process simulation exercises may be experienced primarily by poor performers. We test these predictions about actual and forecast performance with the methodology described below.

Materials and Methods

Participants

The participants in the simulation exercises were 269 female undergraduate students (age range: 18-25) at a university located in the Eastern Province of the Kingdom of Saudi Arabia. They reported Arabic as their first language and English as their second language. For admission, English competency had to be demonstrated through standardized English proficiency tests (i.e., Aptis, IELTS, or TOEFL). At the university, English was the participants' primary vehicle of instruction. They were enrolled in one of eight courses of the Core curriculum offered during the summer semester (to be described below). The number of credit hours completed varied from 0 to 135. Students' prior experience with distance learning at the university was reported to be limited to courses completed during the previous spring semester and to the use of Blackboard in face-to-face classes. Participation complied with the guidelines of the Office for Human Research Protections of the U.S. Department of Health and Human Services and with the American Psychological Association's ethical standards in the treatment of human subjects. Sixteen additional students were excluded from the study for their failure to follow the instructions or complete
the course. Due to gender-segregation rules, a comparable sample of male students was unattainable. Because ethical considerations prevented us from randomly excluding students from exercises that could benefit academic performance, we identified a sample of students (n = 269) who could serve as a baseline (business-as-usual instruction). They had been enrolled in the same courses selected for the mental simulation exercises. The courses had been taught by the same instructors in the earlier semester. Two criteria were applied to match students assigned to the mental simulation exercises with baseline students: the number of credit hours completed (perfect match or 1 credit hour either above or below), which served as a measure of academic experience, and final class grades (perfect match or within a limited range of points of deviation: 0.25-2.15), which served as a measure of overall performance level.
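To make the matching rule concrete, the sketch below pairs each simulation participant with an unused baseline student whose completed credit hours fall within 1 hour and whose final class grade falls within the stated tolerance. This is an illustrative reconstruction, not the authors' actual procedure; the record fields and data are hypothetical.

```python
# Illustrative sketch of the one-to-one matching rule described above.
# Record fields, tolerances, and data are assumptions for illustration only.

simulation = [{"id": "S1", "credits": 42, "grade": 86.0},
              {"id": "S2", "credits": 12, "grade": 91.5}]
baseline_pool = [{"id": "B1", "credits": 41, "grade": 85.5},
                 {"id": "B2", "credits": 12, "grade": 90.0},
                 {"id": "B3", "credits": 60, "grade": 75.0}]

matches = {}
for s in simulation:
    for b in baseline_pool:
        if b["id"] in matches.values():
            continue  # each baseline student may serve as a match only once
        if abs(s["credits"] - b["credits"]) <= 1 and abs(s["grade"] - b["grade"]) <= 2.15:
            matches[s["id"]] = b["id"]
            break

print(matches)  # {'S1': 'B1', 'S2': 'B2'}
```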

Procedure and Materials

This field study was conducted during the summer 2020 semester. Eight courses of the Core Curriculum of the university were selected according to the following criteria: they (a) included students from across the entire university, (b) offered the smallest likelihood of overlap of students, and (c) ensured adequate representation of the Core curriculum, whose courses emphasize the practice of basic academic skills (e.g., writing, speaking, and reasoning) across a variety of topics (e.g., Technical and Professional Communication, Writing and Research, and Oral Communication) or within a specific domain of knowledge (e.g., Introduction to Psychology). The curriculum of such courses relies on syllabi approved by the Texas International Education Consortium (TIEC) and textbooks published in the USA. The only exception is a sequence of 4 courses on Islamic and Arabic culture, which provides the Core curriculum with a culturally appropriate foundation. Like the courses approved by TIEC, they emphasize the practice of basic academic skills across a variety of topics (Introduction to Islamic Culture, Islamic Society, and Communication) or within a particular domain of knowledge (History of Prophet Mohammad). Another feature of the selected courses was the equivalence of their assessment protocols concerning the range of outcomes and the timing of assessment. Each course involved assignments to be completed during the first half of the semester (i.e., before the midterm) or the second half of the semester, a midterm exam, and a final exam. Test questions and assignments encompassed five of the six types of information processing highlighted by Bloom's taxonomy (Anderson and Krathwohl, 2001; Bloom, 1956, 1976; Krathwohl, 2002). Namely, assessment required remembering, understanding, application, analysis, and evaluation, but excluded synthesis/creation of work due to the introductory nature of the Core curriculum.

Three instructors, whose pedagogy was deemed student-centered and learning-oriented by students, colleagues, and administrators (Farias et al., 2010), volunteered to participate in the study. The instructors were described in peer evaluations, peer observations, and students' evaluations as underscoring partnership and mutual support in learning activities, offering developmental feedback, treating grades as opportunities for learning, and fostering the practice of critical thinking. The instructors' scores on the LOGO F questionnaire of Eison et al. (1993), along with peer observations and evaluations, supported this pedagogical profile. The 20-item questionnaire assessed their attitudes towards grades and learning on a 5-point Likert scale from strongly disagree (1) to strongly agree (5), and their behaviors towards grades and learning on a 5-point scale from never (1) to always (5). Instructors reported engaging in behaviors likely to support a learning orientation in their students (M = 4.00) more often than behaviors likely to foster a grade orientation (M = 2.13). Likewise, they endorsed learning-oriented attitudes (M = 4.13) more strongly than grade-oriented ones (M = 2.53). Each taught at least 2 of the 8 classes included in the study.

Table 1 describes the activities performed by the students assigned to the simulation exercises. At the start of the semester (after the instructor overviewed the requirements of the course in which students were enrolled) and prior to the midterm exam, 5 questions were asked under two randomly assigned between-subjects instructional conditions. The process simulation exercise asked participants (n = 139) to mentally simulate the process of effective studying, whereas the outcome simulation exercise required participants (n = 130) to mentally simulate the experience of realizing that a desirable goal (e.g., high mark) had been achieved. Specifically, in the process simulation exercise condition, students were instructed to picture themselves studying for an upcoming test or assignment in such a way that would lead them to obtain a high mark. Because a letter grade, such as an A or A+, was not a realistic goal for all the students, we chose to state the desirable goal vaguely. To ensure realism and compliance, their task was to describe 5 different actions they would perform to achieve a high mark. Instead, in the outcome simulation exercise condition, students were instructed to picture themselves realizing that they scored a high mark on a test or assignment in the class in which they were enrolled. Their task was to describe 5 actions they would perform after realizing that they obtained a high mark. To ensure comfort,
instructions were presented in both English and Arabic, and students were free to use either language in their responses. In the mental simulation conditions, before starting the midterm test and after having completed it, students predicted their performance. For both prospective and retrospective estimates, students also expressed their confidence in the prediction made. Students predicted their grade (see Hacker et al., 2000) on a scale from 0 to 100 and expressed their confidence in the prediction on a scale from 0 (not at all confident) to 4 (extremely confident). Estimates of grades, which were made both before and after the midterm test, captured students' accuracy of self-assessment when compared to actual grades, whereas reports of confidence indicated the extent to which such estimates were trusted by the students who made them (i.e., subjective confidence).

Table 1

Sequence of Activities

Timing | Activity
1st | Process or outcome simulation exercise
2nd | Assignments (1st half of the semester)
3rd | Process or outcome simulation exercise
4th | Grade prediction and subjective confidence rating
5th | Midterm test
6th | Grade prediction and subjective confidence rating
7th | Assignments (2nd half of the semester)
8th | Participation score and class grade

During the semester, grades and feedback regarding assignments and the midterm test were delivered to students within a few days of submission. At the completion of the semester, participation scores were computed by the instructors. The participation score measured behavioral engagement as estimated by the instructor from the contribution that each student made to the online class (attendance; the frequency and quality of questions asked, answers given, and comments made during lectures or class discussions; etc.). Due to institutional restrictions, final exam grades were not made available. All grades/scores were computed as percentages.

Because of the COVID-19 pandemic, all classes were taught online in the synchronous (real-time) mode. As a result, the simulation exercises were performed through Blackboard and closely monitored by each instructor, who served as a content facilitator (Williams and Peters, 1997). Pedagogically, the synchronous virtual space replicated many aspects of the face-to-face space. Blackboard Collaborate, which is a real-time video conferencing tool equipped with audio, video, and application-sharing tools, a text-chat box, and a whiteboard, allowed students to interact with the instructor during lectures and participate in class discussions. Blackboard gave them access to study materials and resources, such as study guides, textbooks, and videos. Each online course was characterized by the instructor's effort to maximize (a) learner-content interaction, (b) learner-instructor interaction, and (c) learner-learner interaction, which are the criteria set by Moore (1989) as necessary for successful distance education. There was an important difference though. Although interactions could occur through typed or spoken messages, the camera function was disabled for cultural and religious reasons. Thus, students and faculty could not see each other.

Results

The results reported below are significant at the .05 level and are organized by the question they were intended to answer. Descriptive statistics are reported in Table 2. Whenever an analysis of variance (ANOVA) yielded significant results, tests of simple effects were carried out. The sequentially rejective multiple test procedure was adopted to adjust the alpha of each test and thus determine significance while controlling for experiment-wise alpha (Chen et al., 2017). Before the analyses of performance measures, estimates, and confidence ratings were conducted, the students' written responses to the simulation instructions were examined. They offered evidence of compliance and established that the two types of simulation instructions yielded qualitatively different responses. At the start of the field
experiment, the academic experience (as measured by credit hours completed) of students assigned to the three conditions did not differ, F(2, 535) < 1, ns (process simulation: M = 41.88, SEM = 2.49; outcome simulation: M = 41.02, SEM = 2.61; baseline: M = 40.74, SEM = 1.88).
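The sequentially rejective procedure referenced above (Holm's method is the standard example discussed by Chen et al., 2017) compares the k-th smallest p-value against alpha divided by the number of tests not yet rejected. A minimal sketch, with placeholder p-values rather than values from this study:

```python
# Holm's sequentially rejective procedure: a minimal sketch.
# The p-values below are hypothetical placeholders, not results from this study.

def holm_adjust(p_values, alpha=0.05):
    """Return a list of booleans (reject/retain) in the original test order."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the k-th smallest p-value against alpha / (m - k).
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values are retained
    return reject

print(holm_adjust([0.006, 0.013, 0.001, 0.040]))  # [True, True, True, True]
```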

Table 2

Mean and Standard Error of the Mean (in Parentheses) of Main Dependent Variables

Measure | Scale | Process Simulation | Outcome Simulation | Baseline
Assignments (1st half of semester) a,c | 0-100 | 85.00 (1.35) | 89.82 (1.16) | 84.53 (0.90)
Test grade prediction (before test) | 0-100 | 90.43 (0.70) | 89.51 (1.03) |
Accuracy before the test | -100 to +100 | +5.86 (1.43) | +6.59 (1.81) |
Subjective confidence in prediction | 0-4 | 2.47 (0.08) | 2.20 (0.09) |
Midterm test | 0-100 | 84.57 (1.38) | 82.92 (1.68) | 82.55 (1.08)
Test grade prediction (after test) | 0-100 | 86.21 (1.01) | 85.15 (1.25) |
Accuracy after the test | -100 to +100 | +1.64 (1.65) | +2.23 (1.50) |
Subjective confidence in prediction | 0-4 | 2.31 (0.09) | 2.00 (0.10) |
Assignments (2nd half of semester) a,b | 0-100 | 90.17 (1.04) | 86.03 (1.30) | 86.28 (0.82)
Participation score | 0-100 | 92.17 (1.67) | 95.57 (0.91) | 93.80 (0.97)
Overall class grade | 0-100 | 86.24 (1.01) | 85.36 (1.10) | 86.79 (0.66)

Note. a significant difference between simulation conditions; b significant difference between process simulation and baseline; c significant difference between outcome simulation and baseline.

Does the Type of Simulation Differentiate Performance?

The impact of mental simulation on grades may be immediate (1st half of the semester) or may take time (2nd half of the semester). Thus, a two-way ANOVA was conducted on assignment performance with timing (1st and 2nd half of the semester) and condition (process simulation, outcome simulation, and baseline) as the independent variables. This analysis yielded a significant interaction, F(2, 535) = 13.75, MSE = 102.06, p < .001, ηp² = .049 (main effects: Fs < 2.74, ns). Performance on assignments completed during the 1st half of the semester was higher for students who carried out the outcome simulation exercise, t(267) = 2.74, p = .006. Instead, performance on assignments completed during the 2nd half of the semester was higher for students who carried out the process simulation exercise, t(267) = 2.50, p = .013. Consistent with these results, in the 1st half of the semester, assignment performance following outcome simulation was superior to that of the baseline condition, t(397) = 3.47, p = .001. In the 2nd half of the semester, performance following process simulation was superior to that of the baseline condition, t(406) = 2.84, p = .005. Baseline performance was not different from performance in the process simulation condition during the 1st half of the semester, and in the outcome simulation condition during the 2nd half of the semester, ts < 1, ns. Furthermore, one-way ANOVAs on midterm test grades and on behavioral engagement (as measured by participation scores) across the three conditions failed to reach significance, Fs < 1.54, ns.
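For readers who wish to see the structure of such an analysis, the sketch below runs a timing x condition ANOVA on simulated assignment grades with statsmodels. Column names and data are assumptions, and timing is treated as a between-rows factor for brevity, whereas the study measured the same students at both time points (a repeated-measures or mixed model would be closer to the actual design).

```python
# Hypothetical sketch of a 2 (timing) x 3 (condition) ANOVA on assignment grades.
# Data are simulated; column names are assumptions, not the authors' materials.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for cond in ["process", "outcome", "baseline"]:
    for timing in ["first_half", "second_half"]:
        for _ in range(30):  # hypothetical group size
            rows.append({"condition": cond, "timing": timing,
                         "grade": rng.normal(85, 8)})
df = pd.DataFrame(rows)

model = ols("grade ~ C(timing) * C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and the interaction
```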

It is important to note that if we had merely focused on midterm test grades, as Pham and Taylor (1999) did, we would have incorrectly concluded that type of simulation does not matter. Instead, the impact of this factor in our study was modulated by the task at hand and its timing. These findings can be understood by first taking into consideration the content of spontaneous comments made by students in class and during debriefings. According to such comments, assignments were much more likely to engage students' problem-solving skills than tests which tended to rely on well-developed study habits. That is, students saw assignments as novel artifacts that needed to be approached strategically, whereas tests tended to activate well-established routines for preparation. This difference in the way assignments and tests were perceived by the students may account for the sensitivity of assignment performance, and the insensitivity of test performance, to the type of mental simulation exercised by the learners. Yet, the timing of assessment may also be relevant as it suggests the existence of qualitatively different drives promoted by the two exercises: a feeling-good drive and a slow-acting one guided by information about the steps to perform, each capable of propelling action but impacting learners' activities sequentially. At least initially, simulating a desirable outcome may be more motivating than simulating the activities to obtain it, as the value of a realistic examination of the labor required by each assignment may sound
like just more work. As a result, a feeling-good drive may be more effective in propelling effort and promoting higher levels of performance in activities perceived as novel problems to solve. However, as experience with the class requirements grows, the informational value of simulating such activities may begin to be recognized, such as the increased accessibility of the steps involved in each assignment, thereby ultimately benefitting performance.

In our study, it was not feasible to measure the expended effort, as measurement through questioning could change what learners would normally do if they were merely exposed to the simulation exercises. Thus, we asked students to engage in an activity that is habitually conducted in each class. Namely, we asked them to predict their grades on a midterm exam, thereby treating grade prediction as a measure of expectancy. We hypothesized that if a motivational gap between the two types of mental simulation exists, it may seep into grade predictions for the midterm test (as indices of students' expectations). The issue is whether a feeling-good drive propelling action versus one guided by information about the steps to perform may differentially shape midterm test predictions, as well as subjective confidence in such predictions. If a feeling-good drive propelling action is the hallmark of the outcome simulation exercise, the predictions of the students who performed such exercise will be more optimistic than those of students who performed the process simulation exercise. In contrast, the informed expectancy fostered by the process simulation exercise may give rise to more accurate predictions. Whether differences in the confidence learners place in such predictions will emerge may depend on whether subjective confidence rests on a feeling-good drive, which can foster overconfidence, or an informed expectancy, which can promote restrained confidence. We examine these issues next.

Does the Type of Simulation Differentiate Grade Estimates and Subjective Confidence?

The midterm grade obtained by each student was subtracted from the estimated one to determine whether the student's performance expectation was optimistic (+), accurate (0), or pessimistic (-). A two-way ANOVA on students' predictions of midterm grades with timing (before and after the midterm test) and condition (process simulation and outcome simulation) as the independent variables yielded a main effect of timing, F(1, 267) = 33.74, MSE = 73.23, p < .001, ηp² = .112, indicating that optimism declined after the test (other Fs < 1, ns). Thus, test experience gave rise to restrained optimism irrespective of the simulation exercise to which one was assigned.

Did participants' optimism reflect inflated estimates? We analyzed the extent to which students' estimates deviated from 0 (accuracy) in each exercise group. Before the test, both outcome and process simulation participants made inflated estimates, t(129) = 3.64, p < .001, and t(138) = 4.09, p < .001, respectively. After the test, estimates were rather accurate, ts < 1.49, ns.
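Concretely, each accuracy score is the signed difference between the predicted and the obtained grade, and the inflation test is a one-sample t-test of those differences against zero. A minimal sketch with invented grades:

```python
# Signed accuracy scores (predicted minus obtained grade) and a one-sample
# t-test against zero (perfect calibration). Grades are invented for illustration.
from scipy import stats

predicted = [92, 88, 95, 70, 85, 90]
obtained = [85, 86, 80, 72, 80, 88]

accuracy = [p - o for p, o in zip(predicted, obtained)]  # + optimistic, - pessimistic
t, p_value = stats.ttest_1samp(accuracy, popmean=0)
print(f"mean bias = {sum(accuracy) / len(accuracy):+.2f}, t = {t:.2f}, p = {p_value:.3f}")
```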

The same two-way ANOVA on subjective confidence indicated that predictions were also made with less confidence after the test, F(1, 267) = 8.95, MSE = .482, p < .003, ηp² = .032. Although the type of mental simulation did not affect learners' expectations, subjective confidence in such expectations was higher among those who performed the process simulation exercise, F(1, 267) = 7.03, MSE = 1.590, p = .008, ηp² = .026 (other F < 1, ns).

In sum, the informed expectancy fostered by the process simulation exercise did not affect the optimism with which estimates were made, but made participants more confident in such estimates. Yet, contrary to the hypothesized effect of the process simulation exercise, its estimates were as inflated before the midterm test and as realistic afterward as those of the outcome simulation condition.

Does Performance Level Modulate the Impact of Type of Simulation on Estimates and Subjective Confidence?

Regression analyses with class performance (i.e., the grade at the end of the semester) and simulation type as the predictors were conducted on the accuracy of estimates and on subjective confidence (see Table 3a). These analyses indicated that performance was an important contributor to the accuracy of the estimates: the higher a student's performance, the more accurate her predictions were. The type of simulation, instead, was an important contributor to the confidence with which estimates were made: the process simulation exercise yielded greater subjective confidence overall than the outcome simulation exercise. Table 3b illustrates these differential contributions. In it, class grades were used to classify students as poor performers (C or below), average performers (B), or good performers (A) for ease of illustration.
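The structure of these regression analyses can be sketched as an ordinary least-squares model with class grade and a simulation-type dummy predicting prediction accuracy. The data below are simulated to mimic the reported direction of the performance effect; nothing here reproduces the study's actual coefficients:

```python
# Hypothetical sketch of regressing prediction accuracy on class performance
# and simulation type. Data are simulated; names are assumptions.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 269
df = pd.DataFrame({
    "class_grade": rng.normal(86, 8, n),
    "process": rng.integers(0, 2, n),  # 1 = process simulation, 0 = outcome
})
# Simulate the reported direction: higher grades -> less inflated estimates.
df["accuracy"] = 60 - 0.6 * df["class_grade"] + rng.normal(0, 5, n)

print(ols("accuracy ~ class_grade + process", data=df).fit().summary())
```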

Table 3a

Regression Analysis with Estimates or Subjective Confidence as the Outcome Variable and Simulation Exercise (Outcome Versus Process) and Class Performance as the Predictors

Table 3b

Mean and Standard Error of the Mean (in Parentheses) of Accuracy of Estimates and Subjective Confidence as a Function of Performance Level and Condition

Measure | Performance Level | Process Simulation | Outcome Simulation

Before the midterm test
Accuracy | Poor | +23.95 (6.14) | +29.29 (4.48)
Accuracy | Average | +5.69 (1.71) | +4.87 (2.76)
Accuracy | Good | +0.05 (1.10) | -2.16 (1.52)
Subjective confidence | Poor | 2.32 (0.18) | 2.03 (0.18)
Subjective confidence | Average | 2.52 (0.13) | 2.19 (0.17)
Subjective confidence | Good | 2.48 (0.11) | 2.28 (0.14)

After the midterm test
Accuracy | Poor | +19.66 (6.45) | +19.24 (3.69)
Accuracy | Average | +2.93 (2.05) | -2.16 (3.05)
Accuracy | Good | -5.24 (1.69) | -2.89 (1.18)
Subjective confidence | Poor | 2.23 (0.17) | 1.90 (0.20)
Subjective confidence | Average | 2.38 (0.16) | 1.91 (0.20)
Subjective confidence | Good | 2.28 (0.13) | 2.09 (0.13)

Discussion

The results of the present field study can be summarized in two points. First, outcome and process simulation exercises had a different impact on assignment performance and no impact on midterm test performance. Outcome simulation promoted performance during the 1st half of the term, whereas process simulation enhanced performance during the 2nd half of the term. Second, the type of mental simulation did not affect estimates of future test performance, but it did affect the confidence with which such estimates were made. Process simulation, which was assumed to inform learners of the steps necessary to complete a task, made learners more confident in their estimates. Irrespective of the type of simulation, estimates were generally more realistic after students had experienced the test.

The impact of the outcome simulation exercise was expected to resemble that obtained by Sherman et al. (1981), who asked participants to envision and then explain a desirable or undesirable outcome of an upcoming task before estimating the probability of completing the task. Participants who explained a desirable outcome not only had higher expectations of performance but also performed at a higher level than control participants who did not imagine and explain outcomes. In our study, the benefits of simulating a desirable outcome were fleeting (limited to the 1st half of the term) and relegated to tasks that participants approached as novel problems to solve (assignments). However, simulating outcomes affected neither participants' predictions of performance on an upcoming test nor their subjective confidence in such predictions. Thus, in our study, feeling-good drives were found to have a time-limited impact and to be task-specific. Oettingen (1995) argued that outcome simulation might be counterproductive. It can reduce expended effort because it anticipates the attainment of success, thereby preventing one's appreciation of the amount and quality of the effort required for the envisioned success to become reality. In support of Oettingen's proposal, Pham and Taylor (1999) found that outcome simulation reduced the number of hours spent preparing for an upcoming midterm test and lowered learners' expectations of performance. We did not find evidence of this pattern of effects. During the debriefing, we asked students about their test preparation activities. There was no evidence from the students' self-reports that they studied less or differently in the outcome simulation condition.

Of particular relevance to educators are the results of the process simulation exercise, which demonstrated its effectiveness towards the 2nd half of the semester, when students experienced more intense fatigue. It is important to note that process simulation resembles scenario construction, whereby a script-like description of a future event, such as an upcoming test, is broken down into components, thereby helping people not only identify the cluster of activities that are critical to successful performance but also envision activities along a timeline that connects the point of origin to the desired end-point (Healey and Hodgkinson, 2008; Schoemaker, 1991). Process simulation is an informative exercise as it gives concreteness to the actions to be performed in an upcoming task, defines uncertainty, and highlights causal connections.

Our findings are also different from those of Taylor and Pham (1999), who asked participants, prior to writing an essay, to simulate the outcome of writing the essay, to simulate preparing the essay, or to do neither (baseline condition). They found that both types of simulation improved the quality of an essay written on an expected topic. If the assignments that our students had to complete are equated to the problem-solving activity of writing essays on expected topics, our results appear to be inconsistent with those of Taylor and Pham. However, their assessment did not cover performance across an entire semester; rather, it was a one-time affair. If we were to combine the assignments of the 1st and 2nd halves of the semester, then our results would resemble those obtained by Taylor and Pham. The latter also reported that outcome simulation enhanced motivation and self-efficacy, whereas process simulation facilitated essay-writing planning. We found, instead, that process simulation enhanced subjective confidence in the estimated outcome of a midterm test, which may be related to the facilitation that process simulation is assumed to exercise on planning.


Conclusions

Although the present findings give educators practical information on the nature and boundaries of the impact of different types of mental simulation, some limitations need to be considered in future research. The sample included only female students, leaving open the question of whether the findings generalize across genders. In compliance with ethical considerations, the students in the baseline group were merely matched with those of the simulation groups, rather than being randomly selected. Matching, albeit successful, does not entail identity, thereby leaving factors for which matching was not
exercised free to exert their influence on participants' cognition and action.

Repeated practice with process simulation may help poor-performing students not only engage in behaviors that maximize their chances of success but also feel that they have a role to play in doing well. Yet, the quality of the simulation, its frequency and timing, and the extent to which it requires an instructor's supervision are factors that need to be explored in future research. As many students struggle to succeed in higher education, understanding the effectiveness of exercises whose goal is to enhance performance can guide failing students toward behaviors that increase their chances of success. This study adds to a growing body of literature that empowers students to take charge of their learning by considering different strategies to attain academic success.

Acknowledgments

The authors are grateful to the members of the Cognitive Science Research Cluster at Prince Mohammad Bin Fahd University (PMU) as well as the members of the PMU Undergraduate Research Society for their assistance and feedback, and to the students who participated in the study.

Conflict of interests

The authors declare no conflict of interest.

References

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. Longman. https://eduq.info/xmlui/handle/11515/18345
Anderson, T. (2003). Getting the mix right again: An updated and theoretical rationale for interaction. International Review of Research in Open and Distance Learning, 4(2), 9-14. https://doi.org/10.19173/irrodl.v4i2.149
Appleton, J. J., Christenson, S. L., Kim, D., & Reschly, A. L. (2006). Measuring cognitive and psychological engagement: Validation of the student engagement instrument. Journal of School Psychology, 44(5), 427-445. https://doi.org/10.1016/j.jsp.2006.04.002
Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives. Cognitive domain. McKay.
Bloom, B. S. (1976). Human characteristics and school learning. McGraw Hill. https://psycnet.apa.org/record/1977-22073-000
Brookings, J. B., & Serratelli, A. J. (2006). Positive illusions: Positively correlated with subjective well-being, negatively correlated with a measure of personal growth. Psychological Reports, 98(2), 407-413. https://doi.org/10.2466/pr0.98.2.407-413
Chen, S. Y., Feng, Z., & Yi, X. (2017). A general introduction to adjustment for multiple comparisons. Journal of Thoracic Disease, 9(6), 1725-1729. https://doi.org/10.21037/jtd.2017.05.34
Dennis, J. (2003). Problem-based learning in online vs. face-to-face environments. Education for Health: Change in Learning & Practice, 16(2), 198-209. https://doi.org/10.1080/1357628031000116907
Druckman, D. (2004). Be all that you can be: Enhancing human performance. Journal of Applied Social Psychology, 34(11), 2234-2260. https://doi.org/10.1111/j.1559-1816.2004.tb01975.x
Eison, J., Janzow, F., & Pollio, H. R. (1993). Assessing faculty orientations towards grades and learning: Some initial results. Psychological Reports, 73(2), 643-656. https://doi.org/10.2466/pr0.1993.73.2.643
Fanning, P. (1994). Visualization for change (2nd ed.). New Harbinger Publications.
Farias, G., Farias, C. M., & Fairfield, K. D. (2010). Teacher as judge or partner: The dilemma of grades versus learning. Journal of Education for Business, 85(6), 336-342. https://doi.org/10.1080/08832321003604961
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59-109. https://doi.org/10.3102/00346543074001059
Hacker, D. J., Bol, L., Horgan, D. D., & Rakow, E. A. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92(1), 160-170. https://doi.org/10.1037/0022-0663.92.1.160
Healey, M. P., & Hodgkinson, G. P. (2008). Troubling futures: Scenarios and scenario planning for organizational decision making. In G. P. Hodgkinson & W. H. Starbuck (Eds.), The Oxford handbook of organizational decision making (pp. 565-585). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199290468.003.0030
Hodges, C., Moore, S., Lockee, B., Trust, T., & Bond, A. (2020). The difference between emergency remote teaching and online learning. Educause Review, 27. https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning
Horowitz, M. J. (1976). Stress response syndromes. Aronson.
Koehler, D. J. (1991). Explanation, imagination, and confidence in judgment. Psychological Bulletin, 110(3), 499-519. https://doi.org/10.1037/0033-2909.110.3.499
Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41(4), 212-218. https://doi.org/10.1207/s15430421tip4104_2
Laplante, P. (2020). Contactless U: Higher education in the postcoronavirus world. Computer, 53(7), 76-79. https://doi.org/10.1109/MC.2020.2990360
Linden, D. E., & Skottnik, L. (2019). Mental imagery and brain regulation: New links between psychotherapy and neuroscience. Frontiers in Psychiatry, 10, 779. https://doi.org/10.3389/fpsyt.2019.00779
Locke, E. A. (1996). Motivation through conscious goal setting. Applied and Preventive Psychology, 5(2), 117-124. https://doi.org/10.1016/S0962-1849(96)80005-9
Locke, E. A., & Latham, G. P. (1990). Work motivation and satisfaction: Light at the end of the tunnel. Psychological Science, 1(4), 240-246. https://doi.org/10.1111/j.1467-9280.1990.tb00207.x
Marlatt, G. A., & Donovan, D. M. (Eds.). (2005). Relapse prevention: Maintenance strategies in the treatment of addictive behaviors. Guilford Press.
Miller, T. M., & Geraci, L. (2011). Unskilled but aware: Reinterpreting overconfidence in low-performing students. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(2), 502-506. https://doi.org/10.1037/a0021802
Moore, M. G. (1989). Three types of interaction. The American Journal of Distance Education, 3(2), 1-6. https://doi.org/10.1080/08923648909526659
Murray, L., & Barnes, A. (1998). Beyond the "wow" factor: Evaluating multimedia language learning software from a pedagogical viewpoint. System, 26(2), 249-259. https://doi.org/10.1016/S0346-251X(98)00008-6
Ng, K. C. (2007). Replacing face-to-face tutorials by synchronous online technologies: Challenges and pedagogical implications. The International Review of Research in Open and Distributed Learning, 8(1), 1-15. https://doi.org/10.19173/irrodl.v8i1.335
Oettingen, G. (1995). Positive fantasy and motivation. In P. M. Gollwitzer & J. A. Bargh (Eds.), The psychology of action: Linking cognition and motivation to behavior (pp. 219-235). Guilford Press.
Orlick, T., & Partington, J. T. (1986). Psyched: Inner views of winning. Coaching Association of Canada.
Peale, N. V. (1982). Positive imaging: The powerful way to change your life. Fawcett Crest.
Pham, L. B., & Taylor, S. E. (1999). From thought to action: Effects of process- versus outcome-based mental simulations on performance. Personality and Social Psychology Bulletin, 25(2), 250-260. https://doi.org/10.1177/0146167299025002010
Pilotti, M. A., Alaoui, K. E., Mulhem, H. A., & Salameh, M. H. (2020). A close-up on a predictive moment: Illusion of knowing or lack of confidence in self-assessment? Journal of Education. https://doi.org/10.1177/0022057420944843
Pilotti, M., El Alaoui, K., Mulhem, H., & Al Kuhayli, H. (2019). The illusion of knowing in college. Europe's Journal of Psychology, 15(4), 789-807. https://doi.org/10.5964/ejop.v15i4.1921
Pollio, H. R., & Beck, H. P. (2000). When the tail wags the dog: Perceptions of learning and grade orientation in, and by, contemporary college students and faculty. The Journal of Higher Education, 71(1), 84-102. https://doi.org/10.1080/00221546.2000.11780817
Putwain, D. W., Nicholson, L. J., Pekrun, R., Becker, S., & Symes, W. (2019). Expectancy of success, attainment value, engagement, and achievement: A moderated mediation analysis. Learning and Instruction, 60, 117-125. https://doi.org/10.1016/j.learninstruc.2018.11.005
Richardson, K. (2020). Online tools with synchronous learning environments. In P. Wachira & J. Keengwe (Eds.), Handbook of research on online pedagogical models for mathematics teacher education (pp. 68-78). IGI Global. https://www.igi-global.com/chapter/online-tools-with-synchronous-learning-environments/243500
Salmon, G. (2000). E-moderating: The key to teaching and learning online. Kogan Page.
Schoemaker, P. J. (1991). When and how to use scenario planning: A heuristic approach with illustration. Journal of Forecasting, 10(6), 549-564. https://doi.org/10.1002/for.3980100602
Sherman, S. J., Skov, R. B., Hervitz, E. F., & Stock, C. B. (1981). The effects of explaining hypothetical future events: From possibility to probability to actuality and beyond. Journal of Experimental Social Psychology, 17(2), 142-158. https://doi.org/10.1016/0022-1031(81)90011-1
Silver, R. L., Boon, C., & Stones, M. H. (1983). Searching for meaning in misfortune: Making sense of incest. Journal of Social Issues, 39(2), 81-101. https://doi.org/10.1111/j.1540-4560.1983.tb00142.x
Steeples, C., Jones, C., & Goodyear, P. (2002). Beyond e-learning: A future for networked learning. In C. Steeples & C. Jones (Eds.), Networked learning: Perspectives and issues (pp. 323-341). Springer-Verlag. https://doi.org/10.1007/978-1-4471-0181-9_19
Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103(2), 193-210. https://doi.org/10.1037/0033-2909.103.2.193
Taylor, S. E., & Pham, L. B. (1999). The effect of mental simulation on goal-directed performance. Imagination, Cognition and Personality, 18(4), 253-268. https://doi.org/10.2190/VG7L-T6HK-264H-7XJY
Taylor, S. E., Pham, L. B., Rivkin, I. D., & Armor, D. A. (1998). Harnessing the imagination: Mental simulation, self-regulation, and coping. American Psychologist, 53(4), 429-439. https://doi.org/10.1037/0003-066X.53.4.429
Wang, Y., Chen, N. S., & Levy, M. (2010). Teacher training in a synchronous cyber face-to-face classroom: Characterizing and supporting the online teachers' learning process. Computer Assisted Language Learning, 23(4), 277-293. https://doi.org/10.1080/09588221.2010.493523
Williams, V., & Peters, K. (1997). Faculty incentives for the preparation of web-based instruction. In B. H. Khan (Ed.), Web-based instruction (pp. 107-110). Educational Technology Publications.
Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45(2), 107-114. https://doi.org/10.1080/17439884.2020.1761641
Yang, M. L., Chuang, H. H., & Chiou, W. B. (2009). Long-term costs of inflated self-estimate on academic performance among adolescent students: A case of second-language achievements. Psychological Reports, 105(3), 727-737. https://doi.org/10.2466/PR0.105.3.727-737
Zimmerman, J. (2020, March 10). Coronavirus and the great online-learning experiment. Chronicle of Higher Education. https://www.chronicle.com/article/Coronavirusthe-Great/248216
