
THEORY AND METHODOLOGY OF PROFESSIONAL EDUCATION

UDC 303.425.2 doi: 10.21702/rpj.2016.4.17

TECHNOLOGY INTEGRATION IN POSTSECONDARY EDUCATION: A SUMMARY OF FINDINGS FROM A SET OF RELATED META-ANALYSES

Evgueni F. Borokhovski1*, Robert M. Bernard1, Rana M. Tamim2, Richard F. Schmid1

1 Concordia University, Montreal, QC, Canada

2 Zayed University, Dubai, United Arab Emirates

* Corresponding author. E-mail: eborokhovski@education.concordia.ca

Although the overall research literature on the application of educational technologies to classroom instruction tends to favor their use over their non-use, these results vary considerably depending on what kind of technology is used, who it is used with and, more importantly, under what circumstances and for what instructional purposes it is used. The relatively recent but well-developed and powerful methodology of systematic reviews, particularly quantitative syntheses (also known as meta-analyses), is especially suitable for addressing questions of this type by systematically summarizing research evidence in given areas of interest in the social sciences.

This meta-analysis summarizes data from 674 independent primary studies that compared higher degrees of technology use in the experimental condition with less technology in the control condition, in terms of their effects on student learning outcomes in postsecondary education. The result was an overall average weighted effect size of g = 0.27 (k = 879, p < .01), indicating a low but significant positive effect of technology integration on learning. The follow-up analyses revealed the influence of educational technology used for cognitive support, of blended learning instructional settings, of designed interaction treatments, and of technology integration in teacher training, especially when student-centered pedagogical frameworks are used. These findings are of potentially high interest and applied value for educational practitioners, including teachers and school administrators, as well as for instructional designers and developers of educational software.

Keywords: meta-analysis, systematic reviews, technology integration in education, effectiveness of teaching and learning, learning outcomes, effect size, student-student interactions, blended learning, cognitive support, teacher education and training.

For citation: Borokhovski E. F., Bernard R. M., Tamim R. M., Schmid R. F. Technology integration in postsecondary education: A summary of findings from a set of related meta-analyses. Russian Psychological Journal, 2016, V. 13, no. 4, pp. 284-302.

Original manuscript received 26.10.2016

Introduction

When educational researchers and practitioners join in public discussions about the value of modern learning technologies, researchers are encouraged to take at least a few steps beyond fashionable trends emanating from the latest release of cutting-edge applications. The real challenge is to explore more systematically and in depth what works in education for student learning and to effectively link ever-advancing technological functionalities to educational theory and practice. Well-established pedagogical frameworks may or may not guide the effective use of various new educational technologies, and pioneering technological features may or may not make a difference in supporting learning. Research in education intends to sort out the "mays" from the "may nots" in an effort to explain when and why seemingly promising technologies lead to successful educational practices while others don't. This is the challenge of primary researchers and ultimately the concern of the meta-analyst.

This paper provides a summary of findings from a large-scale meta-analysis of classroom technology integration studies in postsecondary education [24] and its several follow-ups that specifically focus on sub-collections of studies addressing blended learning [3], designed interaction treatments [8], and the pedagogical underpinnings of technology use in teacher education [28].

The evolving role of educational technology

Like it or not, reliance on computers in various aspects of daily life is a reality that significantly impacts nearly everything we do, both personally and professionally. Why then are educational researchers still engaged in debate over technology's effectiveness for teaching and learning? Some, like [11], have insisted on a rather auxiliary role for educational technology (i. e., its functions are not unique, but can be performed in more conventional or effective ways), while others [e. g., 19] have argued that its role in education is more substantive and transformative. The disagreement is probably rooted in the history of educational technology itself. Originally, technology was utilized almost exclusively to deliver instructional content, and as a medium was no more effective than a human teacher, even an expert one. For instance, early studies on distributed closed-circuit television versus live teaching [10] found no differences between live teachers and televised teachers. Even the development of much more sophisticated computer tools and applications (e. g., computer-assisted learning, multimedia) did not improve student learning outcomes enough for the issue to be considered resolved unequivocally in favor of educational technology [e. g., 25]. It was the arrival of what [17] refers to as computer-based cognitive tools that appears to have tipped the scales from what can be achieved using multiple alternative media (i. e., the argument of [11]) to what can be achieved primarily through computing (i. e., the argument of [19]). Computer-based communication, simulations, serious games, blogs and wikis, social networking, search and retrieval, and the like promise unique benefits that go well beyond the simple transfer of content from teacher to student. Add productivity software like spreadsheets, statistical packages, concept-mapping programs, and a host of other student-oriented applications, and we can see that Clark's arguments [11], while still valid in certain computing domains, must be considered insufficient for examining the overall benefits (and potential deficits) of the introduction and continued use of computing in education.

Examining the big picture - The Schmid et al. meta-analysis

While primary sample-based research is intended to provide an inferential link between a sample under scrutiny and the population that it is presumed to represent, meta-analysis makes this link more explicit by examining the effects of a multitude of primary studies that have attempted to address the same question. Inferential statistics and associated assessment of significance are unnecessary because a meta-analysis investigates the entire population of primary studies with like characteristics and conducted within a specified timeframe. In this sense, meta-analysis looks at the "big picture," or the characteristics of the phenomenon in terms of the body of primary studies that have examined it.

The [24] meta-analysis, which forms the basis for the various adaptations that are presented here, attempts to examine the big picture in terms of learning via technology in postsecondary classrooms. Recognizing the limitations of the technology vs. no technology hypothesis (i. e., comparisons between technology-enhanced and technology-free classrooms), [24] divided the entire collection of included studies into those with no technology in the control condition and those with technology in the treatment condition and some technology in the control condition. The timeframe was 1990 through 2010 - twenty years of research on the use of technology in postsecondary classrooms.

The [24] meta-analysis followed from research that also examined the big picture, but where the unit of analysis was the meta-analysis, not the primary study [27]. In this second-order meta-analysis, 25 previously published meta-analyses that cut across all levels of formal education, subject matters and technology types, from the 1970s to the present, were selected from a pool of about 75 meta-analyses and their results synthesized. The analysis revealed a weighted average effect size of g = 0.35 (p < .01), encompassing 1,055 primary studies and 109,700 participants.

The [27] review addressed the big question based on meta-analyses that looked at the technology vs. no-technology question - the kind that [11] decried as flawed and full of confounds. The authors concluded that, generally speaking, technology does enhance learning, even if only to a relatively small extent.

One of the problems with a second-order meta-analysis, however, as pointed out by [13], is that it represents a limited means for addressing the host of peripheral questions that can only be settled in a primary meta-analysis, where coding decisions can be made by the meta-analyst and the synthesis is conducted at the more granular level of the individual effect size. As a result, [27] did not examine any substantive moderator variables and thus provided little texture or nuance to the overall results.

It is these issues, plus the fact that the educational research landscape is rapidly changing to include many new educational technology applications and relatively recent "technology vs. technology" types of experimental settings (i. e., a more reasonable question in modern education), that motivated the effort by [24] to perform a primary meta-analysis of technology use in postsecondary classrooms.

The [24] meta-analysis reports the overall weighted average effects of technology use on achievement and attitude outcomes and explores a fairly large set of moderator variables in an attempt to explain how technology treatments lead to positive or negative effects when educational technology is broadly understood in terms of the earlier definition by [23] as a "...variety of modalities, tools, and strategies for learning, [whose] effectiveness, therefore, depends on how well [they] help teachers and students achieve the desired instructional goals" (p. 19).

Out of an initial pool of 11,957 study abstracts, 1,105 were chosen for analysis, yielding 879 achievement effect sizes after pre-experimental designs and studies with obvious methodological confounds were removed. The random effects weighted average effect size for achievement was g = 0.27 (k = 879, p < .01). The collection of achievement outcomes was divided into two sub-collections, according to the amount of technology integration in the control condition. These were "no technology" in the control condition (g = 0.25, k = 479, p < .01) and "some technology" (though necessarily a lower degree of its use than in the treatment condition) in the control condition (g = 0.31, k = 400, p < .01). Random effects multiple meta-regression analysis was run on each sub-collection, revealing three significant predictors (subject matter, degree of difference in technology use between the treatment and the control, and pedagogical uses of technology). The set of predictors for each sub-collection was both significant and homogeneous. Differences were found among the levels of all three moderators, but particularly among varieties of cognitive support applications. These findings are presented in greater detail in the Results section.

In summary, the [24] meta-analysis was intended to: 1) overcome some of the limitations of the previous reviews, including [27]; 2) provide the most comprehensive first-order meta-analysis of technology use in postsecondary classroom education; 3) encompass studies with "no technology in the control condition" and studies with "various degrees of technology in each condition"; and 4) look at the pedagogical use of a broad range of educational technologies and other important instructional moderator variables.

Examining the details: Three follow-up meta-analyses

A set of subsequent (follow-up) analyses addressed several additional questions we judged to be of the utmost importance. More specifically, we investigated the effects of various purposes of technology use (with the focus on cognitive support for learning), technology use in blended learning, instructional settings (interaction treatments) designed specifically to enable student collaborative work, and various pedagogical approaches to teacher education. Following is a brief rationale for each of these follow-ups.

Major purpose of technology use

As educational technology advanced beyond media whose primary role was to deliver content to students (e. g., instructional television, multi-media applications, computer-assisted instruction), the following question arose in the theoretical and practical literature: "How can computers be used to support student cognition without directly instructing them?" The [17] study addresses the practical issue, while [12] provides a possible answer to the theoretical question. He argues that the most compelling role of computing in learning is its ability to afford "cognitive efficiencies" to students. In most, if not all, learning situations there is shared cognition among the learner, the task itself, and the tools that the learner uses in the process. Computer-based cognitive tools, he argues, could be designed and implemented to fulfill one of these roles. His rationale was that the more learners can distribute cognition "outside of their heads," the more cognition can be devoted to the process of learning new material. Therefore, one purpose of the [24] review was to explore whether and to what extent cognitive tools promote student achievement in learning environments involving technology, compared to other roles that computers might assume (e. g., presentation, search and retrieval, communication).

The effects of blended learning

There is a growing literature of studies investigating blended learning, an instructional approach that combines elements of face-to-face and online instruction. It is sometimes argued that blended learning is the "best of both worlds" because it is a marriage of the best elements of the two practices, although it is still arguable what these "best elements" are and how they fit together. To date, there have been only three meta-analyses devoted to blended learning ([3], partly reported here; [21]; and [26]).

In the [21] meta-analysis, blended learning conditions were found to significantly outperform fully face-to-face classroom instruction (g = 0.35, k = 23, p = .001). Several moderator variables were significant: 1) blended learning outperformed self-study; 2) collaborative learning and teacher-directed expository instruction significantly outperformed self-study; 3) in computer-mediated communication with the instructor and among students, the asynchronous mode alone was more effective than a combination of asynchronous and synchronous modes; and 4) undergraduate students benefited more from blended learning than graduate students.

In the [26] study, the overall effect size favored blended learning (g = 0.34, k = 24, p < .05) for achievement based on objective effectiveness outcomes and (g = 0.34, k = 11, p < .05) for achievement based on subjective effectiveness outcomes. These results closely mirror the [21] results. The presence or absence of quizzes appeared to differentiate the overall result in terms of objective effectiveness.

The [3] study, an offshoot of [24], addressed the following research questions: What is the impact of blended learning (i. e., courses that combine face-to-face and online learning) on the achievement of higher education students? How do various pedagogical factors (e. g., the amount of time spent online and the purpose of technology use) and course demographics (e. g., subject matter) moderate the overall average effect size? These findings appear in the Results section.

Designed interaction treatments

Institutions of higher education provide students with the opportunity to interact with each other both inside and outside the classroom. Three major forms of interaction, extrapolated from the distance education literature [e. g., 1, 22], are important for effective learning. Student-student interaction refers to interaction among individual students or among students working in small groups. Student-teacher interaction traditionally focuses on, but is not limited to, classroom-based dialogue between students and the instructor. Finally, student-content interaction refers to students interacting with the subject matter under study to construct meaning, relate it to personal knowledge, and apply it to problem solving. Meta-analytical findings support the overall positive influence of all three types of interaction on learning outcomes, with specific emphasis on student-student interactions [4]. Naturally, all three of these forms can occur in higher education classrooms without technology, but it was the intention of this study to investigate student-student interaction, supported by technology, in conditions that were specifically designed to support collaborative learning. Thus, the research question became: Are designed interaction treatments (i. e., intentionally implemented collaborative instructional conditions that are meant to increase student learning) more effective than contextual interaction treatments (i. e., learning setups that contain conditions for student-student interaction to occur, but are not intentionally designed to create collaborative learning environments) in technology-enhanced classroom instruction in higher education? This question is important because it emphasizes the use of instructional design to facilitate the potentially positive effects of technology use in higher education classrooms [7].

Technology use in teacher education

This section focuses on the subset of studies that specifically addressed technology use in educational and teacher training programs. The main objective is to further explore the impact of technology use in this specific context in an attempt to determine what aspects of teaching practices set education apart from STEM and non-STEM disciplines. Moreover, it aims to investigate the nature of the most effective pedagogical frameworks supporting successful technology integration. The focus is on how technology is used by educational professionals to achieve educational goals.

While it is impossible to review all pedagogical frameworks available to instructors, it seems appropriate to focus on student-centered instructional strategies. It is argued that the learner-centered approach supports learning by emphasizing the student's role in the instructional environment, thus shifting the focus from knowledge transmission to the actual learning process [e. g., 20]. Regardless of technology integration, general teaching strategies that are aligned with the student-centered philosophy include cooperative and collaborative learning [9, 16], problem-based learning [18], and the provision of elaborate feedback [14].

Research questions

To summarize, the meta-analysis under consideration and its follow-ups were designed to answer questions about the impact of instructional technology on postsecondary student achievement outcomes. Specifically, the research questions addressed were as follows:

What is the weighted average effect size and variability for studies that investigate the impact of the instructional uses of technology on postsecondary student achievement outcomes?

Is there a difference in average effect sizes for achievement outcomes associated with major purposes of technology use?

What is the weighted average effect size and variability for studies that investigate the impact of the instructional uses of technology in so-called blended educational settings?

Is there a difference in average effect sizes for achievement outcomes associated with designed vs. contextual interaction treatments?

Is the effectiveness of instructional technology in teacher training dependent on a particular pedagogical approach?

Method

To facilitate navigation through the review, here is a brief summary of major terms and definitions used in the meta-analysis, followed by a set of inclusion/exclusion criteria and the key aspects of the review methodology (for the complete description of the methodology, please see [24]).

Terms and definitions

Educational technology is understood here according to [23] as quoted previously. The degree of technology use was the primary determinant for assigning groups to either the experimental or the control condition. This distinction is important because it specifies the +/- valence of the effect size. Two types of studies were found, those that contained no technology in the control condition and those that contained some technology in the control condition. In the former class of studies, the assignment of the experimental and control group designation was clear. In the latter case, the differential use of technology in the two conditions was rated. The condition containing the "highest degree of technology use" was designated the treatment condition and the alternative condition was the control. There were three possible interpretations of the degree of technology use. The experimental group was considered: a) to contain the longest or most frequent exposure to technology tools; b) to contain more advanced technology (i. e., enabling more functions); and/or c) to employ a larger number of technology tools.
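Since the condition with the higher degree of technology use is always coded as the treatment, the +/- valence of every effect size follows mechanically from that designation. The following is a minimal illustrative sketch of such a coding rule; the additive numeric rating over the three criteria above is our assumption for illustration, not the review's actual instrument.

```python
# Illustrative only: "degree of technology use" modeled as an additive score
# over the three criteria named above; the actual rating procedure may differ.
def assign_conditions(cond_a, cond_b):
    """Return (treatment, control): the higher-technology condition is the
    treatment, which fixes the +/- valence of the effect size."""
    degree = lambda c: c["exposure"] + c["sophistication"] + c["n_tools"]
    return (cond_a, cond_b) if degree(cond_a) >= degree(cond_b) else (cond_b, cond_a)

lecture = {"name": "lecture with slides", "exposure": 1, "sophistication": 1, "n_tools": 1}
sim_lab = {"name": "lecture plus simulation lab", "exposure": 2, "sophistication": 3, "n_tools": 2}
treatment, control = assign_conditions(lecture, sim_lab)
print(f"treatment: {treatment['name']}; control: {control['name']}")
```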

The following major purposes of technology use were identified and analyzed in the reviewed studies:

1) to promote communication and/or to facilitate the exchange of information. This category includes technology that enables a higher level of interaction between individuals (i. e., two-way communications among learners and between learners and the teacher);

2) to provide cognitive support for learners. This category encompasses various technologies that enable, facilitate, and support learning by providing cognitive tools (e. g., concept maps, simulations, wikis, different forms of elaborate feedback, spreadsheets);

3) to facilitate information search and retrieval. This type of technology is intended to enable and/or facilitate access to additional information (e. g., web-links, search engines, electronic databases);

4) to enable or enhance content presentation (e. g., PowerPoint presentations, graphical visualizations, computer tutorials with limited interactive features).

When more than one purpose was identified, codes indicating multiple purposes were created (e. g., cognitive support plus presentational support). Achievement outcomes included various objective measures of academic performance (e. g., exam/test scores), but not self-evaluation. A wide spectrum of moderator variables was coded: methodological (e. g., research design), instructional (e. g., purpose of technology use, pedagogical approach), demographic (e. g., subject matter), and publication-related (e. g., date, source).

Inclusion/exclusion criteria and review procedure

Studies under review that were to be included had to have the following characteristics: 1) be published no earlier than 1990; 2) be publicly available (or archived); 3) address the impact of computer technology on student achievement and/or attitudes; 4) contain at least one between-group comparison, where one group fits the definition of the experimental condition and the other group the definition of the control condition, using the criterion of the degree of technology use (higher vs. lower); 5) be conducted in a formal postsecondary educational setting; 6) represent classroom or blended instructional environments, but not distance education; and 7) contain sufficient statistical information for effect size extraction.

Failure to meet any of these criteria led to the exclusion of the study, with the reason(s) for rejection documented for further reporting. Two researchers working independently rated the studies, first at the abstract level and then at the full-text level, on a scale from 1 (definite exclusion) to 5 (definite inclusion). All disagreements were discussed until they were resolved, inviting a third opinion when necessary; initial agreement rates were calculated as Cohen's kappa (κ) and as Pearson's r (where appropriate). Similarly, two researchers participated in all other data extraction procedures (i. e., effect size extraction and study feature coding).
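For readers unfamiliar with the agreement statistic, Cohen's kappa can be computed directly from the two raters' paired decisions. Below is a minimal sketch; the include/exclude decisions are hypothetical, not the review's actual data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap if the two raters were independent.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical abstract-level decisions by two independent reviewers:
a = ["in", "in", "out", "in", "out", "out", "in", "out", "in", "in"]
b = ["in", "out", "out", "in", "out", "out", "in", "out", "in", "out"]
print(round(cohens_kappa(a, b), 3))  # 0.615 for this toy data
```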

Literature search strategies and data sources

Extensive literature searches were designed to identify and retrieve primary empirical studies relevant to the major research questions. Key terms used in search strategies, with some variations to account for specific retrieval sources, included: ("technolog*," "comput*," "web-based instruction," "online," "Internet," "blended learning," "hybrid course*," "simulation," "electronic," "multimedia," OR "PDAs," etc.) AND ("college*," "university," "higher education," "postsecondary," "continuing education," OR "adult learn*") AND ("learn*," "achievement*," "attitude*," "satisfaction," "perception*," OR "motivation," etc.), excluding "distance education" or "distance learning" in the subject field.


The following databases were among the sources examined: ERIC (WebSpirs), ABI InformGlobal (ProQuest), Academic Search Premier (EBSCO), CBCA Education (ProQuest), Communication Abstracts (CSA), EdLib, Education Abstracts (WilsonLine), Education: A SAGE Full-text Collection, Francis (CSA), Medline (PubMed), ProQuest Digital Dissertations & Theses, PsycINFO (EBSCO), Australian Policy Online, British Education Index, and Social Science Information Gateway.

In addition, Google Internet searches were performed to help identify gray literature, including a search for conference proceedings. Review articles and previous meta-analyses were used for branching, and the tables of contents of major journals in the field of educational technology (e. g., Educational Technology Research & Development) were manually searched.

Effect size calculation and synthesis

A d-type standardized mean difference effect size (i. e., Cohen's d) was used as the common metric and was then transformed into the Hedges' g metric [15] to provide the necessary correction for small sample sizes. The random effects model [6] was the main analytical approach for this meta-analysis. A mixed effects model was used to test differences among levels of moderator variables. In a mixed analysis, average effect sizes for categories of the moderator are calculated using a random effects model, while the variance component Q-Between is calculated across categories using a fixed effect model [6]. All analyses, including sensitivity and publication bias analyses, were performed in Comprehensive Meta-Analysis™ 2.2.048 [5].
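For readers who want the computational details, the sketch below implements the standard d-to-g conversion and a random-effects pooled mean using one common estimator of the between-study variance (the DerSimonian-Laird method-of-moments approach described in [6, 15]); the study values in the usage example are hypothetical.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d with Hedges' small-sample correction J [15]."""
    # Pooled standard deviation of the treatment and control groups.
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)  # correction factor J
    # Sampling variance of g (Borenstein et al. [6]).
    v = j**2 * ((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return j * d, v

def random_effects_mean(gs, vs):
    """DerSimonian-Laird random-effects pooled mean: weights 1 / (v_i + tau^2)."""
    w = [1 / v for v in vs]
    fixed = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
    q = sum(wi * (gi - fixed)**2 for wi, gi in zip(w, gs))  # Q-Total
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)  # between-study variance estimate
    w_star = [1 / (v + tau2) for v in vs]
    return sum(wi * gi for wi, gi in zip(w_star, gs)) / sum(w_star)

# Hypothetical usage: (mean_t, mean_c, sd_t, sd_c, n_t, n_c) per study.
studies = [(82, 75, 12, 13, 40, 38), (71, 70, 10, 11, 55, 60), (64, 58, 15, 14, 30, 33)]
gs, vs = zip(*(hedges_g(*s) for s in studies))
print(round(random_effects_mean(gs, vs), 2))  # pooled g for the toy data
```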

Results

The findings of the meta-analysis of the effects of classroom technology integration in higher education on student achievement outcomes are presented first overall and then by individual research sub-question, as outlined earlier. More detailed information regarding each of the follow-up meta-analyses can be found in the respective publications.

Overall findings

The overall random-model results of the [24] study are shown in Table 1. The total of 879 effect sizes produced a weighted average effect size of 0.27 that was significantly greater than zero. The collection is significantly heterogeneous, based on findings from the fixed effect model, where heterogeneity is tested in terms of the magnitude of Q-Total (i. e., total between-study variability). An effect size of 0.27 is considered small and represents a difference of 0.27 SD between the means of the treatment and control conditions, amounting to about an 11 % difference. These results suggest that technology-supported instruction is advantageous compared to either non-use or limited use, but that this advantage is relatively modest.

Table 1.

Overall weighted average effect size (Random Effects Model)

Population Estimates k g SE Lower 95th Upper 95th

Final Collection 879 0.27* 0.02 0.24 0.31

** Heterogeneity analysis: QT = 3,183.10 (df = 878), p < .001, I² = 72.42 %

* p < .01; ** Based on the fixed effect model for k = 879
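The heterogeneity figure in Table 1 can be reproduced from Q-Total and its degrees of freedom with the standard formula I² = (Q − df)/Q; a quick check:

```python
# Standard I-squared formula applied to the values reported in Table 1.
Q, df = 3183.10, 878
print(f"I^2 = {100 * (Q - df) / Q:.2f} %")  # 72.42 %, matching Table 1
```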

Major function of technology use

The results become differentiated when effect sizes are divided by pedagogical application. These results from [24] indicate that technology that is used to support student cognition outperforms all other categories, but especially presentational support (Table 2). This effect is interpreted as a difference primarily between "technology used by students" (for content understanding) and "technology used by teachers" (for content delivery). Other functions, such as support for communication and a mixture of cognitive and presentational support, fall in between these two.

Table 2.

Instructional moderator variable analysis: Major function of technology use

Levels of Technology Use k g Lower 95th Upper 95th QBetween

Cognitive Support (CS) 186 0.36 0.28 0.44

Presentational Support (PS) 113 0.15 0.07 0.23

Communication Support 27 0.24 0.12 0.35

Mixture (CS plus PS) 485 0.25 0.21 0.30

Between groups: Q = 13.28, df = 3, p = .004

Contrast: Cog. Supp. vs. Presentational Supp., z = 5.14, p < .0001
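The Q-Between statistics reported in Table 2 and the later tables test whether subgroup mean effects differ under a fixed effect model [6]. A minimal sketch follows; the subgroup standard errors here are back-derived from the confidence intervals in Table 2, which is an approximation on our part.

```python
def q_between(means, ses):
    """Fixed-effect Q statistic across subgroup mean effects [6]."""
    w = [1 / se**2 for se in ses]
    grand = sum(wi * mi for wi, mi in zip(w, means)) / sum(w)
    return sum(wi * (mi - grand)**2 for wi, mi in zip(w, means))

# Table 2 subgroup means; SEs approximated as (upper - lower) / (2 * 1.96).
means = [0.36, 0.15, 0.24, 0.25]
ses = [(0.44 - 0.28) / 3.92, (0.23 - 0.07) / 3.92,
       (0.35 - 0.12) / 3.92, (0.30 - 0.21) / 3.92]
print(round(q_between(means, ses), 1))  # ~13.3, close to the reported 13.28
```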


Blended learning

The effects of blended learning (i. e., partly in class and partly online) were derived from the [24] database and analyzed and reported in [3]. As evident from Table 3, the random-effects weighted average effect size (g = 0.334, k = 117) is larger than the overall average effect of technology use in the original meta-analysis and is in line with the findings from the other meta-analyses of blended learning [21, 26]. Apparently, there is an advantage that accrues from balancing face-to-face instruction with online learning outside of class. The mechanisms of this effect have not been determined, so one of the challenges of future educational technology research will be to tease out the effects of variables such as the amount of time devoted to each pattern of instruction, the most effective learning strategies, and the teacher's role in the online portion of blended learning.

Table 3.

Weighted average effects for blended learning


Analytical Models k g SE Lower 95th Upper 95th

Random Effect Model 117 0.334* 0.04 0.26 0.41

Fixed Effect Model 117 0.316** 0.02 0.28 0.36

Heterogeneity: Q-Total = 372.91 (df = 116), p < .001; I² = 68.89 %; τ² = 0.11

* z = 8.62, p < .001; ** z = 15.68, p < .001.

Interaction treatments

Interaction treatments were defined by [4] as instructional setups in distance education that are intended to facilitate and promote interaction among students, between students and teachers, and between students and the content. This definition was used to code studies in the [24] meta-analysis in classroom settings. Table 4 shows the results of this basic analysis. Conditions where the potential for interaction was greater in the treatment group than in the control condition produced results that were significantly higher than when the control group was higher in the potential for interaction. As expected, when the two conditions were roughly the same (g = 0.29, k = 703), the outcome was not significantly different from the former condition (g = 0.34 vs. g = 0.29).


Table 4.

Mixed effects analysis of the degree of student-student interaction

Levels k g SE Lower 95th Upper 95th QBetween

Equal in control and experimental groups 703 0.29 0.02 0.25 0.33

Control group higher 127 0.16 0.04 0.07 0.24

Experimental group higher 48 0.34 0.07 0.20 0.48

Between groups: Q = 8.93, df = 2, p = .012

** p < .01

When the interaction treatments were divided further by the presence or absence of design intention, the results strongly supported designed interaction treatments over contextual interaction treatments (Table 5). It appears that merely providing the means for student-student interaction is not enough as a pedagogical strategy. Some form of instructional design is needed (e. g., collaborative learning, reciprocal teaching).

Table 5.

Mixed effects analysis of designed and contextual interaction treatments

Levels k g SE Lower 95th Upper 95th QBetween

Designed treatments 31 0.46* 0.01 0.32 0.60

Contextual treatments 17 0.07 0.02 -0.22 0.37

Between groups: Q = 5.41, df = 1, p = .02

* p < .01

Technology use in education courses

In this final section, the effects of educational technology across subject areas were investigated. Because of our special interest in teacher education (i. e., teaching future teachers), these studies were broken out from the other Non-STEM content areas, and their average effect size was compared with that of STEM (i. e., Science, Technology, Engineering, Mathematics) and the remaining Non-STEM subject areas. The results, shown in Table 6, revealed nearly equal average effects for STEM and for Non-STEM minus education, but a noticeable difference between these areas and technology used in teacher education.

Table 6.

Moderator variable analysis for subject matter (STEM vs. Non-STEM vs. Education)

Levels of Subject Matter k* g Lower 95th Upper 95th QBetween

STEM Subjects 356 0.28 0.23 0.33

Non-STEM Subjects 454 0.25 0.20 0.30

Teacher Education 66 0.45 0.32 0.58

Between groups: Q = 7.78, df = 2, p = .02

* Three cases of unidentified subject matter or a mixture of several subject matters were removed from the analysis (k = 876)

To investigate further what might account for such a high average effect for teacher education, studies were further classified according to the pedagogical frameworks underlying the use of technology in these studies (Table 7). Only studies from the Education sub-group with a clear indication of a particular pedagogical approach to instruction and the associated use of technology were included. This reduced the number of effect sizes from k = 66 to k = 39. Although this reduction somewhat limited the power of subsequent analyses, using technology to provide feedback to students resulted in an unusually high average effect size of g = 0.75 (k = 11). Two other instructional approaches to technology use were also prominent: multimedia theory and problem-based learning each produced average effects of around g = 0.50. Notably, the effect of using technology to support collaborative learning (i. e., group projects) was very low (g < 0.10) and not significantly greater than zero. These findings may arise from the peculiarities of teaching future teachers, as compared with teaching the content of other STEM and Non-STEM subject areas, but they do suggest the need for further exploration of the range of technology uses in education, as well as in allied areas such as nursing education.


Table 7.

Moderator variable analysis: Type of pedagogical (conceptual) framework

Framework k* g Lower 95th Upper 95th QBetween

Collaborative Learning 6 0.06 -0.24 0.35

Feedback Strategies 11 0.75 0.38 1.12

Information Processing 5 0.26 -0.17 0.70

Multimedia Theory 8 0.46 -0.10 1.01

Problem-Based Learning 9 0.56 0.16 0.96

Between groups: Q = 9.71, df = 4, p = .046

Discussion

Overall results. Overall, these results demonstrate a consistent message concerning the value of technology use in higher education. Whether or not these results could have been achieved through other means, as [11] has claimed, is rather a moot point in our view. For better or for worse, technology is with us for the long haul and we as a profession and as professional researchers are obligated to analyze and investigate it, and to make certain that the use of it for pedagogical purposes achieves maximum learning benefits.

Cognitive versus presentational support tools. Based on the analyses of the purposeful use of technology, we see where Clark's original assertions about the passive nature of technology may have arisen. Technologies prior to the 1980s and well into the 1990s were indeed passive because their primary purpose was to convey content to students, either through teachers' use of presentational software or as a consequence of computer-aided instruction (CAI). Feedback, of course, was present in these stand-alone technologies, but it often amounted to simply providing the "right answer" without elaboration [2]. Not until the advent of personal computers, and especially of software and online tools that "work with" the student in the learning process, have we seen changes in technology use that educators could not have envisaged prior to 1980, when much of Clark's work was done.

The overall message emerging from these data is that learning is best supported when the student learns through meaningful activities via technological tools that provide cognitive support during the process. However, we are a long way from understanding more specifically how to design effective cognitive support tools and when precisely to integrate them into instruction. We encourage vigorous research programs to help bridge our knowledge gaps in these areas.

Blended learning in higher education. For the longest time, there were two primary venues for learning in higher education - classroom instruction (with or without technology support) and distance education, now often referred to as online learning. While technology used to support classroom instruction is beneficial but optional, it is a requirement when students and teachers are in separate locations, often working asynchronously (in the correspondence education era, the post office was a technology, of sorts). We are now able to marry these two environments, providing students with some time in classrooms and some time online outside of classrooms. The results of studies of these so-called blended or hybrid learning experiences are encouraging [3, 21, 26], but we are still unable to predict with confidence which variables are most influential (e. g., the instructional strategy applied in each context) and which are trivial (e. g., the time spent in each pattern), or how to design effective blended learning given the myriad circumstances that can arise under various conditions. We must encourage research in this domain, as it may turn out to be of immense value to university students, partly because it encourages a pattern of face-to-face and online work that they will encounter throughout their future careers.

Collaborative interaction treatments. In this meta-analysis we have identified a distinction between the intentional, designed use of technology in higher education classrooms and the provision of technology without such explicit intention [7]. The results provide educators with specific guidance about what works and what doesn't in the domain of education, especially as it relates to technology-based student-student interaction. This part of the meta-analysis reaffirms the effectiveness of computer-supported collaborative learning (CSCL) from the perspective of technology use and the design of instruction that aims to support and amplify interaction. Once again, pedagogy and specific instructional design take precedence over the contextual communicative benefits of modern educational technology.

Technology use in teacher education. The current analysis has moved one step beyond responding to the general question of whether technology works or not. Findings have confirmed previous results and provided meaningful insights with regard to specific pedagogical approaches that are successful in improving student performance. The general analyses were in line with the findings of the overall meta-analysis [28], indicating the importance of cognitive support tools for successful learning. In addition, the results further suggested that moderate intensity and complexity of technology use works better than oversaturation.

The current meta-analysis provides some input regarding pedagogical strategies that work better for educating pre-service teachers. In particular, the provision of adequate and specific feedback to students in technology-supported environments greatly increases the impact of technology use on student performance. The resulting average effect size of 0.75 translates into a 27-percentile gain for the average student in the experimental group compared to those in the control group. The gains are significant and the implications are clear: instructors need to incorporate feedback into their technology-enhanced instruction. Another pedagogical approach for the successful use of technology in educational contexts is problem-based learning (PBL), where the average effect size of 0.56 indicates a 21-percentile gain in student performance.
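These percentile translations follow from the standard normal CDF, assuming the common U3 interpretation of the standardized mean difference. A quick check of the arithmetic:

```python
import math

def percentile_gain(g):
    """Cohen's U3 minus 50: percentile points separating the average
    treatment-group student from the control-group median."""
    return 100 * (0.5 * (1 + math.erf(g / math.sqrt(2))) - 0.5)

print(round(percentile_gain(0.75)))  # 27 - feedback strategies
print(round(percentile_gain(0.56)))  # 21 - problem-based learning
print(round(percentile_gain(0.27)))  # 11 - the overall effect reported earlier
```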

Finally, one of the most interesting pieces of evidence offered by this study pertains to the importance of training in successful technology integration in post-secondary education. This is in striking contrast to the belief that the newer generation of students is so technologically savvy that they do not need training.

Concluding remarks

In conclusion, we would like to specifically highlight for the readers two particular outcomes of this entire series of meta-analyses that, in our view, are of the utmost importance for research and practice:

1. Technology alone, no matter how advanced, sophisticated, and fashionable, hardly works beyond its "novelty effect" in the absence of the other operative consideration - "educational." Well-thought-through instructional design and effective pedagogical strategies (e. g., interaction treatments designed for collaborative work, elaborate feedback, in-time technology training for teachers and students) provide the substantial value-added that transforms technology (i. e., whichever one we care to adopt) into technological tools that are advantageous for teaching and learning.

2. Blended learning, which supposedly combines the best qualities of face-to-face and online instruction, appears to be a viable teaching/learning option for applying educational technology to achieve its maximum benefits for learning. Nevertheless, this promise is still to be further substantiated by both primary research and meta-analytical reviews. Beyond the practical advantages of blended learning, we need to know how to combine the best of the face-to-face world with the best of the online world.

References

1. Anderson T. Getting the mix right again: An updated and theoretical rationale for interaction. International Review of Research in Open and Distance Learning, 2003, 4 (2), pp. 9-14. Retrieved from: http://www.irrodl.org/index.php/irrodl/article/view/149/230

2. Azevedo R., & Bernard R. M. A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 1996, 13 (2), pp. 109-125.

3. Bernard R. M., Borokhovski E., Schmid R. F., Tamim R. M., & Abrami P. C. A meta-analysis of blended learning and technology use in higher education: From the general to the applied. Journal of Computing in Higher Education, 2014, 26 (1), pp. 87-122. http://dx.doi.org/10.1007/s12528-013-9077-3

4. Bernard R. M., Abrami P. C., Borokhovski E., Tamim R., Wade A., Surkes M., & Bethel E. A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 2009, 79 (3), pp. 1243-1289. http://dx.doi.org/10.3102/0034654309333844

5. Borenstein M., Hedges L. V., Higgins J. P., & Rothstein H. Comprehensive Meta-Analysis (Version 2.2.048) [Computer software]. Englewood, NJ: Biostat, 2008.

6. Borenstein M., Hedges L. V., Higgins J. P., & Rothstein H. A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 2010, 1, pp. 97-111. http://dx.doi.org/10.1002/jrsm.12

7. Borokhovski E., Tamim R. M., Bernard R. M., Schmid R. F., & Sokolovskaya A. Does educational technology work better when designed for collaborative learning? Paper presented at the annual meeting of the American Educational Research Association (April). San Francisco, CA, 2013.

8. Borokhovski E., Tamim R. M., Bernard R. M., Abrami P. C., & Sokolovskaya A. Are contextual and designed student-student interaction treatments equally effective in distance education? A follow-up meta-analysis of comparative empirical studies. Distance Education, 2012, 33 (3), pp. 311-329. http://dx.doi.org/10.1080/01587919.2012.723162

9. Bruffee K. Collaborative learning. Baltimore, MD: Johns Hopkins University Press, 1993.

10. Carpenter C. R., & Greenhill L. P. An investigation of closed-circuit television for teaching university courses (Report 1). University Park, PA: The Pennsylvania State University, 1955.

11. Clark R. E. Reconsidering research on learning from media. Review of Educational Research, 1983, 53, pp. 445-459.

12. Cobb T. Cognitive efficiency: Toward a revised theory of media. Educational Technology Research & Development, 1997, 45 (4), pp. 21-35. http://dx.doi.org/10.1007/BF02299681

13. Cooper H., & Koenka A. C. The overview of reviews: Unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. American Psychologist, 2012, 67, pp. 446-462. http://dx.doi.org/10.1037/a0027119

14. Hattie J., & Timperley H. The power of feedback. Review of Educational Research, 2007, 77 (1), pp. 81-112. doi: 10.3102/003465430298487

15. Hedges L. V., & Olkin I. Statistical methods for meta-analysis. Orlando, FL: Academic Press, 1985.

16. Johnson D. W., & Johnson R. T. Cooperation and the use of technology. In J. M. Spector, M. D. Merrill, J. V. Merrienboer, & M. P. Driscoll (Eds.), Handbook of Research on Educational Communications and Technology (3rd ed.). New York, NY: Lawrence Erlbaum Associates, 2008.

17. Jonassen D. H. Computers as cognitive tools: Learning with technology, not from technology. Journal of Computing in Higher Education, 1995, 6 (2), pp. 40-73.

18. Jonassen D. H., Howland J., Moore J., & Marra R. M. Learning to solve problems with technology: A constructivist perspective. Columbus, OH: Merrill/Prentice-Hall, 2003.

19. Kozma R. Learning with media. Review of Educational Research, 1991, 61, pp. 179-221.

20. Laurillard D. Rethinking university teaching: A framework for the effective use of educational technology (2nd ed.). London: Routledge, 2002.

21. Means B., Toyama Y., Murphy R. F. & Baki M. The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 2013, 115 (3), pp. 1-47.

22. Moore M. G. Three types of interaction. American Journal of Distance Education, 1989, 3 (2), pp. 1-7.

23. Ross S. M., Morrison G. R., & Lowther D. L. Educational technology research past and present: Balancing rigor and relevance to impact school learning. Contemporary Educational Technology, 2010, 1, pp. 17-35. Retrieved from http://www.cedtech.net/articles/112.pdf

24. Schmid R. F., Bernard R. M., Borokhovski E., Tamim R. M., Abrami P. C., Surkes M. A., Wade C. A., & Woods J. The effects of technology use in postsecondary education: A meta-analysis of classroom applications. Computers & Education, 2014, 72, pp. 271-291. http://dx.doi.org/10.1016/j.compedu.2013.11.002

25. Sitzmann T., Kraiger K., Stewart D., & Wisher R. The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology, 2006, 59 (3), pp. 623-644. http://dx.doi.org/10.1111/j.1744-6570.2006.00049.x

26. Spanjers I. A. E., Konings K. D., Leppink J., Verstegen D. M. L., de Jong N., Czabanowska K., & van Merrienboer J. J. G. The promised land of blended learning: Quizzes as a moderator. Educational Research Review, 2015, 15, pp. 59-74. http://dx.doi.org/10.1016/j.edurev.2015.05.001

27. Tamim R. M., Bernard R. M., Borokhovski E., Abrami P. C., & Schmid R. F. What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 2011, 81 (3), pp. 4-28. http://dx.doi.org/10.3102/0034654310393361

28. Tamim R. M., Borokhovski E., Bernard R. M., Schmid R. F., Abrami P. C., & Sokolovskaya A. Technology use in teacher training programs: Lessons learned from a systematic review. A paper presented at the AERA 2014 annual meeting (April), Philadelphia, PA, 2014.
