
E-Learning in Theory, Practice, and Research

Maria Janelli
Senior Manager of Online Teacher Education Programs at the American Museum of Natural History; Ph.D. Fellow at the City University of New York. Address: 200 Central Park West, New York, NY 10024, USA. E-mail: mjanelli@amnh.org

Received in July 2018

Abstract. This article presents three intersecting aspects of e-learning: theory, practice, and research. It begins with a review of the major theoretical frameworks to date—behaviorism, cognitivism, constructivism, digital media theory, and active learning theory—to demonstrate the ways in which e-learning is both similar and dissimilar to traditional modes of learning. The article then turns to a practical case study of e-learning, a Massive Open Online Course (MOOC) created by the American Museum of Natural History and hosted on the Coursera platform. The case study demonstrates both how learning theory affords a template to guide MOOC creation, and how MOOC platforms can be a laboratory for e-learning instructional design. The article concludes with an example of e-learning research, demonstrating the importance of synergy among theory, practice, and research.

Keywords: learning management systems (LMS), MOOCs, e-learning, learning design, student success, behaviorism, cognitivism, constructivism, digital media theory, active learning theory, scholarship on teaching and learning, assessment, feedback.

DOI: 10.17323/1814-9545-2018-4-81-98

The term e-learning is a source of controversy and debate among scholars and practitioners alike [Andrews 2011]. Depending on whom you ask, e-learning is a buzzword, fad, teaching strategy, or a pedagogy unto itself. In the following paper, I address three aspects of e-learning. The first section situates e-learning within theoretical frameworks. The second describes e-learning in practice—specifically an e-learning initiative at the American Museum of Natural History (AMNH) in New York City. In the final section, I present a research study that will contribute to the e-learning body of knowledge. My purpose is to assert the importance of research-based practices for those who design, develop, and implement e-learning resources.

E-learning: Theory

At its most basic level, e-learning is the use of technology for teaching and learning [Mayes, Freitas 2005]. A more refined definition is that e-learning is the use of any electronic media in the service of all aspects of teaching and learning, both online and offline [Andrews 2011; Koohang et al. 2009]. Pange and Pange [2011] posit that e-learning is even more specific, that it builds knowledge and increases the quality of learning by transmitting content and instruction via the internet.

Effective e-learning is structured to provide resources and support for students. There are many types of e-learning applications, including blogs, wikis, online discussion boards, online games and simulations, online courses offered within learning management systems (LMSs), massive open online courses (MOOCs), tablet apps, and a host of others. Despite the countless free and commercial e-learning resources available, many are not grounded in formal and empirical understandings of best practices regarding how students are taught, how content is delivered, and how the technology interface is designed [Pange, Pange 2011]. Similarly, e-learning applications, and online learning in particular, often are not grounded in educational theory [Mayer 2015]. We must change this. E-learning design and development should be grounded in theoretical frameworks and empirical findings so that good instructional design principles can be applied to teaching and learning [Mayer 2015; Mayes, Freitas 2005] and, equally importantly, so that scholars and researchers have a common vocabulary and understanding from which to conduct research on the effectiveness of e-learning applications, resources, and interventions.

To date, there is no unified theory of e-learning. Many scholars agree that existing theories of learning can be combined, modified, and/or directly applied to e-learning [Pange, Pange 2011]. Of these existing theories, cognitivism and constructivism are most frequently applied to e-learning development and instruction. Behaviorism, digital media theory, and active learning theory are also applied, though less often. Some scholars, however, contend that e-learning requires a new learning theory. Let us explore these possibilities, starting with cognitivism.

Cognitivists posit that learning is an internal process involving thought, memory, reflection, motivation, and metacognition [Modritscher 2006]. Information is received through different senses, processed by working memory, which is limited, and then transferred to long-term memory, which is unlimited [Burke 2013; Modritscher 2006; van Merrienboer, Ayres 2005]. Long-term memory organizes complex material into schemas that reduce the load on and extend the capacity of working memory. Working memory can be affected intrinsically (by the nature of the content) and extraneously (by how the content is presented) [van Merrienboer, Ayres 2005]. Cognitive overload occurs when so much material is presented that it cannot be processed by working memory and transferred to long-term memory. A problem with educational technology and e-learning is that much of it increases rather than decreases the likelihood of cognitive overload [Burke 2013]. This issue is addressed when cognitivism is the theoretical foundation on which e-learning applications are developed.

Several scholars have developed cognitivist approaches to educational technology. Among these is Richard Mayer. Dubbed "the father of the science of e-learning" [Mayer 2015], Mayer has put forth a cognitive theory of multimedia learning, the goals of which are to reduce extraneous cognitive processing; manage essential cognitive processing (processing required to comprehend the material); and support generative processing (deep processing needed to organize and integrate the material). Through decades of empirical research and hundreds of experiments, Mayer has identified twelve principles for reducing the cognitive load of multimedia material by organizing and presenting information to students in a way that optimizes their ability to process the material in their working and long-term memory [Ibid.].

Like Mayer, Modritscher [2006] and van Merrienboer and Ayres [2005] are proponents of a cognitivist approach to e-learning. Van Merrienboer and Ayres [2005] note that many online learning tasks are complex and include interacting elements that must be processed by working memory. Even if one were to address the issue of cognitive load in the content, the interactive nature of the task itself may present a cognitive load so demanding that it poses a barrier to learning.

Van Merrienboer and Ayres [2005] and Modritscher [2006] offer suggestions similar to Mayer's principles for reducing the cognitive load of interactive e-learning tasks. Together, their guidelines create a blueprint for those who wish to use cognitivism as a theoretical framework to inform the design, development, and assessment of e-learning applications.

In addition to cognitive load theory, constructivism—the act of constructing new knowledge based on experience [Koohang et al. 2009]—is also applied to e-learning. In fact, constructivism is the theory used most often for e-learning [Pange, Pange 2011]. Constructivism in e-learning is present when students engage in active and/or interactive processes that promote collaboration. Additionally, students who engage in constructivist e-learning tasks have a degree of control over the learning process, usually in the form of instructor-guided discovery, or on-screen guided discovery, that culminates in student decision-making. Instructors who incorporate constructivism in their teaching include examples in their e-learning activities and provide opportunities for students to reflect on their work [Modritscher 2006].

In 2009, Koohang et al. put forth a constructivist approach to e-learning that has three core components: activities that include collaboration and cooperation, the adoption of multiple perspectives, real-world examples, self-reflection, scaffolding, self-assessment, and multiple representations of ideas; assessments that include instructor assessments, group assessments, and self-assessments; and instructor roles that include coaching, mentoring, acknowledging student work and effort, providing feedback, and assessing student learning. The authors subsequently expanded this model by identifying nine constructivist elements of e-learning, such as interdisciplinary learning, self-reflection, the use of real-world examples, and scaffolding to facilitate the Zone of Proximal Development [Koohang et al. 2009].

Constructivism in e-learning is not dissimilar from constructivism in traditional learning. Both provide students with opportunities to actively construct their own knowledge through experience, present information from a variety of perspectives, incorporate the facilitation of an expert or guide, and provide time and opportunities for students to develop metacognitive skills [Modritscher 2006]. However, constructivism in both traditional and e-learning is not without limitations. It takes considerable time and effort to create context-based content, and it takes even more time and effort to create content that aligns with individual learners' interests and experiences. Constructivism necessarily limits the degree to which a teacher can focus learners' attention in a particular direction, and in the absence of extrinsic motivators, students can lose interest in the activity. Finally, it is not always easy or possible to adequately evaluate student learning in constructivist situations. It is possible, however, for e-learning systems to automate some aspects of student assessment, removing the burden from the instructor.

There are three additional theories of learning that are applied to e-learning, though with less frequency than cognitive load theory and constructivism. The first is behaviorism. Behaviorism situates learning within the contexts of external or environmental stimuli. Knowledge is acquired through experiences and interactions with and within the world around us [Schunk 2012].

Behaviorists recommend that instructional designers take a structured approach to the development of e-learning materials. For example, all material should be broken down into smaller pieces or segmented tasks to make complex information and activities easier to understand. Another way to incorporate behaviorism into e-learning design is to give learners more control of the learning process by allowing them to choose the next steps in their learning sequence (watch a video or read text, etc.) [Modritscher 2006]. With a behaviorist framework, material should be organized in a sequence that becomes more difficult over time. As students master the initial content, more difficult material becomes available to them. Lastly, teachers or e-tutors should guide students by describing and/or modeling the task in discrete parts. This allows the learner to copy the guide's behavior [Modritscher 2006].
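To make this sequencing idea concrete, the following is a minimal sketch of mastery-based gating, in which more difficult material unlocks only after the current segment has been mastered. It is an illustration only; the 80% threshold, module names, and function are assumptions, not drawn from any particular platform.

```python
# Illustrative sketch of behaviorist mastery-based sequencing: a learner may
# advance to more difficult material only after mastering the current segment.
# The 80% threshold and module names are assumptions for illustration.
MASTERY_THRESHOLD = 0.80

MODULES = ["basics", "intermediate", "advanced"]  # ordered by difficulty

def next_available_module(scores):
    """Return the first module the learner has not yet mastered."""
    for module in MODULES:
        if scores.get(module, 0.0) < MASTERY_THRESHOLD:
            return module
    return None  # all modules mastered

# Example: the learner has mastered "basics" but not "intermediate",
# so "intermediate" is unlocked and "advanced" remains hidden.
print(next_available_module({"basics": 0.9, "intermediate": 0.6}))
```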

The remaining two theories that can be applied to e-learning are mentioned briefly in the literature. A digital media theory approach to e-learning focuses on the variety of media formats available for teaching and learning. This focus is evocative of Marshall McLuhan's "the medium is the message" [McLuhan 2003: 23] in that the emphasis is on hardware (computers, hand-held devices, recording devices, etc.), not software or content. Additionally, digital media theory examines the important issues of access and accessibility [Andrews 2011], which are not critical to either cognitivism or constructivism.

Finally, activity theory and active learning theory can also be applied to e-learning [Mayes, Freitas 2005; Pange, Pange 2011]. Active learning is any instructional strategy that engages learners in educational processes. This increased student activity can lead to better understanding of the content [Pange, Pange 2011]. Gamification, for example, is one popular way to increase student motivation that incorporates active learning theory and could be delivered via e-learning.

Despite the successful application of existing learning theories to e-learning, the question remains: does e-learning require a theory of its own [Andrews 2011]? Pange and Pange [2011] and Siemens [2005] contend that the problem with existing learning theories is that they were developed before education was infiltrated by electronics, the internet, software, computers, and electronic media. These critical components of e-learning, which have become ubiquitous in many schools and classrooms, have thus been excluded from traditional theories of learning. Furthermore, e-learning—the term itself—suggests that it is distinct from traditional learning and thus could benefit from its own theory. Lastly, to keep up with changes in technology development, e-learning is necessarily dynamic and ever-evolving. Existing learning theories do not adequately capture this dynamism [Andrews 2011].

Andrews [2011] has suggested that a new theoretical approach to e-learning is needed because e-learning differs from traditional face-to-face learning. He notes that e-learning happens in communities that are significantly different from traditional learning communities. For example, e-communities gather and communicate via social network sites, virtual learning environments, learning management systems (LMSs), email groups/lists, chat rooms, video chat interfaces, and more. Unlike traditional communities, these communities function regardless of individuals' locations, and they can be much larger than traditional learning communities. When motivated e-learners are isolated, they tend to make extra efforts to communicate with others and establish themselves as members of the learning community.

Like e-learning communities, e-learning practices also differ from traditional learning practices. E-learning allows students to participate in special-interest e-groups, subscribe to e-journals, conduct research quickly within databases and digital archives, communicate via email with classmates and instructors, create blogs, participate in online discussions, and much more [Andrews 2011]. The breadth of these activities is simply unavailable in traditional teaching and learning.

Yet another way in which e-learning is distinct from traditional learning is through student agency. Andrews [2011] posits that the digitization of text gives students greater agency, as digital text can easily be changed or manipulated into other works. Related to the digitization of text is that e-learning creates a less hierarchical social structure of education. In traditional education, conversation between scholars happens in print. The exchange of information, ideas, and discoveries is a formal and slow process to which most learners cannot contribute. E-learning levels this playing field. When texts are digitized, they become more accessible to learners, more easily critiqued, and more easily integrated into e-learning projects, processes, and activities. In this way, knowledge continually changes and develops as a result of the social practice of deconstructing and reconstructing digital texts. This evolution of knowledge is not possible in traditional, hierarchical teaching and learning practices [Andrews 2011].

These features of e-learning that distinguish it from traditional learning suggest that e-learning requires a new theory [Ibid.]. Siemens agrees, but for a different reason. He states that existing learning theories fail to consider external learning that is "stored and manipulated by technology" [Siemens 2005: 5] and learning within the context of organizations. Therefore, a theory of learning appropriate for the digitally saturated world in which we live must explicitly acknowledge connections among people, institutions, and technology. He articulates a theory called connectivism to fill this void in the literature: "Connectivism is the integration of principles explored by chaos, network, and complexity and self-organization theories… Learning (defined as actionable knowledge) can reside outside of ourselves (within an organization or a database), is focused on connecting specialized information sets, and the connections that enable us to learn more are more important than our current state of knowing" [Ibid.: 7]. Connectivism shifts learning from an internal to an external activity, and from what one knows in the present to what one is able to learn in the future.

The perspectives of Andrews [2011] and Siemens [2005] highlight the discord that exists among scholars about theories of e-learning. Though scholarly consensus is elusive, one thing is certain: more research is necessary. The existing body of e-learning research is saturated with studies about strategies, social contexts, and instructional design. Most of these studies are either descriptive or ethnographic [Andrews 2011]. Very few theoretical papers exist [Andrews 2011; U. S. Department of Education, Office of Planning, Evaluation, and Policy Development 2009]. Unless this changes, researchers and practitioners alike will continue to be tempted by the seduction and shine of new technologies instead of focusing on understanding and communicating how learning and cognition are most affected by educational technology [Burke 2013]. When it comes to e-learning, we must shift from good intentions to learning theories, learning outcomes, and empirical evidence [Mayer 2015]. The following case study is an examination of one institution's effort to contribute to this shift.

E-learning: Practice

Founded in 1869, the American Museum of Natural History in New York City is among the world's most renowned scientific, educational, and cultural institutions. The Museum's mission is to discover, interpret, and share information through research, exhibitions, and education. With more than 33 million objects in its collections, this is both an exciting and challenging undertaking that is facilitated by the use of digital experiences. Indeed, AMNH has a decades-long track record of creating award-winning educational media and resources. From its OLogy science website for children to its Seminars on Science graduate courses for educators, AMNH has long been an e-learning innovator. When MOOC platforms emerged, it was natural for AMNH to create educational opportunities in that space as well.

With more than 150 institutional partners, more than 2,600 courses, and more than 31 million learners from around the world¹, Coursera is one of the leading MOOC providers². Coursera grew out of its founders' belief that the best courses from the best teachers at the best schools should be available to anyone anywhere in the world [TED 2012].

¹ Maggioncalda J. (2018) Keynote Address. Coursera Conference, Tempe, AZ.

² As the Museum's MOOC partner, Coursera is the focus of this paper. Other MOOC providers not represented here offer similar educational and research opportunities.

In 2013, the American Museum of Natural History partnered with Coursera on its inaugural Teacher Professional Development program. Through the Coursera platform, AMNH offers several online science courses designed with science teachers in mind. Each of the first three MOOCs created by AMNH has a science content component for a general audience and a science teaching component for science educators. These courses (about genetics, evolution, and the Earth) were—and continue to be—utilized by tens of thousands of people, with educators from around the world translating AMNH essays and videos into their native languages to be used in their own classrooms.

Though the Coursera Teacher Professional Development program has ended, the AMNH partnership with Coursera continues. To date, AMNH has designed and developed six science MOOCs. AMNH relies upon a collaborative team of instructional designers, learning science experts, scientists, writers, videographers, and graphic designers to create pedagogically sound and visually compelling online courses. During the past five years of MOOC production, this team has learned valuable lessons about creating online courses for the large and diverse audience of MOOCs. The following is a blueprint of the Museum's existing MOOC production process that may be useful to instructional designers who are just getting started with MOOCs.

• Course outline. Every MOOC produced by AMNH starts with an articulation of learning goals and a course outline. The outline lists all of the course content: syllabi, modules, essays, videos, quizzes, and related resources. Included in the outline is a note about whether the asset already exists or needs to be produced.

• Asset aggregation. AMNH MOOCs combine previously created content with brand new material. After a course outline is finalized, the production team determines which assets exist and which need to be added to the department's production schedule. Existing resources are gathered, and new essays are written.

• Video production. Video production is among the most labor-intensive and expensive parts of MOOC production. To create flexible content, the production team creates evergreen graphics and excludes dated words from scripts.

• Assessment creation. Multiple-choice quizzes are used in all of the AMNH MOOCs. Creating high-quality assessment questions is difficult and time-consuming. The team strives to ensure that all content addressed in quizzes is easily accessible in the course material, and that the lures (the incorrect answer options) are not too confusing. The goal is not for the quizzes to be a source of misinformation; rather, the goal is for each question to be an opportunity for learners to check their understanding. Once a quiz is published, we analyze the results periodically, revising and updating any quiz question with an average first-attempt correct response rate of less than 70% (a rough sketch of this analysis appears after this list). This access to real-time data and editing is one of the benefits of online education broadly, and the Coursera platform specifically.

• Course production and quality assurance testing. Every course is built and tested several weeks prior to the start date. Coursera staff also review the course to ensure that all links and grading formulas work properly. Once a course is live, learners have the ability to flag content that is incorrect. These oversights can be fixed and published immediately. In this way, part of the quality assurance process is crowd-sourced.

• Course communication. Each course has a series of custom emails that is scheduled to be sent to learners at the start of each week. These emails recap the previous week's content and remind learners about assessment deadlines and survey requests. They can motivate students to continue the course.

• Research and evaluation. In addition to the demographic data collected by Coursera, AMNH conducts a pre-course and post-course survey for each MOOC. Through these voluntary surveys, we learn about the age, sex, location, prior education, occupation, and learning objectives of the people who enroll in our courses. For example, we have learned that the majority of people who engage with AMNH MOOCs do not start the course with the intention of completing it. They come for the educational resources, not a certificate.
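To illustrate the quiz-revision rule described under Assessment creation above, the following is a minimal sketch, not the Museum's actual tooling, of how first-attempt correct-response rates per question might be computed and flagged against the 70% threshold. The data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical log of quiz responses; column names are illustrative only.
# Each row is one learner's attempt at one question.
responses = pd.DataFrame({
    "learner_id":  [1, 1, 2, 2, 3, 3],
    "question_id": ["q1", "q2", "q1", "q2", "q1", "q2"],
    "attempt":     [1, 1, 1, 1, 1, 2],
    "correct":     [True, False, False, True, True, True],
})

# Keep only each learner's first attempt at each question.
first_attempts = responses[responses["attempt"] == 1]

# Average first-attempt correct rate per question.
rates = first_attempts.groupby("question_id")["correct"].mean()

# Flag questions below the 70% revision threshold described above.
needs_revision = rates[rates < 0.70]
print(needs_revision)
```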

The production team is constantly iterating on this instructional design process; each MOOC is an opportunity to learn from the last one that we created and to inform the Museum's other online educational media production processes. Additionally, the MOOC portfolio and Coursera partnership create opportunities to measure and learn more about teaching and learning best practices, thus contributing empirical research to the body of educational technology knowledge.

E-learning: Research

One benefit of MOOCs is their potential for rigorous educational research. In addition to my role as the senior manager of online teacher education programs at AMNH, I am also a Ph.D. fellow studying educational psychology at the City University of New York (CUNY). I have the good fortune of using the expertise I have gained at the Museum to inform the research I am conducting as a graduate student. My dissertation is an experiment using an A/B/C/D design in which I use randomized testing in an AMNH MOOC to determine the effectiveness of tests and feedback for adult learners.

Though it is often associated exclusively with assessment, testing serves other purposes as well. For example, "testing has often been shown to be more effective than further study in encouraging retention of tested information" [Richland, Kornell, Kao 2009: 243]. Additionally, research indicates that testing-as-instruction can be just as effective as testing-as-assessment [Beckman 2008; Bjork, Storm, deWinstanley 2010; Kornell, Hays, Bjork 2009; Richland et al. 2009]. Educational psychology studies have found that pre-tests administered before instruction can help students encode important concepts that are then presented in detail in future lessons [Dunlosky et al. 2013]. Research also shows that the effectiveness of tests-as-instruction can depend on the feedback students receive after taking a test [Richland, Kornell, Kao 2009]. Most of the studies about pre-testing and feedback focus on K-12 or undergraduate populations in traditional face-to-face classrooms. Few, if any, studies include adult online learners as participants.

Building on these findings, my dissertation study is an experiment designed to identify the effects of pre-tests and feedback on learning outcomes in a five-week online science course for adults. A secondary component of this study is a pre-course self-efficacy survey which will be used to identify links between student self-efficacy, learning outcomes (post-test scores), and persistence (course completion).

The experiment is being conducted in one of AMNH's Coursera courses. The course has five modules. A pre-test is administered at the start of each module, and a post-test is administered at the end of each module. Pre-test and post-test scores will be compared to understand which treatment, if any, has a greater effect on learning outcomes.

Figure 1. The four course samples in the five-week course and the course material included for each

Control group: self-efficacy survey, post-tests (no pre-tests)
Sample one: self-efficacy survey, pre-tests with no feedback, post-tests
Sample two: self-efficacy survey, pre-tests with basic feedback, post-tests
Sample three: self-efficacy survey, pre-tests with detailed feedback, post-tests

The pre-test questions, post-test questions, and feedback were written by Dr. Debra Tillinger, an AMNH online educator. The pre- and post-test questions are not identical, but they address the same course themes. While Dr. Tillinger wrote the questions and feedback, I prepared the new course shells for the four samples.

The implementation is simple. When a student enrolls in the course, she is randomly assigned to one of the groups³.

Pre-test no feedback: Students randomly assigned to this version of the course receive a quiz score, but they don't know which questions they answer correctly/incorrectly.

Pre-test basic feedback: Students randomly assigned to this version of the course receive information about which pre-test questions they answer correctly/incorrectly, as well as their quiz score.

Pre-test detailed feedback: Students randomly assigned to this version of the course receive information about which pre-test questions they answer correctly/incorrectly, their quiz score, and detailed feedback for each pre-test question.

Control group: Students randomly assigned to this version of the course take the post-tests but no pre-tests.
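Random assignment itself is handled by the Coursera platform (see footnote 3); purely as an illustration of the branching logic behind these four conditions, the following sketch assigns a newly enrolled learner to one arm and records what that arm receives. The function and field names are hypothetical.

```python
import random

# The four experimental arms and the pre-test treatment each one receives.
# All arms take the self-efficacy survey and the post-tests.
ARMS = {
    "control":           {"pre_test": False, "pre_test_feedback": None},
    "no_feedback":       {"pre_test": True,  "pre_test_feedback": "score_only"},
    "basic_feedback":    {"pre_test": True,  "pre_test_feedback": "score_and_correctness"},
    "detailed_feedback": {"pre_test": True,  "pre_test_feedback": "score_correctness_and_explanations"},
}

def assign_arm(learner_id):
    """Hypothetical stand-in for the platform's random assignment."""
    arm = random.choice(list(ARMS))
    print(f"Learner {learner_id} assigned to arm: {arm}")
    return arm

# Example: one enrollment.
arm = assign_arm("learner-001")
config = ARMS[arm]
print(config)
```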

This study is designed to address the following questions: Does taking a pre-test at the start of an online learning module prime adult students to learn key concepts? Does question-level feedback moderate the effect of the pre-test in an online learning module for adults? Does hiding the pre-test results moderate the effect of the pre-test in an online learning module for adults?

³ Random assignment is not easy to do in educational research, and it is a feature of the Coursera platform that makes MOOC research a compelling option for learning science scholars.

Figure 2. Screenshot of Postico, the SQL client used to query data from the exported course .csv files. The SQL query for this project was developed with the assistance of Dr. Neil Sarnak, who holds a Ph.D. in computer science from New York University.

In addition to an exploration of testing and assessment, this study provides the opportunity to learn about students through non-cognitive factors as well. One non-cognitive factor that will be explored is self-efficacy. Upon enrolling in the course, students will be asked to complete a pre-course survey. Survey questions were designed to address students' confidence in their ability to complete online courses; their confidence in their understanding of the course content; and their perceptions of and receptivity to feedback. Survey responses will be correlated with quiz scores and course completions to understand the relationship between these non-cognitive factors and learning outcomes.
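The dissertation analysis itself is conducted in SPSS, as described below; purely as a sketch of how such correlations might be computed once the survey and quiz data are combined, the following uses hypothetical variable names and made-up example values.

```python
import pandas as pd

# Hypothetical merged dataset: one row per learner, illustrative columns and values only.
data = pd.DataFrame({
    "self_efficacy":   [3.2, 4.5, 2.8, 4.9, 3.7],     # mean survey score (1-5 scale assumed)
    "post_test_score": [62.0, 88.0, 55.0, 91.0, 74.0],
    "completed":       [0, 1, 0, 1, 1],                # 1 = finished the course
})

# Pearson correlation between self-efficacy and post-test scores.
r_scores = data["self_efficacy"].corr(data["post_test_score"])

# Correlation between self-efficacy and the binary completion outcome.
r_completion = data["self_efficacy"].corr(data["completed"])

print(f"self-efficacy vs. post-test score: r = {r_scores:.2f}")
print(f"self-efficacy vs. completion:      r = {r_completion:.2f}")
```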

Participants include adult learners from all over the world who enroll in the MOOC. Data collection for this study began on January 8, 2018 and will conclude on December 24, 2018. The dataset will include quiz submissions and pre-course surveys from twelve offerings (one course offering every four weeks).

At the end of a course offering, I submit a request to Coursera for a student data export. This anonymized data is stored in 74 different tables on Coursera's servers and is exported from the platform in .csv files. I import the relevant tables into a SQL program in which custom queries are used to combine the exported data from the original tables into a single spreadsheet (see Figure 2). The dataset is then imported into SPSS for analysis.
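The actual joining is done with SQL queries in Postico, as noted above; the following pandas sketch illustrates the same step of combining exported tables into one flat file that can be handed to SPSS. The table names, columns, and values are hypothetical, not Coursera's actual export schema.

```python
import pandas as pd

# Hypothetical stand-ins for three of the exported tables.
users = pd.DataFrame({"learner_id": [1, 2], "arm": ["control", "detailed_feedback"]})
quizzes = pd.DataFrame({"learner_id": [1, 1, 2],
                        "quiz_id": ["m1_post", "m2_post", "m1_post"],
                        "score": [80, 90, 70]})
surveys = pd.DataFrame({"learner_id": [1, 2], "self_efficacy": [4.2, 3.1]})

# Join the tables on the anonymized learner identifier.
merged = (users
          .merge(quizzes, on="learner_id", how="left")
          .merge(surveys, on="learner_id", how="left"))

# Write a single flat file that can then be imported into SPSS.
merged.to_csv("combined_dataset.csv", index=False)
print(merged)
```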

This research, which is possible in part because of the unique affordances of Coursera's MOOC platform, contributes to the educational technology and e-learning landscape in several ways. First, it expands upon existing assessment and feedback research that focuses on traditional classrooms by focusing exclusively on online learning. Second, unlike many educational studies that focus on young learners, it examines a broad adult population, including all learners 18 years of age and older. Third, it is a global study, with participants from the United States, India, China, Russia, Germany, Pakistan, Canada, and many additional countries. The results of this experiment, which will be published in the spring of 2019, will help online education practitioners understand the effectiveness of both pre-tests and feedback.

This research study is just one example of how MOOCs and Coursera can be used to create quantitative research designs with random assignment that can inform the way practitioners create and conduct e-learning experiences. The results of this study will inform future MOOC work undertaken at AMNH and hopefully also at other institutions that produce online courses.

Summary

When it is done well, e-learning has many benefits. Unfortunately, theory, practice, and research don't often intersect, resulting in e-learning applications that can actually decrease learning outcomes. One way in which the theory/practice/research intersection can successfully occur is on a platform like Coursera. MOOCs can be created using one or more learning theories as the pedagogical foundation. These courses can be delivered to countless learners easily and quickly, and the real-time data and experimentation features available to course administrators can facilitate the development and execution of quantitative research designs. A single platform—a single course!—can be used to contribute empirical findings to the growing body of knowledge in the e-learning domain. It is my hope—and the hope of my colleagues at AMNH and CUNY—that the MOOC research study described herein can contribute meaningfully to that shared body of knowledge.

References

Andrews R. (2011) Does E-Learning Require a New Theory of Learning? Some Initial Thoughts. Journal for Educational Research Online, vol. 3, no 1, pp. 104-121.

Beckman W. S. (2008) Pre-Testing As a Method of Conveying Learning Objectives. Journal of Aviation/Aerospace Education & Research, vol. 17, no 172, pp. 61-70.

Bjork E. L., Storm B. C., DeWinstanley P. A. (2010) Learning from the Consequences of Retrieval: Another Test Effect. Successful Remembering and Successful Forgetting: A Festschrift in Honor of Robert A. Bjork (ed. A. S. Benjamin), New York, NY: Psychology Press, pp. 347-364.

Burke L. (2013) Educational and Online Technologies and the Way We Learn. International Schools Journal, vol. XXXII, no 2, pp. 57-65.

Dunlosky J., Rawson K. A., Marsh E. J., Nathan M. J., Willingham D. T. (2013) Improving Students' Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychological Science in the Public Interest, vol. 14, no 1, pp. 4-58.

Koohang A., Riley L., Smith T., Schreurs J. (2009) E-Learning and Constructivism: From Theory to Application. Interdisciplinary Journal of E-Learning and Learning Objects, no 5, pp. 91-109.

Kornell N., Hays M. J., Bjork R. A. (2009) Unsuccessful Retrieval Attempts Enhance Subsequent Learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 35, no 4, pp. 989-998.

Mayer R. (2015) Coursera Partners' Conference. Available at: https://www.coursera.org/learn/coursera-partners-portal/lecture/anwb6/richard-mayer-keynote-plenary (accessed 10 August 2018).

Mayes T., de Freitas S. (2005) Review of E-Learning Theories, Frameworks and Models. London: JISC e-Learning Models Desk Study.

McLuhan M., Gordon W. T. (2003) Understanding Media: The Extensions of Man. Critical Edition. Corte Madera, CA: Gingko Press.

Modritscher F. (2006) E-Learning Theories in Practice: A Comparison of Three Methods. Journal of Universal Science and Technology of Learning, vol. 28, pp. 3-18.

Pange A., Pange J. (2011) Is E-Learning Based on Learning Theories? A Literature Review. World Academy of Science, Engineering and Technology, vol. 5, no 8, pp. 56-60.

Richland L. E., Kornell N., Kao L. S. (2009) The Pretesting Effect: Do Unsuccessful Retrieval Attempts Enhance Learning? Journal of Experimental Psychology: Applied, vol. 15, no 3, pp. 243-257.

Schunk D. H. (2012) Learning Theories: An Educational Perspective. Boston, MA: Pearson HE, Inc.

Siemens G. (2005) Connectivism: A Learning Theory for the Digital Age. International Journal of Instructional Technology and Distance Learning, vol. 2, no 1, pp. 1-8.

TED (2012) Daphne Koller: What We're Learning from Online Education [Video File]. Available at: https://www.youtube.com/watch?v=U6FvJ6jMGHU (accessed 10 August 2018).

U. S. Department of Education, Office of Planning, Evaluation, and Policy Development (2009) Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. Available at: https://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf (accessed 10 August 2018).

Van Merrienboer J. J.G., Ayres P. (2005) Research on Cognitive Load Theory and Its Design Implications for e-Learning. Educational Technology Research & Development, vol. 53, no 3, pp. 5-13.
