
On business processes of computer-supported collaborative learning: A case of peer assessment system development1

Andrei I. Kolomiets

Student, Faculty of Computer Science National Research University Higher School of Economics Address: 20, Myasnitskaya Street, Moscow, 101000, Russian Federation E-mail: aikolomiets@edu.hse.ru

Olga V. Maksimenkova

Senior Lecturer, School of Software Engineering

Junior Researcher, International Laboratory for Intelligent Systems and Structural Analysis National Research University Higher School of Economics Address: 20, Myasnitskaya Street, Moscow, 101000, Russian Federation E-mail: omaksimenkova@hse.ru

Alexey A. Neznanov

Associate Professor, School of Data Analysis and Artificial Intelligence

Senior Researcher, International Laboratory for Intelligent Systems and Structural Analysis

National Research University Higher School of Economics

Address: 20, Myasnitskaya Street, Moscow, 101000, Russian Federation

E-mail: aneznanov@hse.ru

Abstract

Nowadays peer assessment is recognized as a crucial part of a wide range of active learning routines. Nevertheless, practitioners and educators point to the complexity and high resource cost of implementing this type of assessment. Undoubtedly, convenient software that supports peer assessment processes may substantially raise the productivity of its participants.

A review of educational literature and free software shows there are several bottlenecks in the business processes of peer assessment and in the key user roles. First, most of the programs examined are web-based and expand the set of tools for teachers and learners with extra interfaces. Moreover, this logically creates a new branch in the learning business process. Second, there is probably no peer assessment system which allows users to attach something other than the text to be reviewed. There is a gap in the market of free peer assessment software. This paper offers a peer assessment system specification that attempts to eliminate these disadvantages in order to improve user experience and thus increase the use of information technologies in peer assessment. The specification is based on a thorough description of the peer assessment process involving complex artifacts and double-blinded peer review. Software called PASCA (peer assessment system for complex artifacts) is introduced to illustrate the specification achieved. PASCA uses habitual e-mail services and does not affect other business processes. It supports standard features like blinding and randomization, and it provides a set of original features. These include evaluation of arbitrary artifacts, creation of complex peer review forms with their validation and scoring, and easy analysis of data from peer assessment sessions.

Key words: business process optimization, peer assessment, computer supported collaborative learning, active learning, educational software, randomization, blinding.

Citation: Kolomiets A.I., Maksimenkova O.V., Neznanov A.A. (2016) On business processes of computer-supported collaborative learning: A case of peer assessment system development. Business Informatics, no. 4 (38), pp. 35-46. DOI: 10.17323/1998-0663.2016.4.35.46.

1 The article was prepared within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE) and supported within the framework of a subsidy by the Russian Academic Excellence Project "5-100"

Introduction

Formative assessment has become established as a powerful, effective, and well-proven active learning approach in modern education [1, 2]. This type of assessment involves students in educational planning and provides them with criticism, which students may treat as a guideline or an algorithm for their next steps in learning. That is the reason why questions related to formative assessment are widely studied and discussed by active learning practitioners in various fields of knowledge [3-6].

Peer assessment is widely applied as an active learning technique. It is also well known as a powerful, and probably the most popular, means of formative assessment [7-9]. For the sake of clarity, in this work we define peer assessment as a learning procedure for evaluation in which students review each other's works, evaluate them according to previously formulated criteria, and provide feedback.

In fact, formative assessment has partially caused the evolution of collaborative and cooperative assessment techniques [7, 10] in both traditional and computer-based types of learning, such as blended learning and computer supported collaborative learning (CSCL). A number of peer assessment systems (PAS) have been introduced in different learning management systems (LMS), e-learning platforms and massive open online course (MOOC) platforms. Experience of applying PASs which support users' interaction during peer assessment has been documented in a great many academic works [3, 11, 12].

The authors have conducted a review of educational literature in order to summarize the results of these works. The review shows high interest in peer assessment from scientists and practitioners in different fields. The first group consists of educators at various levels who focus on descriptions of peer assessment processes, their validation and verification [7, 8, 13]. The second group is interdisciplinary and unites specialists who are engaged in educational software development (e.g., business analysts, programmers, designers). The specific interests of this group in the context of peer assessment are the analysis of educational business processes and their optimization, software requirements specification (SRS) design, and other questions of PAS development [14-16].

Despite the fact that business process analysis plays a great role in software construction and development, especially in such a complicated area as education, an explicit software requirements specification (SRS) for a PAS seems to be missing.

Educational processes at different levels are well studied and classified [17]. This paper aims to introduce an SRS for a modern PAS, which follows from the analysis and formalization of peer assessment processes.

1. On the place of peer assessment in educational business processes

By now, peer assessment as a form of formative assessment has a rather short but rich history. In different countries and knowledge areas, educators have conducted experiments and described studies connected with peer assessment implementation [2], efficiency [18], scaling, etc. Being interested in PAS development, in this section we summarize the works that are suitable for collecting software requirements and for understanding peer assessment processes.

Several review papers by the leading scientists in the field of education published between 1995 and 2015 were taken into consideration. It seems that almost all the research mentioned above has been reviewed and analyzed in detail from different points of view in these works.

In 1998, Topping [13] enriched the body of reviews about active forms of assessment with a review of peer assessment literature specialized in higher education. The review studies 109 research papers published between 1980 and 1996. Topping was probably the first to systematically review and generalize the results about the reliability and validity of peer assessment in higher education. Unlike the other reviews taken into consideration in this paper, his work underlines the significance and necessity of participants' matching and randomization within the peer assessment process. An important requirement arises from this result: a flexible peer assessment system should implement high-quality randomization algorithms.

Dochy et al. in 1999 reviewed the quantitative studies on active forms of assessment (self and peer assessment) and covered the period from 1987 to 1998 [8]. Based on the results of analysis of more than 60 research reports, the authors suggest guidelines on self- and peer assessment. They underline the great formative role of peer assessment and the significance of clear, predefined assessment criteria. Moreover, this work draws attention to the need to provide support to students during the assessment processes. Since the work does not focus on technical details, no method of automating the support is proposed.

A year later, in 2000, Falchikov and Goldfinch presented a meta-analysis of comparative studies conducted from 1959 to 1999 and concerned with comparing the marks gained from peer assessment and from a teacher [7]. A detailed analysis of 96 qualitative and quantitative studies confirms the broad applicability of peer assessment in different areas with students of different levels. In addition, the authors recommend that practitioners follow the design and the implementation reported in the study. Falchikov and Goldfinch also emphasize that only formative feedback is appropriate for peer assessment processes.

In 2010, van Zundert et al. [10] issued a complex review of peer-assessment efficiency. The review deals with 26 empirical studies selected from several databases and published between 1990 and 2007. The paper reports the high psychometric quality of peer-assessment procedures and generalizes the results of studies confirming a positive correlation between peer assessment and learning outcomes in different domains. The main advantage of the review is that van Zundert et al. confirmed the applicability of peer assessment to courses of different specializations [19]. The paper also cites a study describing a computing course integrated with a peer assessment system [6].

In the same year, Kollar and Fischer [20] introduced a review which concerned the cognitive facilities of peer assessment and also partly described peer assessment process modeling. These results allowed us to define the actors of a peer assessment process as an assessee and an assessor. The assessee sends his or her work to be evaluated, and the assessor evaluates the work received and gives formative feedback. In PASCA, instead of assessee and assessor we use, correspondingly, submitter and reviewer. Moreover, a student generally plays both these roles when participating in a peer assessment.

The next review valuable for our study, by Nulty [9], was published in 2011. The author introduced a representative body of literature which examined peer-assessment application in first-year courses. Though Nulty does not give any practical recommendations on using peer assessment in universities, he formed an academic basis for its unhampered application to first-year courses. This opens prospects for software adoption in first-year courses, which are relatively massive.

Summarizing all of the above, we may conclude that a large amount of work has been done in the field of peer assessment investigation and implementation. The body of literature presented above has already helped us in collecting requirements and defining the roles for PASCA. As an intermediate statement, the functional requirements should include sending and receiving feedback between participants and the administrator of a PA session, and randomization of reviewers.

2. Computer supported peer assessment: challenges and solutions

Over the years, computer-based PASs have made their way into the daily learning activities of different institutions. Evidently, such systems have become especially popular in computer programming education. In this field, among other advantages, peer assessment familiarizes students with practices used in the profession (e.g. code inspection, reflective practice).

Generally, most of the widespread PASs are either incorporated into an LMS already in use, or are web-based. These options guarantee that every participant in the PA process can easily access the system.

Most systems implement different grading algorithms based on weighting the marks awarded by different reviewers and/or the task assigner. Key differences comprise the provision of specific functionality, such as: anonymization; randomization; support for complex artifacts; means of communication between submitter and reviewer, feedback; conditional actions (e.g. informational messages based on deadline time or on the number of acquired submissions).
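To make the weighting idea concrete, the following minimal sketch combines reviewer marks with an optional teacher mark. The formula, weights, and function name are illustrative assumptions only; this is not the grading algorithm of any of the systems discussed below.

```python
# Illustrative sketch only: a weighted aggregation of peer marks,
# not the grading algorithm of any specific PAS mentioned in the text.

def aggregate_mark(peer_marks, teacher_mark=None, teacher_weight=0.5):
    """Combine peer marks (and optionally a teacher mark) into one score.

    peer_marks     -- list of numeric marks awarded by reviewers
    teacher_mark   -- optional mark awarded by the task assigner
    teacher_weight -- hypothetical weight of the teacher mark (0..1)
    """
    if not peer_marks:
        raise ValueError("at least one peer mark is required")
    peer_avg = sum(peer_marks) / len(peer_marks)
    if teacher_mark is None:
        return peer_avg
    return teacher_weight * teacher_mark + (1 - teacher_weight) * peer_avg

# Example: three peer reviews (average 8.0) combined with a teacher mark of 6
print(aggregate_mark([8, 7, 9], teacher_mark=6))  # -> 7.0
```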

Typical examples of PASs in use to date are presented in Table 1. Some of the mentioned systems are the outcomes of scientific studies that examined the PA process.

One of the first successful implementations of a computer-based PA assistant, NetPeas, was described by Lin, Liu and Yuan [19]. Among software-related studies, extensive research has been carried out on the practical advantages of PA during the educational process.

Another example of a web-based PA assistant is SPARK. It was introduced by Freeman and McKenzie in 2002 [21]. The system focuses on group projects with an assignment evaluated by each group member.

A similar approach is found in the WebPA system [15], which was developed at the Centre for Engineering and Design Education at Loughborough University. This system aims to reduce bias in reviews conducted by different students, by the teacher, and during self-assessment. A notable difficulty of integrating WebPA is the necessity to deploy a web-server installation to be accessed by PA participants. A similar installation process is required by MyPeerReview, the result of an investigation described by Hyyrynen et al. in 2010 [16]. The development was initiated due to interest in the PA process. Preliminary tests showed students' positive attitude towards the PAS and raised issues for future work.

As a way to free users from server-side installation work, Hamer and colleagues designed and support the web-based system Aropa [22]. This system is ready to be used via a web interface after registration. The system supplements education in several institutions worldwide.

Long-term research into studio-based interactive learning has led to the development of the Online Studio-Based Learning Environment (OSBLE), introduced in 2010 by Hundhausen et al. [14]. The study showed that a PAS improves students' involvement and the efficiency of the PA process. OSBLE is specialized in source code review and is specifically optimized for interaction with the Visual Studio IDE. This fact narrows its field of application.

Since the beginning of the 21st century, user expectations concerning software have risen substantially, as has the level of technologies used in development. Thus, it is important to investigate modern solutions for developing a PAS. For instance, the well-known LMS Moodle delivers a Workshop module that assists in a PA process without the need to install extra software or even switch to another tab in the web browser.

Current solutions exhibit a high barrier to entry: in order to use a PAS, teachers almost inevitably need to contact an IT department for a proper server installation. An LMS-based PAS lessens this problem if an LMS is already being used in the course; however, even module installation may require assistance.

Although modern technologies allow embedding links to complex objects of any type using cloud services, it is very convenient to be able to exchange files right in a PAS. Moreover, means of communication are especially vital in the case of double-blinded peer assessment. Table 1 allows us to compare the main properties of the solutions mentioned.

3. Application field analysis

3.1. Main concepts of a peer review process

Here we define Peer Assessment (PA) as an assessment procedure organized in the form of a randomized Peer Review (PR) of arbitrary artifacts treated as results of an assignment with previously formalized assessment criteria. The process of PA of a single assignment is called a PA session. The main roles in the PA process are:

1. Teacher — any organizer or manager of a PA process with full access to PA data objects.

2. Student — any trainee who participates in a PA session.

3. Initial Author — a student who was registered as a future Submitter in a PA session.

4. Submitter — a student who creates an artifact and submits it to a PA session.

5. Reviewer — a student who writes a review and sends a complete PR form to a PA session.

Table 1. Comparative table of some obtainable PASs

PAS | Ships as | Artifacts | Anonymization | Randomization | Communication and feedback | Extra features and comments
WebPA | Web-based, server deployment required, open-source | Not supported, artifact submission is to be held externally | - | - *) | Feedback, justification comments | Intelligent algorithmic mark calculation; powerful but complicated tool
MyPeerReview | Web-based, server deployment required | Submission should include external links | + | ? | ? | Current development status not clear
Moodle Workshop | Moodle LMS module | Complex | + | + | Private messages | Moodle provides wide extensibility
Aropa | Web-based | Text only | + | + | Submitter-reviewer dialogues | Supports review of reviewers
OSBLE+ | Visual Studio IDE plugin, web interface | Source code | + | + | Author rebuttal sessions, discussion feed | Inline code review, advanced rubric editor

*) Each group member grades all other members (by design)

Main PA data objects:

1. PA parameters — a set of formal parameters for current PA session.

2. PR form — a table that specifies the fields in a review for some type of artifact. Ideally, a PR form contains a clear description of the fields and supports basic value validation. It should also contain a text field called free comment for immediate unformalized feedback.

3. Submission — a complete artifact, submitted by a Submitter into a PAS as a result of the assignment in the current session.

4. Review — a complete PR form received from a Reviewer.

5. Feedback — additional information from a Student, different from Submission and Review.

Some remarks about the main concepts and terms:

1. In our case of PA, we suppose that the set of Reviewers is equal to, or a subset of, the set of Submitters in terms of set theory. It is easy to separate these sets for other schemes of PA.

2. We use the separate verbs Submit and Send/Broadcast to distinguish actions at the stage of collecting artifacts (submissions) and in several other situations.
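To keep the data objects above in one view, the following minimal sketch models them as plain data structures. The field names and types are our illustrative assumptions; PASCA itself stores these objects in Excel worksheets and e-mail attachments rather than in code like this.

```python
# Illustrative data model of the main PA objects; field names are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PRFormField:
    name: str
    description: str
    max_score: Optional[float] = None   # None for purely textual fields

@dataclass
class PRForm:
    form_id: int                        # unique 7-digit PR form ID
    fields: list[PRFormField] = field(default_factory=list)
    free_comment: str = ""              # immediate unformalized feedback

@dataclass
class Submission:
    submitter_id: int                   # unique 6-digit participant ID
    artifact_path: str                  # arbitrary artifact (file of any type)

@dataclass
class Review:
    reviewer_id: int
    submitter_id: int
    completed_form: PRForm
```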

3.2. Business processes of a PA Session

The five main stages of a PA session are described in Table 2. After analyzing the process of a PA session, we identified the phases in which automation of the teacher's work can be most efficient.

Table 2.

PA session stages

No. | Title | Initiation event | Finalization event
1 | Preparation | Create a PA session | Complete a PA session configuration
2 | Collecting submissions | Broadcast assignments to Initial authors | "Submission_end" deadline
3 | Collecting reviews | Broadcast artifacts to Reviewers | "Review_end" deadline
4 | Analysis of PR results | Gather final Reviews | "Result_message" deadline
5 | Session feedback (additional stage) | Send first feedback message | End of a course/education cycle

Figure 1 shows the PA processes from a Teacher's perspective using Business Process Model and Notation (BPMN 2.0). It is the most general representation, without technological details, i.e. without the PAS lane.

3.3. Main use cases

According to the BPMN diagram (Figure 1), we can populate the list of main PASCA use cases (from a Teacher's perspective, by the mentioned stages of a PA session).

1. Preparation stage:

a. Prepare an assignment (a task description file).

b. Prepare a PR Form with validation rules and assessment criteria (rubrics).

c. Prepare a source list of PA participants (Initial authors) and their e-mail addresses.

d. Fill in the parameters of a PA session and a schedule.

e. Anonymize participants and build a randomization scheme of PR with an initial mapping between Submitters and Reviewers.

2. Submissions collecting stage:

a. Broadcast a task description file (assignment) to the Initial authors.

b. Gather Submissions from the Initial authors.

c. [Optional] Remap Reviewers based on missing Submissions.

3. Reviews collecting stage:


a. Send PR Forms to the Reviewers.

b. Gather Reviews from the Reviewers.

c. Calculate final marks and check status of all PA participants.

4. PR results analysis stage:

a. Send results of the PA session to the Submitters (or all the Initial authors).

b. Gather additional feedback from the Submitters.

c. Build a final report of the PA session.

5. [Optional] Permanently available actions:

a. Check mailbox availability and working capacity.

b. Archive a mailbox and PA session data.

c. Check status of Submissions and Reviews.

d. Broadcast information letters and feedback.

3.4. Software requirements

Based on the weakest links identified during the analysis of PA processes and the review of existing PASs, the desired requirements are formulated.

Fig. 1. BPMN diagram of the main PAS processes (from the Teacher's perspective)

1. PAS should support:

a) complex artifacts as submission content (like source code, design documents, complete software projects, etc.);

b) complex PR forms, which should be easily updated and changed;

c) automatic PR form validation;

d) automatic tentative (coarse) assessment of Artifacts based on Reviews.

2. A Student should not have to learn to use any additional software (we suppose that nowadays everyone uses e-mail and an office suite).

3. A teacher should be able to tightly integrate PAS into the common IT infrastructure of a university.

Other functional requirements for the use cases listed above:

1. Importing lists of students (Initial authors) with emails from external sources.

2. Automated delivery of a task description (assignment) file converted into PDF format.

3. Support of various e-mail addresses for one student.

4. Basic anonymization of artifacts.


5. Blinding of participants to support a single- and double-blinded review process.

6. Randomization of reviewers.

7. Preparation of PR forms, independent from other PA activities.

8. Automated assessment via PR form processing after the review process.

9. Generating reports about submissions status, reviews status, and final assessment results.

Main non-functional requirements:

1. Use only standard Microsoft Office components.

2. Support any IMAP-compatible mailbox as a "server side".

4. Software design and functionality

PASCA was designed to use Microsoft Office 2010-2016 or Office 365 on a Teacher's computer. The Office suite components involved are Excel, Outlook, and Word. The optimal mail server is Microsoft Exchange (2010 or higher), though PASCA supports any IMAP-compatible mailbox. At present, the project has been fully tested in:

1. Microsoft Windows 7, 8.1 and 10 operating systems.

2. Microsoft Office 2013, 2016 and 365.

3. Mailboxes in Google (http://mail.google.com) and Yahoo! (http://mail.yahoo.com) free mail services.

The main metrics of the project are relatively low (about 2400 lines of source code), because most of the low-level tasks are implemented and executed by components of the Microsoft Office suite. We use Excel as an application host (Figure 2), the main data storage, and a report builder (the following illustrations in full resolution can be found at https://bitbucket.org/SiberianShaman/pasca/wiki/ScreenShots).

Fig. 2. Layout of the main worksheet in the PASCA workbook

PASCA focuses on the main scenarios of a PA process from a Teacher's perspective. The software architecture ensures a high level of reuse of PA materials (author lists, PR forms, reports).

4.1. PA session events and authentication of participants

A mail-based PAS evidently relies on an e-mail infrastructure [23]. PASCA uses a dedicated mailbox for communication with participants of a PA session. This mailbox (i.e., e-mail address) may be changed between sessions, but must remain constant throughout a session. All the auto-generated messages from this mailbox are signed by the "Peer Review Robot".

MS Outlook is used for all mailbox management tasks. This means that a Teacher needs to set up an Outlook account and specify the address in PASCA settings. This solution may be treated as a drawback, but the result is very handy because of useful additional tools available in Outlook. Options to change a mailbox account and a mailbox folder used for processing e-mail messages are provided.

There are two main types of events in PASCA: the sending and the delivery of an e-mail message, with timestamps generated by the e-mail system. It is significant that any student may use a secondary e-mail address at the submission stage in addition to the primary e-mail address that is fixed in an Initial authors list. The primary and secondary e-mail addresses of each participant are checked for compliance at the following stages of a PA session. All the data objects at the moment of the event are represented as attachments in the corresponding e-mail messages.
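PASCA itself manages the mailbox through Outlook. Purely to illustrate the "any IMAP-compatible mailbox" requirement, the sketch below shows how submission messages with attached artifacts could be fetched over IMAP; the host, credentials, folder, and subject convention are hypothetical.

```python
# Illustration only: fetching submission e-mails with attachments over IMAP.
# PASCA uses Outlook for this; the address, folder and subject tag are made up.
import email
import imaplib

HOST, USER, PASSWORD = "imap.example.org", "pa_robot@example.org", "secret"

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    # Hypothetical convention: submission messages carry a session tag in the subject
    _, data = imap.search(None, '(SUBJECT "PA-SESSION-2016-01")')
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            filename = part.get_filename()
            if filename:                      # an attached artifact of any type
                with open(filename, "wb") as f:
                    f.write(part.get_payload(decode=True))
```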

4.2. PA session workbook

Each PA session is represented by one Excel workbook. The first worksheet of the workbook is a list of Initial authors, the second contains the PA session parameters2, and the following worksheets hold a randomization scheme, the status of Submissions, the status of Reviews, and various reports. Thus, most of the data about a PA session are available in the PA session workbook, and all these data can be analyzed and visualized by standard Excel tools.

2 https://bitbucket.org/SiberianShaman/pasca/wiki/ScreenShots

We suppose that a Teacher has a list of students and their e-mail addresses. In rare cases, the Teacher should additionally check the correctness of this list. For example, we faced a problem with the letter "ё" of the Russian alphabet, which can be interchanged with "е" in a student's name. In the current version of PASCA this case is handled automatically.

4.3. Anonymization and randomization

Each Initial author — and, therefore, each Submitter and Reviewer — has a fixed random unique 6-digit identifier (ID) from the range [100000-999999], and each PR form has a random unique 7-digit ID from the range [1000000-9999999]. Thus, the ranges of participant IDs and PR form IDs do not overlap. After building a PA session workbook from an Initial authors list, all subsequent actions use these IDs.
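A minimal sketch of generating such unique, non-overlapping random IDs is given below; the sampling routine is our own illustration, while PASCA generates and stores the IDs in the session workbook.

```python
# Illustrative generation of unique, non-overlapping random IDs:
# 6-digit IDs for participants, 7-digit IDs for PR forms.
import random

def unique_ids(count, low, high):
    """Draw `count` distinct random integers from the range [low, high]."""
    return random.sample(range(low, high + 1), count)

participant_ids = unique_ids(58, 100_000, 999_999)      # e.g. 58 students
pr_form_ids     = unique_ids(3, 1_000_000, 9_999_999)   # e.g. 3 PR forms
```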

Several randomization schemes and algorithms were taken into consideration in an attempt to randomize Authors and Reviewers. First, a basic non-adaptive algorithm is used for standard randomization of all authors2. It is based on the classical Richard Durstenfeld permutation algorithm [24], with a check for non-equal submissions assigned to one Reviewer.

Second, a more sophisticated adaptive algorithm is used to achieve a uniform workload of the Reviewers; it takes missing submissions into account. We continue experiments on different randomization schemes.
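A minimal sketch of the non-adaptive scheme is shown below, under the simplifying assumptions that each Reviewer receives exactly one submission and must not receive their own; the retry loop and function name are our own illustration rather than PASCA's exact implementation.

```python
# Sketch of non-adaptive reviewer randomization based on the Durstenfeld
# (Fisher-Yates) permutation, with a check that no Reviewer is assigned
# their own submission. Assumes at least two authors and one submission each.
import random

def randomize_reviewers(author_ids):
    """Return a mapping reviewer_id -> submitter_id with reviewer != submitter."""
    while True:
        assigned = list(author_ids)
        # Durstenfeld shuffle: swap each position with a random earlier one
        for i in range(len(assigned) - 1, 0, -1):
            j = random.randint(0, i)
            assigned[i], assigned[j] = assigned[j], assigned[i]
        mapping = dict(zip(author_ids, assigned))
        if all(reviewer != submitter for reviewer, submitter in mapping.items()):
            return mapping

print(randomize_reviewers([111111, 222222, 333333, 444444]))
```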

4.4. Peer review form creation, validation and assessment

A peer review form (PR form) is the main data object from the Reviewer's point of view. A separate Excel file embodies a PR form that supports a multi-field review, complex assessment rules, and a validation scheme2. The first row of the first worksheet contains the PR form ID, a PA session textual ID (which helps a Reviewer to match the PR form and the assignment), and a command button that checks the validity of the values entered in the form.

The parameters page in a PA session workbook contains a link to the PR form used and the number of fields in the PR form.
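As a rough illustration of PR form validation and tentative scoring (requirements 1c and 1d above), the sketch below checks field values against a rubric and computes a weighted score. The rubric, field names, and weights are invented; in PASCA, analogous checks are attached to the command button inside the Excel PR form.

```python
# Illustration of PR form validation and tentative (coarse) scoring.
# The rubric and field names are invented for this example.

RUBRIC = {                     # field -> (min allowed, max allowed, weight)
    "correctness": (0, 10, 0.5),
    "code_style":  (0, 10, 0.3),
    "readability": (0, 10, 0.2),
}

def validate(review):
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    for name, (lo, hi, _) in RUBRIC.items():
        value = review.get(name)
        if value is None:
            errors.append(f"field '{name}' is missing")
        elif not (lo <= value <= hi):
            errors.append(f"field '{name}' must be in [{lo}, {hi}]")
    return errors

def tentative_score(review):
    """Weighted sum over rubric fields, assuming the review is valid."""
    return sum(review[name] * w for name, (_, _, w) in RUBRIC.items())

review = {"correctness": 8, "code_style": 7, "readability": 9,
          "free_comment": "Clear structure, minor style issues"}
assert not validate(review)
print(tentative_score(review))   # 8*0.5 + 7*0.3 + 9*0.2 = 7.9
```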

5. Pilot adoption and PASCA improvement process

Relying on the results of Nulty [9] and van Zundert et al. [10], we consider the applicability of peer assessment to introductory programming courses to be sufficiently established. Therefore, the PASCA prototype was adopted in the Fall semester of the 2015-2016 academic year during an introductory programming course at the Faculty of Computer Science of the National Research University Higher School of Economics. Software engineering bachelor students (58 in total) were engaged in three PA sessions, and 48 students took part in an anonymous post-course survey. The survey, among other things, contained questions on students' experience with PASCA. More information about the post-course survey, the results, and their discussion may be found in [25].

The authors have shared the PA practice with colleagues and have removed some shortcomings discovered by the early adopters. For now, we are working on the following improvements.

1. Integrating PASCA with a new Microsoft Office 365 technology stack.

2. Rewriting the notification system. When the current version of PASCA uses an external IMAP mailbox, there are no automatic notifications at all because callback functions are unavailable on the server side.

3. Helping reviewers with preliminary artifact verification. PASCA should be able to validate general artifacts or some known types of artifacts, for example, to check a file size or to compile source code.

4. Adding adaptive randomization schemes.

5. Increasing blinding quality in small groups of students. This problem is linked to additional anonymization of artifacts. Thus, we will try to implement a basic check for the presence of personally identifiable information.

6. Adding some analysis and reports for cheating prevention.

Conclusion

This paper has presented a systematic attempt to optimize the business processes of peer assessment in education. We have introduced an SRS for a modern PAS and have specialized it for a mail-based peer assessment system. Moreover, this paper has introduced the Mail-based Randomized Double-Blinded Peer-assessment System for Complex Artifacts (PASCA), which has been developed according to these requirements. Using this system, participants of the educational process (teachers and learners) are not required to master a new business process or to use or set up any additional software except for the standard e-mail system. Furthermore, PASCA supports assessment of artifacts of any type, blinding and randomization during a peer assessment session, and complex PR forms with automatic validation and scoring.

Currently, PASCA provides all the functionality declared in this work. Moreover, the system has been successfully adopted in an introductory programming course for first-year software engineering bachelor students. The feedback gained during the adoption dictated the directions of work for the near future. PASCA will be improved by adding additional notifications, data validators, and adaptive randomization algorithms.

Finally, PASCA is an open-source project, and it is freely available in the Bitbucket repository (http://bitbucket.org/SiberianShaman/pasca). ■

References

1. Clarke S. (2008) Active learning through formative assessment. London: Hodder Education.

2. Tillema H. (2014) Student involvement in assessment of their learning. Designing assessment for quality learning. Dordrecht: Springer.

3. Anson R., Goodman J.A. (2014) A peer assessment system to improve student team experiences. Journal of Education for Business, vol. 89, no. 1, pp. 27—34.

4. Kulkarni C., Wei K.P., Le H., Chia D., Papadopoulos K., Cheng J., Koller D., Klemmer S.R. (2013) Peer and self assessment in massive online classes. ACM Transactions on Computer-Human Interaction (TOCHI), vol. 20, no. 6, pp. 1—31.

5. Hamalainen H., Hyyrynen V., Ikonen J., Porras J. (2011) Applying peer-review for programming assignments. International Journal on Information Technologies & Security, no. 1, pp. 3—17.

6. Davies P. (2006) Peer assessment: judging the quality of students' work by comments rather than marks. Innovations in Education and Teaching International, no. 43, pp. 69—82.

7. Falchikov N., Goldfinch J. (2000) Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, vol. 70, no. 3, pp. 287—322.

8. Dochy F., Segers M., Sluijsmans D. (1999) The use of self-, peer and co-assessment in higher education: a review. Studies in Higher Education, vol. 24, no. 3, pp. 331—350.

9. Nulty D.D. (2011) Peer and self-assessment in the first year of university. Assessment & Evaluation in Higher Education, vol. 36, no. 5, pp. 493—507.

10. van Zundert M., Sluijsmans D., van Merrienboer J. (2010) Effective peer assessment processes: Research findings and future directions. Learning and Instruction, no. 20, pp. 270—279.

11. Isomottonen V., Tirronen V. (2013) Teaching programming by emphasizing self-direction: How did students react to the active role required of them? ACM Transactions on Computing Education, vol. 13, no. 2, pp. 6—21.

12. Carroll J.M., Jiang H., Borge M. (2015) Distributed collaborative homework activities in a problem-based usability engineering course. Education and Information Technologies, no. 20, pp. 589—617.

13. Topping K. (1998) Peer assessment between students in colleges and universities. Review of Educational Research, vol. 68, no. 3, pp. 249-276.

14. Hundhausen C., Agrawal A., Ryan K. (2010) The design of an online environment to support pedagogical code reviews. Proceedings of the 41st ACM Technical Symposium on Computer Science Education, 10—13 March 2010, Milwaukee, Wisconsin, USA, pp. 182—186.

15. Loddington S., Pond K., Wilkinson N., Willmot P. (2009) A case study of the development of WebPA: An online peer-moderated marking tool. British Journal of Educational Technology, no. 40, pp. 329-341.

16. Hyyrynen V., Hamalainen H., Ikonen J., Porras J. (2010) MyPeerReview: An online peer-reviewing system for programming courses. Proceedings of the 10th Koli Calling International Conference on Computing Education Research, 28—31 October 2010, Koli National Park, Finland, pp. 94—99.

17. HEI-UP business process management in higher education institutions. Project results. 2015. Available at: http://www.bpm-hei.eu/index.php/news (accessed 19 August 2015).

18. Lew M.D.N., Awis W.A.M., Schmidt H.G. (2010) Accuracy of students' self-assessment and their beliefs about its utility. Assessment & Evaluation in Higher Education, vol. 35, no. 2, pp. 135-156.

19. Lin S.S.J., Liu E.Z.F., Yuan S.M. (2001) Web-based peer assessment: feedback for students with various thinking-styles. Journal of Computer Assisted Learning, no. 17, pp. 420-432.

20. Kollar I., Fischer F. (2010) Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction, no. 20, pp. 344-348.

21. Freeman M., McKenzie J. (2002) SPARK, a confidential web-based template for self and peer assessment of student teamwork: Benefits of evaluating across different subjects. British Journal of Educational Technology, no. 33, pp. 551—569.

22. Hamer J., Kell C., Spence F. (2007) Peer assessment using Aropa. Proceedings of the Ninth Australasian Conference on Computing Education, vol. 66, pp. 43-54.

23. Klensin J. (2008) RFC-5321: Simple mail transfer protocol. IETF Documents. Available at: https://tools.ietf.org/html/rfc5321 (accessed 01 May 2016).

24. Durstenfeld R. (1964) Algorithm 235: Random permutation. Communications of the ACM, vol. 7, no. 7, p. 420.

25. Maksimenkova O., Neznanov A. (2016) The PASCA: A mail based randomized blinded peer assessment system for complex artifacts. Procedia Computer Science, no. 96, pp. 826-837.
