
Организационная психология (Organizational Psychology). 2013. Vol. 3. No. 2. Pp. 33-52


Russian Standard for Assessment Centers

Task Force on Assessment Center Standard1:

Evgeny Vuchetich (EXECT Partners Group)

Dmitry Gofman (Axes Management)

Aleksandr Erofeev (Moscow State University, Assessment Center LASPI)

Evgeny Lurie (ECOPSY Consulting)

Maria Maltseva (DDI)

Yuriy Mikheev (“Training Institute - ARB Pro” Group)

Yulia Poletaeva (SHL)

Sergey Sergienko (State University of Management)

Svetlana Simonenko (Detech)

Yulia Sinitsina (TalentQ)

Sergey Umnov (ECOPSY Consulting)

Tatiana Khvatinina (SHL / Chair of Personnel Assessment Federation of NC HCD)

Aleksandr Shmelyov (Moscow State University, Human Technologies)

This document is the first set of guidelines and ethical considerations for Assessment Centers in Russia. Development of the Russian Standard for Assessment Centers was initiated in 2013 by the National Confederation for Human Capital Development and supported by the expert community. Today the Assessment Center method is widely used in Russia. There is an international Assessment Center Standard as well as a number of national standards. However, the Russian professional community does not view these as documents that are imperative to observe, because none of these standards fully reflects features specific to Russian Assessment Center practices or fully corresponds to Russian theoretical and methodological traditions of scientific research and development. For these reasons, the need to design a national Standard regulating Assessment Center development and implementation practices and reflecting modern-day Russian realities has now become evident. This Standard is a national-level, scientifically grounded, cultural and ethical set of guidelines which describes the Assessment Center method and outlines the minimum set of requirements for Assessment Centers. The Standard also contains recommendations based on best Assessment Center practices that will help to increase the quality of assessment procedures. The Standard addresses Assessment Center participants, observers, facilitators, administrators, developers and designers, sellers, scientists and researchers, consultants and experts, as well as trainers, university instructors teaching personnel assessment, students and Assessment Center training participants.

1 Comments and reviews were also provided by: Tahir Bazarov, professor (Higher School of Economics, Moscow State University, CPT XXI Century), Marina Barabanova (TalentQ), Vladimir Stolin, professor (ECOPSY Consulting), Yuriy Shipkov (SHL).

The Task Force is extending gratitude to international experts who provided valuable notes and comments to the text: David Bartram, professor (SHL, Great Britain), Alyssa Gibbons, assistant professor (Colorado State University, USA), Filip Lievens, professor (Ghent University, Belgium), Christof Obermann, professor (Obermann Consulting, Germany), Vina Pendit (Daya Dimensi, Indonesia), Nigel Povah, professor (A&DC, Great Britain), Sandra Schlebusch (LeMaSa, South Africa), George Thornton III, professor (Colorado State University, USA).

The Task Force and NC HCD are grateful to S. Sergienko for scientific support and E. Lurie for project moderation.

E-mail: [email protected]. You may ask questions and get an expert opinion in this Facebook group https://www.facebook.com/groups/assessment.rus/


1. Russian AC Standard overview

1.1. Terminology

There are two popular names for the Assessment Center (AC) method in Russia: the translated "центр оценки" and the transliterated "ассессмент центр". In the global personnel management community, "assessment center" denotes a specific method of people assessment. The product of this process is a judgment about a person's qualities and competencies, i.e. assessment can be understood both as a process and as its result. In the Russian language the word "оценка" is likewise used to denote both the process (the process of assessment) and the product (assessment as the result of this process). In this sense, the Russian term "центр оценки" precisely corresponds to the English "assessment center". For this reason, the suggested term for this method in Russia is the translated "центр оценки" (though the direct transliteration "ассессмент центр" is also acceptable).

1.2. Rationale behind the development of Russian AC Standard

Today the AC method is widely used in Russia. There is an international AC Standard as well as a number of national standards (see 1.3.). However, the Russian professional community does not view these as documents that are imperative to observe.

Further, none of these standards:

• Fully reflects features specific to Russian AC practices;

• Fully corresponds to Russian theoretical and methodological traditions of scientific research and development.

The need to design a national Standard regulating AC development and implementation practices and reflecting modern-day Russian realities has now become evident.

1.3. Prototypes for this document

Existing documents were taken into account in the preparation of the Russian AC Standard, namely, documents regulating AC development and implementation in Germany, Great Britain, South Africa and the two latest editions of AC guidelines and ethical norms endorsed at the International AC Congress.

1. Arbeitskreis Assessment Center e.V. (2004). Standards der Assessment Center Technik. Hamburg, Deutschland.

2. Assessment Center Study Group. (2007). Guidelines for Assessment and Development Centers in South Africa (4th ed.).

3. International Task Force on Assessment Center Guidelines. (2000). Guidelines and ethical considerations for assessment center operations. Public Personnel Management, 29, 315-331.

4. International Task Force on Assessment Center Guidelines. (2009). Guidelines and ethical considerations for assessment center operations. International Journal of Selection and Assessment, 17(3), 243-254.

5. The British Psychological Society (2005). Design, implementation and evaluation of assessment and development centres. Best practice guidelines.

1.4. Status of this Standard

1. This Standard is a national-level, not an organization-level document;

2. The Standard is scientifically grounded;

3. The Standard is a cultural and ethical set of guidelines, i.e. it is suggested that the professional community accepts requirements and regulations of the Standard voluntarily;

4. The Standard is not a legal requirement;


5. All the parts of the standard describing the AC method are a practical guideline. They outline the minimum set of requirements for ACs. If at least one of the requirements is not met while preparing, delivering and providing results of an assessment program, such an assessment program should not be called an AC.

This Standard also contains recommendations based on best AC practices that will help to increase the quality of assessment procedures.

1.5. Purpose and objectives

1. Formation of a modern, scientifically grounded image of the AC method;

2. Regulation of AC development and implementation activities;

3. Increasing the quality of teaching disciplines connected with personnel assessment;

4. Facilitating the training of AC professionals;

5. Encouraging scientific research supporting the AC method;

6. Providing informational support for people making decisions on AC development and implementation;

7. Strengthening the AC status in the field of personnel assessment;

8. Providing informational support for expert evaluation of the quality of developed and implemented ACs.

1.6. Intended Audience

The Standard addresses those who:

1. Participate in ACs (participants);

2. Deliver ACs (observers, facilitators, role players, administrators);

3. Create ACs (developers and designers);

4. Sell ACs as a service;

5. Study ACs (scientists and researchers in the field of personnel assessment);

6. Purchase ACs as a service (internal or external customers - representatives of state, commercial and public organizations);

7. Consult and examine processes of AC development and implementation (consultants, experts);

8. Manage organizations delivering AC-related services;

9. Teach AC methods (trainers);

10. Teach personnel assessment (university instructors);

11. Learn personnel assessment and AC methods (students and AC training participants);

12. Are potential AC users.

The Standard can also be useful to professionals who employ other assessment methods (qualification testing, certification exams, psychometric tests, etc.).

2. The concept of an assessment center

2.1. Definition of the AC

An Assessment Center (AC) is a complex method of estimating potential success within a job (professional success) that includes a set of various techniques and is based on participants’ behavioral assessment by a group of expert observers during simulation exercises.

If developmental objectives are of primary importance, the term AC may be changed to development center.


2.2. AC Specific features

1. The main purpose of ACs is to estimate potential professional success.

2. Assessment should be carried out on the basis of the competencies (dimensions and tasks). Each competency consists of a group of behavioral indicators.

3. The assessment is based on comparison of observed behavior to the established behavioral indicators, not on comparison of participants to each other.

4. Overt behavior is assessed in ACs.

5. A set of multiple techniques should be used in ACs. Simulation exercises are the core AC technique. It is also possible to use such other techniques as interviews, qualification and psychometric tests, questionnaires.

6. It is necessary to use interactive simulation exercises (exercises reproducing the most essential behavioral aspects of collaborative work in the target job).

7. Assignment of scores on each of the competencies should be based on the observations of at least two expert observers who have taken a special training course.

8. Determination of the demonstrated level of a competency is the main result of observers’ work.

9. All aggregate scores should be agreed upon during data integration by means of consensual discussion of observers or obtained by a justified statistical integration process.

2.3. AC purpose and objectives

The main purpose of ACs is to estimate potential success within a job (professional success). Jobs may be determined specifically (e.g. a position within an organization) or by type (e.g. a certain management level).

Participants’ job performance and their current responsibilities should not be referred to during the AC process. Even in the case of selection for a particular position, it is potential success that should be evaluated, not prior results.

Today ACs are used to meet the following objectives:

1. Selection (e.g. hiring, rotation of staff, creating a talent pool, building a managerial team);

2. Determination of individual developmental routes (e.g. making personal development plans (PDPs), career guidance);

3. Determination of directions for group or system development within the organization (e.g. building a managerial team, developing corporate training programs, developing job profiles);

4. Training within the course of the AC (e.g. development of assessed competencies, professional adaptation). It is important to note that in this case AC results cannot be used for selection purposes.


AC objectives are not limited to the ones listed above. Emergence of new objectives is possible. At the same time, all the AC objectives should comply with the organization development strategy and should not impair participants’ rights.

2.4. Assessment methods that are not an AC

An assessment is not an AC if:

1. It includes only one assessment technique (whether it is a simulation exercise or not);

2. It does not contain multiple simulation exercises;

3. It does not contain interactive simulation exercises, i.e. those that reproduce the most essential aspects of collaborative work in the target job;


4. It only consists of a test or questionnaire battery;

5. It only includes one interview or a series of interviews;

6. It is based on the judgments of a single observer (even if various assessment techniques are used);

7. It does not include a data integration procedure, even if there are several assessment techniques and several observers.

3.1. Preparation for the AC

3.1.1. Making the decision to use an AC

Organizations deciding to use ACs should have a clear understanding of the role of ACs in their human resource management systems. It is recommended that decisions regarding the key aspects of using an AC are clearly recorded in internal documentation within the company. As a rule, it is desirable to include the following key aspects:

1. AC objectives (e.g. selection, personal development planning);

2. Groups of candidates or employees recommended for participation in an AC; rules regarding what information should be provided to participants prior to and after an AC;

3. Requirements for qualifications and experience of observers;

4. Regulations for storage and usage of AC results and materials generated (including a list of people who have access to them);

5. Decisions that can be made on the basis of the AC results.

Organization executives are recommended to openly express support for the implementation of an AC and guarantee that AC results will be used to make appropriate managerial decisions.

3.1.2. Job analysis

Job analysis is one of the most essential components of preparation for an AC. It is crucial for the subsequent development or selection of competencies, simulation exercises and other AC techniques. Job analysis should result in identifying the following:

1. Key tasks. What people do within the target job and under what circumstances they do it;

2. Behavioral indicators. How the tasks should be performed in order for an individual to be considered successful within a specific organization or job.

The following information sources can be used for analysis:

• Successful employees’ job performance;

• Organization’s corporate culture, its strengths and major weaknesses, organization development strategy;

• Theoretical job frameworks (e.g. existing internal competency models, universal competency models);

• Results of prior analyses of target job prototypes. In this case, evidence should be provided that the prototypes are comparable to the current target job.

Depending on the situation, the following may differ for different ACs:

1. Depth of analysis (depends on AC objectives, target job complexity, adequacy of available job information, similarity of the job to those that have been analyzed earlier, availability of existing and suitable AC exercises, etc.);

2. Scope of analysis (see below Typical situations requiring job analysis);


3. Sources of information and their priorities. For example, when existing job performers are assessed, the emphasis should be placed on analyzing successful performers’ activities. On the other hand, if a new job is being introduced, the emphasis can be placed on analyzing its general requirements, organization strategy and corporate culture.

Examples of typical situations requiring job analysis:

1. The organization lacks a competency model for the job that will be the focus of an AC. In this case, full job analysis should be conducted to identify behavioral indicators, which should then be combined into competencies as described in Section 3.1.3.

2. AC exercises are developed on the basis of the competency model existing in the organization. In this case it is necessary to identify key behaviors (behavioral indicators) that ensure success within the target job. Analysis can be somewhat simplified (e.g. limiting it to interviewing all the key managers and subject matter experts (SMEs)). These identified indicators are compared to the existing competency model. In case of large discrepancies it is recommended to re-design the model for the purposes of the AC.

3. The organization is planning to use a scientifically grounded (see Section 4) AC program developed for the same job in a different organization. Evidence should be presented that the jobs, which have been analyzed earlier, are comparable to the target job. Analysis can be substantially simplified (e.g. interviewing selected key managers and SMEs). In case of major discrepancies the AC program should be re-designed.

3.1.3. Identification of competencies

Competencies (dimensions and/or tasks) for target job assessment are a necessary condition for AC development and implementation.

There are two possible scenarios in which competencies are identified:

1. There is no competency model within the organization. In this case, all competencies must be developed and it is necessary to:

• Select those behavioral indicators obtained during job analysis that are suitable for assessment.

• Group the selected behavioral indicators on the basis of their essential similarities or differences.

• Name the groups. These names will be further used as names for competencies. AC participants’ behavior should be assessed against the behavioral indicators, not against competency names. The reason for this is that competency names are labels given on the basis of a specific group of behavioral indicators.

2. There is a competency model within the organization. In this case, key behavioral indicators enabling success in the target job should be identified. The indicators should then be compared to the existing competency model. If discrepancies are large, the existing model should be adapted to be used in the AC.

3.1.4. Development and selection of simulation exercises

Simulation exercises are the core technique that distinguishes ACs from other methods. It is these exercises that ensure behavioral assessment and an accurate success prediction in the target job.

Main types of simulation exercises:

1. Group interactive exercises involving at least 3 participants (discussions, collaborative projects, group presentations, etc.);


2. One-to-one interactive exercises (role play, fact finding, presentation, etc.);

3. Individual exercises (in-basket, case studies, etc.).

Both customized and off-the-shelf simulation exercises can be used. In development and selection of simulation exercises one should apply the following rules:

1. Development and/or selection of simulation exercises is carried out on the basis of the specific competency model;

2. AC exercises should model the key tasks of the target job: what people do and under what circumstances they do it within that job. If an AC is developed for a new job, simulation exercises are developed on the basis of the analysis of its prototypes (or similar jobs);

3. Exercises should not contradict the corporate culture of the organization;

4. Simulation exercises should generally possess high fidelity and face validity, i.e. create a genuine feeling of performing the target job;

5. Before new simulation exercises are used, they should be tested in a pilot study. It is necessary to make sure that:

• all the behavioral indicators developed for an AC program manifest themselves in overt behavioral responses (see Section 5);

• the time allocated to the exercise is enough for the participants to be able to demonstrate the needed behavioral responses;

• participants are provided with equal opportunities to demonstrate target behavioral responses.

6. Prior to development of separate exercises it is necessary to outline the AC blueprint and submit it for customer approval.

Materials for simulation exercises can be presented as printed documents, audio or video recordings or in electronic format.

3.1.5. Using psychometric tests in ACs

It is acceptable to use techniques that do not involve direct behavior observation (e.g. psychometric tests). To do so, it is necessary to conduct a comparative analysis (mapping) of the test scales against the established competencies. Mapping results should be reflected in the Test scales by Competencies matrix.

It is only acceptable to employ those psychometric tools that were developed for use in a business environment (for recruiting, succession planning, career guidance, etc.). This information should be found in the test Manual or Technical report available from the test publisher.

Results of psychometric assessment can only be used as supplementary information. If test results are used and expert integration is performed, the integration session must be attended by a professional trained to work with the specific psychometric tools in use (see 3.2.3.).

3.1.6. AC program development

An AC program is the final document created in the preparation for an AC. It sets standards for carrying out specific AC events. The program should include:

1. Descriptions of competencies and corresponding rating scales;

2. Competencies by Techniques matrix;

3. Assessment techniques description, including simulation exercises;

4. AC work plan.


AC program development should be based on the corporate culture and working practices of the organization. When implementing an AC program in a different culture, one should also consider local national and ethnic features.

1. Description of competencies includes the following set of materials:

• List of competencies

• List of behavioral indicators comprising each competency

• AC behavior rating scales (see Glossary);

2. Competencies by Techniques matrix shows the correspondence between competencies and techniques used in a specific AC program.

Competencies are primary; techniques are selected later so as to fit them. The matrix should be organized so that each competency is assessed in at least two techniques, one of which should be a simulation exercise. Each simulation exercise should assess no more than 5 competencies, although the optimum number is 3.
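
These constraints can be checked mechanically when the matrix is drafted. The following minimal sketch (illustrative only, not part of the Standard; all competency and technique names are invented) verifies a hypothetical Competencies by Techniques matrix against the requirements above.

```python
# Illustrative sketch: checking a hypothetical Competencies by Techniques matrix
# against the requirements stated above. All names are invented examples.

SIMULATION_EXERCISES = {"group_discussion", "role_play", "in_basket"}

# Matrix: technique -> set of competencies it assesses
matrix = {
    "group_discussion":     {"Teamwork", "Influence", "Analysis"},
    "role_play":            {"Influence", "Client focus", "Teamwork"},
    "in_basket":            {"Analysis", "Planning", "Client focus"},
    "competency_interview": {"Planning"},
}

def check_matrix(matrix, simulation_exercises, max_per_exercise=5):
    problems = []
    # Each simulation exercise should assess no more than 5 competencies (3 is optimal).
    for technique, comps in matrix.items():
        if technique in simulation_exercises and len(comps) > max_per_exercise:
            problems.append(f"{technique}: assesses {len(comps)} competencies")
    # Each competency should be assessed in at least two techniques,
    # at least one of which is a simulation exercise.
    for comp in set().union(*matrix.values()):
        used_in = [t for t, comps in matrix.items() if comp in comps]
        if len(used_in) < 2:
            problems.append(f"{comp}: assessed by fewer than two techniques")
        if not any(t in simulation_exercises for t in used_in):
            problems.append(f"{comp}: not assessed in any simulation exercise")
    return problems

print(check_matrix(matrix, SIMULATION_EXERCISES) or "Matrix meets the minimum requirements")
```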

3. Description of simulation exercises. The description of each simulation exercise should consist of a set of materials for both participants and AC delivery professionals.

Materials for participants:

• Instructions;

• Exercise content;

• Response forms (in case written responses are collected from participants)

Materials for AC delivery professionals:

• Instructions including logistical constraints and time frames for exercises;

• Observation sheet (form for recording overt behavioral responses);


• Evaluation forms for observers;

• Instructions for role players (if using role plays);

• Supplementary materials (e.g. rules for scoring, possible responses to case study, questions for post-exercise interviews, additional behavioral indicators not included in evaluation forms).

• If the AC program includes an interview, its description and technique should be given in this section of the program.

The set of materials can be presented in paper or electronic format, including audio/video recordings.

4. AC working plan contains a timetable and an observation plan.

The timetable specifies the sequence and the precise starting and ending times for exercises and other techniques. If possible, it is recommended to mix individual and interactive exercises and techniques. The timetable should take into account the time necessary to change rooms. Time should also be allocated for observers to classify and evaluate participants’ behavior at the end of each exercise.

The observation plan is a table that shows which observers assess each of the participants in each of the exercises. The plan should indicate the rooms in which each of the exercises is carried out as well as participants and observers who are working in each room at each moment. The observation plan should be made in such a way that a participant’s behavior during the AC on the whole is assessed by at least two observers. Observing a participant in the course of a specific exercise can be carried out by a single observer.
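
As an illustration of the last requirement, the sketch below (not part of the Standard; the participants, observers and exercises are hypothetical) checks a draft observation plan to confirm that every participant is observed by at least two different observers over the AC as a whole.

```python
# Illustrative sketch: checking a hypothetical observation plan. For each exercise,
# the plan maps observers to the participants they watch in that exercise.

observation_plan = {
    "group_discussion": {"Observer A": ["P1", "P2"], "Observer B": ["P3", "P4"]},
    "role_play":        {"Observer B": ["P1", "P2"], "Observer C": ["P3", "P4"]},
    "in_basket":        {"Observer A": ["P3", "P4"], "Observer C": ["P1", "P2"]},
}

def observers_per_participant(plan):
    seen = {}
    for assignments in plan.values():
        for observer, participants in assignments.items():
            for participant in participants:
                seen.setdefault(participant, set()).add(observer)
    return seen

# Every participant should be observed by at least two different observers overall.
for participant, observers in sorted(observers_per_participant(observation_plan).items()):
    status = "OK" if len(observers) >= 2 else "needs another observer"
    print(participant, sorted(observers), status)
```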


3.1.7. Training of AC professionals

Training content

This section describes the minimum set of requirements for the training of major AC team roles. If an AC program is implemented in other cultural or multicultural contexts, AC team members should be specially selected and trained.

In practice, one person may fulfill several roles, e.g. facilitator and administrator, or designer and developer. The basic roles of AC professionals are described below.

Observer

Observers observe, record, classify and evaluate (ORCE) behavior of AC participants. Observers should go through a specially organized training period after which they must:

• Have a general understanding of the AC method, its strengths and limitations;

• Have basic knowledge about the customer organization and target job;

• Be familiar with the internal documentation related to the AC (see 3.1.1);

• Know the objectives that the customer pursues by implementing the AC;

• Be familiar with the AC program:

a) Assessed competencies, behavioral indicators;

b) Rating scales and rules for using them;

c) Content of all simulation exercises and other techniques;

d) Competencies by Techniques matrix.

• Be able to observe, record, classify and evaluate participants' behavior within specific exercises (see 3.2.2.);

• Be familiar with the typical mistakes in the ORCE process (including the difference between recording overt behavioral responses of a participant and recording one’s own inferences);

• Have interviewing skills (if an interview is included in the AC program);

• Have data integration skills (see 3.2.3.);

• Be familiar with the feedback process and understand its significance in the AC (facilitate acceptance of the feedback by the participant and motivation for change in behavior; see 3.3.1);

• Know the principles of writing AC based individual reports.

Facilitator

A Facilitator is responsible for the content side of the AC and for the organization of the integration session. Typical functions of the facilitator are: encouraging a constructive attitude towards the AC program among the participants, briefing participants, moderating group interaction, supervising participants’ activities, and organizing the work of observers throughout the AC, in particular during the integration session. Only people who have been trained as observers and have experience in this field can take on the role of the facilitator. Facilitators should go through a specially organized training period, after which they must:

• Have basic understanding of the customer organization and target job;

• Be familiar with the internal documentation related to the AC;

• Know the objectives that the customer organization pursues by implementing the AC;

• Be familiar with the AC program:

a) Assessed competencies, behavioral indicators;

b) Rating scales and rules for using them;

c) Content of all simulation exercises and other techniques;

d) Competencies by Techniques matrix.


• Have skills in organizing others;

• Know all the requirements for final outcomes of the AC program;

• Have skills in organizing expert integration procedures (see 3.2.3.).

Administrator

An Administrator is responsible for the logistical aspects of the AC process. Their responsibilities include preparing rooms, keeping to the timetable, organizing refreshments, preparing, distributing and collecting AC materials, etc. The administrator role does not require any special skills apart from general organizational skills. Administrators must:

• Be familiar with the internal documentation related to the AC (see 3.1.1.);

• Be familiar with the AC exercises list and their description;

• Be familiar with the AC work plan;

• Be familiar with requirements for the rooms and space;

• Be familiar with the regulations for the AC materials storage.

Role-player

Role-players act as participants’ partners in interactive simulation exercises. This function can be performed either by a trained actor or by an observer who has taken a special training course. Role-players should go through a specially organized training period after which they must:

• Have general understanding of the AC method, its strengths and limitations;

• Be familiar with the AC work plan;

• Be familiar with role-play exercise scenario and instructions for its participants;

• Thoroughly know their role;

• Be familiar with the competencies assessed in the exercise;

• Be able to play the role in strict accordance with the exercise scenario, providing equal opportunities for all participants;

• Be able to demonstrate behavior challenging participants to show the required behavioral responses.

AC program designer

A Designer creates the AC program. Designers should go through a specially organized training period after which they must:

• Be familiar with AC methodology and practice;

• Know the essence of the target job: key tasks, instances of effective and non-effective performance;

• Know the objectives that the customer organization pursues by implementing the AC;

• Know the main types of simulation exercises and other techniques used in ACs;

• Be familiar with the main types of AC validity;

• Be able to outline the AC blueprint;

• Be able to select exercises that have sufficient construct-related validity in relation to the specified competencies;

• Be able to select a set of AC exercises that have sufficient content-related validity in relation to the target job;

• Be able to determine an exercise sequence and make up an AC timetable that meets specified AC objectives;

• Be able to formulate a task for the developer if it is necessary to develop a new simulation exercise or adapt an existing one.

If a new AC program and exercises are being developed, designers must also have skills in job analysis (see 3.1.2).


Exercise developer

A Developer creates simulation exercises for ACs. It is recommended to recruit a developer with background training in social psychology or management. Developers should take a special training course after which they must:

• Have general understanding of the AC method, its strengths and limitations;

• Know the objectives that the customer organization pursues by implementing the AC;

• Know the general features of the AC blueprint;

• Know the essence of the target job: key tasks, instances of effective and non-effective performance;

• Be familiar with the major types of simulation exercises used in ACs;

• Be familiar with behavioral indicators that should be assessed in the exercise;

• Be familiar with the major types of AC simulation exercises’ validity;

• Be able to develop exercises with high face validity and construct-related validity in relation to specific competencies.

Forms, duration and expiration of training

This section addresses the minimum set of requirements for the forms, duration and expiration period of observer training. Forms of training for other key AC professionals differ between organizations, so they are not yet subject to standardization.

Training of observers who do not have prior experience in ACs

The training program should include two parts:

1. An informational part, in which the main objective is to provide general understanding of the AC method;

2. Observation, recording, classifying, evaluation (ORCE) training:

a. Training of basic ORCE skills. The duration of this training should not be less than 1 day. The main focus should be on observation and recording skills;

b. Training of ORCE skills within the specific exercises of the AC program. The duration of this training depends on the number of exercises in the program as well as on their complexity.

To fully prepare an observer it is desirable to ensure their participation in AC procedures under the supervision of a coach (experienced observer).

Coaching can include several stages:

1. Observation of participants’ behavior, discussion of their scores with a coach, and watching observers and the facilitator work during the integration session;

2. Working as a shadow observer: independent ORCE and subsequent discussion of the rationale behind the scores with the coach. Participation in the integration session without the right of discussing other observers’ scores;


3. Working as a fully-fledged observer supported by feedback from the coach.

The time between the end of the training and the beginning of work as a professional observer should not exceed three months; otherwise an additional training course is necessary.

Training of observers who have prior experience in ACs

Training of such observers comes down to preparation for the delivery of specific AC programs (see the role of observer). Observers who have not had ORCE experience in ACs for more than 1 year should attend an additional training session. To evaluate how well an observer is ready to take part in a specific AC program after training, it is recommended to assess the concordance of their scores with those of experienced observers in each of the AC simulation exercises.

3.1.8. Briefing

In preparation for the AC its potential participants should be briefed in advance. This provides proper motivation and ensures participants make an informed decision on taking part in the AC. It is recommended to provide the following information in written form prior to AC delivery:

1. Description of the AC method and its strengths;

2. AC program objectives;

3. Criteria of pre-selection for participation in the AC;

4. Description of anticipated AC results;

5. The status of AC results and regulations for their use;

6. Information about observers and facilitators, their qualifications and AC experience;

7. Information on who will provide participants with their AC results, and when;

8. Decisions that may be made on the basis of the AC results;

9. Anticipated forms of training (if the AC program will be used for personnel development);

10. Contact details.

3.2. AC delivery

3.2.1. Organization of AC delivery

AC delivery should be organized in accordance with the AC program. Correct organization of the procedure is a necessary condition for the successful implementation of the program. Lack of attention at this stage can have a negative impact on AC results and on participants’ attitude towards assessment. Moreover, poor organization can violate the principle of providing equal conditions for every participant.

Rooms for the AC should be prepared as specified in the program: equipped with the necessary facilities and with enough space for both participants and observers (see 3.1.5.). Rooms should also be well illuminated and ventilated, and isolated from disturbances (noise, the presence of other people). Rooms in close proximity to participants’ workplaces should not be used, as proximity to their work might distract participants during the AC.

AC professionals should be trained in advance to deliver the AC according to their roles in the AC program (see 3.1.7). The work of participants and AC professionals should be organized in strict accordance with the working plan (timetable and observation plan; see 3.1.6). A number of requirements for the organization of the AC procedure should also be met:

• At the beginning of the AC participants should be informed about the timetable and rules of interaction. This informational introduction should also include all the items of section 3.1.8, in case participants have not been briefed prior to the start of the AC;

• The facilitator and administrator should see to it that participants do not make use of any supplementary materials and means other than those permitted by the AC program;

• AC professionals should strictly control and stick to the general AC timetable and time limits within each of the exercises;

• All AC professionals and participants should limit their contacts with the external environment (in particular, mobile calls).


Confidentiality of AC procedure should be maintained. The following regulations should be followed:

• Materials for each exercise should not be accessible to participants before the start of the exercise;

• Participants’ work with AC materials should be controlled to ensure that they are not copied or passed on to third parties;

• All materials related to the exercises should not be left with the participants after the exercises are over.

It is also recommended to acquire written consent from the participants at the beginning of the AC allowing AC team members to process their personal data.

3.2.2. Rules of observation, recording, classifying, and evaluation (ORCE)

Facilitators and observers take part in assessment procedures. Independently of each other, observers should provide a precise assessment of participants’ behavior in accordance with the specified competencies. For that purpose, they should be guided by the following sequence of actions within each simulation exercise: observation (O), recording (R), classifying (C), and evaluation (E) of behavior (ORCE).

1. Observation is a process of tracking participants’ overt behavioral responses during the exercises. Observation should be carried out according to standard rules established beforehand and reflected in the AC program (description of competencies, Competencies by Techniques matrix). In each simulation exercise an observer should not observe more than three participants. To make observation more precise, video recording can be used with the consent of participants.

2. Recording is precise registration of overt behavioral responses in an observation sheet. Observers should register overt behavioral responses but not their own inferences. It is not acceptable at this stage to use names of competencies, labels or value judgments instead of concrete descriptions of overt behavioral responses. Abbreviations in recordings are allowed but it is recommended to decipher them prior to classifying.

3. Classifying is a process of relating the recorded overt behavioral responses to behavioral indicators and further on to competencies. During classification, observers should only work with the observation sheets and refrain from making any additional judgments about participants’ behavior.

4. Evaluation is a process of determining the demonstrated level of a competency. Observers should assess participants’ behavior against the behavioral indicators and, based on that, give a score for each competency.

Observation and recording are done during the exercise. Classifying and evaluation should be done before the integration procedure. The AC facilitator should ensure that observers’ ORCE processes remain independent. In particular, the facilitator should prevent or forbid any discussions of participants’ behavior prior to the integration procedure. All AC professionals should ensure that ORCE materials are kept confidential and are not made available to participants or third parties.

3.2.3. Data integration and decision-making

Data integration is the process of aggregating individual expert scores. Expert data integration is based on collective discussion and consensus. It is also acceptable to use statistical methods to compute aggregate scores for competencies and an overall assessment rating (statistical data integration). However, AC delivery professionals can only use such methods if there are scientific data justifying their usage in a specific AC program.
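
Where statistical integration has been justified for a given program, the aggregation rule itself can be very simple. The sketch below is a hypothetical illustration only (invented scores, a plain arithmetic mean); the Standard does not prescribe any particular aggregation formula.

```python
# Hypothetical illustration of statistical data integration: a plain mean of the
# competency scores given by different observers in different exercises. The
# Standard permits statistical integration only when scientific data justify its
# use in the specific AC program.
from statistics import mean

# (participant, competency) -> scores from individual observers/exercises (invented)
scores = {
    ("P1", "Teamwork"): [3, 4, 4],
    ("P1", "Analysis"): [2, 3],
    ("P2", "Teamwork"): [4, 5, 4],
    ("P2", "Analysis"): [3, 3],
}

for (participant, competency), values in sorted(scores.items()):
    print(f"{participant} - {competency}: {round(mean(values), 2)}")
```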


Organization of expert integration procedures

• It is recommended to have an integration session as early after the end of the AC as possible;

• To get high quality scores, the integration session should be allowed enough time. Quality and accuracy of scores, not speed, should be the top priority of observers and the facilitator;

• The integration session should be organized by the specialist who performs the role of facilitator. Only those who took part in AC delivery should have the opportunity to influence decisions regarding participants’ competency scores;

• The integration session should be carried out in a room that ensures the process to be confidential.

Rules of expert data integration

• For each participant, the final score for each competency should be based on discussion and consensus among observers.

• The final score for each competency is determined on the basis of scores for this competency in multiple exercises. These scores should be given by at least two different observers. It is not acceptable to base the final competency score on the observation materials of one observer only.

• The scores that individual observers provide for the consensual discussion should be justified in material form. The following justifications are acceptable: observation sheets, audio and video recordings, and completed evaluation forms. The facilitator should only accept justifications that comply with ORCE standards.

• During the discussion each observer must justify their scores with reference to specific behavioral responses.

• Only the information obtained by using AC techniques should be used and discussed in the course of data integration. It is not acceptable to refer to prior experiences or observations of particular AC participants.

• During data integration, priority should be given to the scores obtained in AC simulation exercises. Information obtained by other techniques (e.g. psychometric tests, interview) is considered supplementary and less important.

3.2.4. AC materials and rules of their storage

The Standard distinguishes between two types of AC products: materials and results (see 3.3.4).

All intermediate products used to obtain results are AC materials.

1. Materials subject to accounting and storage before the AC final stage:

• Participants’ response forms filled out during written exercises;

• Video-recordings of participants’ behavior during exercises (if any);

• Observers’ records made while observing participants (observation sheets);

• Materials relating overt behavioral responses to competencies;

• Competency evaluation forms filled out for separate exercises.

• Integration session notes (if any).

2. AC materials subject to destruction:

• All draft papers of participants;

• All used copies of instructions, exercises and testing forms.

3. All materials obtained during an AC are confidential. Rules for storage, usage and providing access to third parties both during and after the AC should be reflected in the internal documentation of the organization (see 3.1.1.). Access to AC materials can only be given to authorized individuals.


3.3. Final stage of the AC

3.3.1. Presenting AC results to participants

All participants should be informed about AC results in accordance with the announced AC objectives. Presenting results to participants is only possible after the main stage of the AC (after data integration).

Individual written reports

• The necessity to prepare individual written reports and requirements for their content are determined by the customer. There are currently no standard requirements for report content.

• If individual written reports are to be provided, they should be prepared by an observer who has taken part in the AC, has observed the participant's behavior in simulation exercises and possesses skills of report writing.

For rules regarding storage and granting access to information in the individual written reports see 3.2.4.

Feedback to AC participants

The feedback process is an important and recommended part of an AC. It helps achieve one of the aims of assessment - the formation of a constructive attitude towards assessment results - and motivates participants to use the information for their further development. This is why the use of feedback considerably increases the efficiency of managerial decisions related to AC results.

1. Feedback should be given orally (face-to-face, via telephone or a video call). Individual written reports can be an addition to oral feedback;

2. Feedback should necessarily include an interaction between an observer and a participant, with a discussion of overt behavioral responses and of the conclusions and inferences drawn;

3. Feedback should be given as soon after the AC as possible;

4. Feedback should include:

• Explanation of the AC method;

• Explanation of the competencies used;

• Evaluation judgments on each competency accompanied with description and discussion of participant's overt behavioral responses during AC exercises.

5. Upon a participant's request and with the customer's prior agreement, recommendations related to the participant's development in their target job can be given;

6. Feedback can only be given by an observer who observed the participant's behavior in simulation exercises or the AC facilitator;

7. An observer who provides feedback should possess the necessary skills.


3.3.2. Feedback to the customer

Feedback to the customer about AC participants can only be given after the main stage of the AC is over (after data integration). Any personal information not related to the announced AC objectives and assessed competencies should not be provided to the customer. The customer should be informed about possibilities and limitations of using AC results in managerial personnel-related decisions.


3.3.3. Status of AC results

AC results can only be used in decisions connected to the development of the organization and its personnel (see 2.3.). AC results cannot be used as the sole basis for a decision about the professional mismatch of an AC participant.

3.3.4. AC results and regulations regarding use

AC results may include:

• Final individual competency scores;

• Written justifications of competency scores;

• Conclusions and recommendations (individual and group).

The content and form of AC results should be agreed upon with the customer at the preparation stage. AC results can be presented in the form of individual written reports, overall assessment ratings, group reports, etc. Individuals authorized to access AC results should be listed in the internal documentation of the organization. Organization members having access to AC results must ensure correct usage of AC results in accordance with their status (see 3.3.3.). Using AC results to pursue aims different than those established beforehand is not allowed. It is not permissible to use AC results in ways that bring discredit on participants within or outside of the organization or that violate their rights (see 5.). It is also not acceptable to hand individual results over to third parties that are not authorized by internal documentation. It is recommended not to use AC results more than 2 years after the AC. After this period AC results can be used in a form where all personal data are removed and only for research purposes.

4. Information technology in ACs

Technologies like computers, mobile devices and the Internet are acquiring growing importance both for contemporary organizations and for ACs as a method of modeling the activities within these organizations. The use of such technologies increases an AC’s face validity and makes the work of AC professionals more efficient. If the organization delivering an AC is geographically dispersed, the Internet can cut travelling expenses by enabling remote delivery of ACs.

IT can be used for automation purposes at each stage of an AC. Possible fields of IT application in ACs are:

1. Collecting and structuring data in the course of job analysis (identification of competencies and development of job profiles);

2. Working out a timetable and an observation plan for an AC program;

3. Organization of AC delivery procedures:

• AC events planning;

• Control over the time of participants’ work.

4. Delivering simulation exercises and other AC techniques via computers (e.g. e-tray, psychometric tests, online role-play exercises);

5. Automation of ORCE and data integration processes. Observers can use computers or mobile devices to record, classify and evaluate behavior. Facilitators can use IT to collect observers’ post-exercise scores as well as during data integration;

6. Storing AC results and materials, including video recordings of participants’ behavior;

7. Automation of report writing elements.


If exercises or other techniques are computer-assisted, it is necessary to make sure that:

• Target job analysis has shown that computers are used to address the key professional tasks;

• Technologies that are used do not give an advantage to participants with better computer knowledge and experience (unless such behavioral indicators are included in the competency model);

• Technologies that are used do not contradict requirements for the organization of AC delivery (see 3.2.1.);

• Using IT does not compromise validity of the AC program compared to the “paper and pencil” version.

If a virtual AC is being delivered, where all the participants and observers are geographically dispersed and interact online, it is necessary to make sure that the following additional requirements are met:

• A significant part of the target job’s key tasks can be simulated using this delivery format;

• Each participant is provided with the necessary and comparable conditions of working online (stable internet connection and audio / video stream, properly functioning computers, software installed and tested beforehand);

• Participants are not able to use the help of third parties;

• The work environment makes it possible to ensure that the AC delivery process is confidential;

• Technologies being used provide sufficient flexibility of AC delivery, accounting for time zone difference and possible Internet disconnection.

5. Validity of ACs

Validity of ACs refers to the relevance and suitability of using an AC program in certain circumstances. For validity estimation to be meaningful, all the requirements of this Standard should be met; the requirements for AC delivery and for the preparation stage of the AC are particularly important.

There are three major types of AC validity:

1. Content-related validity is the extent to which AC simulation exercises, behavioral indicators and competencies correspond to the key set of the target job’s tasks and actions. Evidence for this type of validity is a necessary minimal requirement for an AC program;

2. Construct-related validity is the extent to which an AC is shown to estimate the selected scientifically grounded competencies. Construct-related validity makes it possible to show that an observer’s score accurately reflects how well a participant’s behavior corresponds to the competencies determined during job analysis. There are three types of construct-related validity in ACs:

• The extent to which the assessed competencies correspond to scientifically grounded theoretical job frameworks;

• Correlation between AC competency scores and scores obtained with other scientifically grounded techniques. This includes techniques that can be used to assess competencies;

• Correspondence degree between the competencies and simulation exercises chosen to be employed in the AC. Evidence for this type of validity is a necessary minimal requirement for an AC program;

3. Criterion-related validity is the degree of statistical correspondence between the results of a specific AC program and professional success indicators that are used as a reference point in the professional environment. These indicators should be established at the preparation stage of the AC. It is recommended to estimate criterion-related validity some time after the end of the AC. While estimating criterion-related validity it is acceptable to use indicators of professional success established within the organization: key performance indicators (KPI), HR committees’ expert evaluations and other justified measures not dependent on the results of the AC. Demonstration of an AC program’s criterion-related validity is the most important type of evidence that the program can actually predict professional success in the target job.
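
As a purely hypothetical illustration of such an estimate, the sketch below correlates overall AC ratings with a later, AC-independent success indicator; the data are invented, and in practice the choice of correlation coefficient should match the measurement scales and the sample involved.

```python
# Hypothetical illustration: estimating criterion-related validity as the rank
# correlation between overall AC ratings and a later, AC-independent success
# indicator (e.g., a KPI-based rating collected some time after the AC).
from scipy.stats import spearmanr

ac_overall_rating = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0]   # invented AC results
kpi_one_year_later = [62, 70, 55, 78, 85, 60]         # invented success indicator

rho, p_value = spearmanr(ac_overall_rating, kpi_one_year_later)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```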

The validity of the AC method has been demonstrated in numerous research studies. However, every AC program implemented under new conditions should be validated. Validation of an AC program (estimation of both content-related and construct-related validity, unless noted otherwise) should be carried out in the following situations:

• The AC program is applied for the first time;

• An existing program is translated into another language;

• The program is employed to achieve goals other than those for which it was originally designed (e.g., development, succession planning);

• The program is applied under different conditions (organizational, cultural, etc.);

• Essential modifications were introduced to the program (changes in the set of competencies, exercises’ content, etc.);

• The program is applied to another target group (content-related validity only).

Validity estimation procedures

This Standard describes the minimum requirements for AC validity: the estimation of content-related and construct-related validity. Full estimation of all types of validity requires compliance with accepted professional validation standards and should be performed by trained professionals.

To establish content-related validity it is necessary that a group of SMEs gives a written confirmation that AC simulation exercises, behavioral indicators and competencies reflect the most essential tasks of the target job.

To establish construct-related validity two subsequent procedures examining AC program’s quality are required:

1. A group of AC experts independently estimates how adequately the specific techniques of an AC program can measure the chosen competencies.

2. A group of AC experts independently establishes the correspondence between competencies and the behavioral indicators developed for each exercise in the AC program.

Experts’ work should result in a document containing:

1. An agreed-upon Competencies by Techniques matrix

2. An agreed-upon matrix of correspondence between within-exercise behavioral indicators and competencies.

At the stage of AC program development it is also recommended to examine the concordance of observers’ scores. To do so, a group of observers should independently assess participants’ behavior in the course of AC exercises. Statistical methods should be used to examine the concordance of scores for each competency in each exercise.
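
One commonly used statistic for this purpose is Kendall’s coefficient of concordance (W). The sketch below (invented data, basic formula without tie correction) computes W for the scores that several observers gave to the same participants on one competency in one exercise; it is an illustration of the idea, not a procedure mandated by the Standard.

```python
# Hypothetical illustration: Kendall's coefficient of concordance (W) for the
# scores several observers gave to the same participants on one competency in
# one exercise. W close to 1 indicates high agreement between observers.
# Basic formula, without the correction for tied ranks.
from scipy.stats import rankdata

def kendalls_w(ratings):
    """ratings: one list of scores per observer, participants in the same order."""
    m = len(ratings)        # number of observers
    n = len(ratings[0])     # number of participants
    ranks = [rankdata(observer_scores) for observer_scores in ratings]
    rank_sums = [sum(obs[i] for obs in ranks) for i in range(n)]
    mean_rank_sum = sum(rank_sums) / n
    s = sum((rs - mean_rank_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

observer_scores = [       # rows: observers, columns: participants P1..P5 (invented)
    [3, 4, 2, 5, 3],
    [3, 4, 3, 5, 2],
    [2, 4, 2, 5, 3],
]
print(f"Kendall's W = {kendalls_w(observer_scores):.2f}")
```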

General validation principles and standards are published in Standard requirements for psychological measurement tools (Russian Psychological Society (RPS), 2012), Principles for the Validation and Use of Personnel Selection Procedures, Fourth Edition (Society for Industrial and Organizational Psychology Inc., 2003), and the Ethical Code of the Russian Psychological Society (RPS, 2012).

Customers (people making decisions on AC implementation) and participants have the right to have access to AC validation information.


6. Participants’ rights and responsibilities

6.1. Participants’ rights

• Potential AC participants have the right to obtain information about the AC in advance (see 3.1.8.);

• Potential AC participants have the right to withdraw from participation prior to the start of the AC;

• All participants have the right to have equal conditions in the course of the AC;

• All participants have the right to know the decisions made in relation to them on the basis of the AC results;

• All participants have the right to appeal against their results;

• All participants have the right to receive feedback after the AC if it is reflected in the internal documentation of the organization;

• If participant-related AC materials and results are to be used to address tasks other than those announced in advance, participants should be informed of this and their permission should be sought. In this sense, participants have the right to preserve confidentiality of information about them.

6.2. Participants’ responsibilities

• During the AC, participants must follow rules of conduct established by the facilitator during briefing;

• The AC program and all the accompanying materials are intellectual property. Participants do not have the right to copy, publish or pass these materials to third parties.

Glossary

AC behavior rating scales - rules for drawing correspondence between overt behavioral responses and behavioral indicators. Ordinal scales are used in ACs for behavior assessment.

AC program - a document that sets standards for carrying out specific AC events. It includes:

• Description of competencies and their rating scales;

• Competencies by Techniques matrix;

• Description of assessment techniques including simulation exercises;

• AC working plan.

Assessment Center (AC) - a complex method of estimating potential success within a job (professional success) that includes a set of various techniques and is based on participants’ behavioral assessment by a group of expert observers during simulation exercises (for greater detail see 2.2.).

Behavioral indicator - typical (stable and regularly occurring) pattern of successful or unsuccessful behavior. A group of behavioral indicators comprises the content of a competency. Two types of behavioral indicators are differentiated in the Standard (see Figure 1):

1. Indicators identified in the course of job analysis;

2. Indicators developed for specific AC simulation exercises. They are developed on the basis of behavioral indicators of the first type and used in evaluation forms.

Competency (dimension and/or task) - a group of behavioral indicators associated with professional success. Grouping of behavioral indicators is done on the basis of their essential similarity/difference.

Data integration - the process of aggregation of individual expert scores.


Exercise - see Simulation exercise.


Feedback - presenting final assessment results to customers and participants with the aim of ensuring acceptance of AC results.

Job analysis - collection of information about key tasks and behavioral indicators of the target job. Obtained information serves as the basis to determine the set of competencies needed to perform the job successfully. It is also used to select or develop assessment techniques.

AC key roles and responsibilities

• Administrator organizes the logistical aspects of AC procedures;

• Facilitator is responsible for AC delivery and organization of data integration procedures;

• Designer creates the AC program;

• Observer observes, records, classifies and evaluates AC participants’ behavior;

• Developer creates AC simulation exercises;

• Role player acts as a participant’s partner in interactive simulation exercises. Either specially trained actors or observers who took a special training course can be role players;

• Participant - individual whose behavior is assessed during an AC.

ORCE - sequence of actions of an observer in the course of simulation exercises, the process of observation (O), recording (R), classifying (C), and evaluation (E) of behavior. ORCE is related to independent expert assessment.

Overt behavior - AC participant’s verbal or nonverbal behavior accessible to observers’ perception without distortion or information loss.

Overt behavioral response - a behavioral pattern of accomplishing a professional task that is accessible to direct observation and objective recording. Two types of overt behavioral responses are distinguished in the Standard (see Figure 1):

1. Occurring in the real-life target job;

2. Demonstrated during an AC.

Performance indicator - estimate of professional success independent of AC results and used as a reference point in the professional environment.

Simulation exercise - a business activity that enables reproduction (simulation) of most essential aspects of the target job.

Validity of AC - relevance and suitability of using an AC program in certain circumstances.

Figure 1. Behavioral indicators and overt behavioral responses.
