
A study of uncertainty of expert measurement results in the quality management system

Since the quality of measuring in international practice is assessed by uncertainty of the results, and an apparatus for its calculation in the area of expert measurement has not been developed yet, the study focuses on the methods of estimating uncertainty of expert measurement results. The authors have conducted analytical research on the sources of uncertainty of expert measurement results, among which the main ones singled out are: imperfection of experts, wrong choice of their number, and assessment conditions. The system of expert quality indices and the methods of their identification are suggested in the article. It enables making the right choice of the optimum methods of estimating the expert quality indices in any concrete case. The expert assessment of the significance of student activity components with regard to their uncertainty calculation has proved that the most important component is "study activity", and the least important one is "social activity". The suggested recommendations for standardizing the specialist experts' quality indices suggest setting the lower limits of the admissible values. It allows normalizing their characteristics and optimizing the process of their attestation and hereby ensures coherence in expert measurements.


With the aim of ensuring the reliability of the results of quality assessment and control of products, services, processes, knowledge and other objects, a procedure for calculating the uncertainty of measurement results obtained by expert methods is proposed. The procedure has been tested by studying the uncertainty of expert measurements in the quality management system of higher education. Recommendations for standardizing the characteristics of quality assessment personnel have been formulated.

Keywords: uncertainty estimation, expert measurement results, expert quality, standardization recommendations

UDC 006.015.5; 621.317

DOI: 10.15587/1729-4061.2016J1607

A STUDY OF UNCERTAINTY OF EXPERT MEASUREMENT RESULTS IN THE QUALITY MANAGEMENT SYSTEM

T. Bubela, Doctor of Technical Sciences, Department of Metrology, Standardization and Certification*
E-mail: paholuk@ukr.net

M. Mykyychuk, Doctor of Technical Sciences, Professor, Director, Institute of Computer Technologies, Automation and Metrology*
E-mail: mykolamm@ukr.net

A. Hunkalo, PhD, Associate Professor, Department of Metrology, Standardization and Certification*
E-mail: allagunkalo@ukr.net

O. Boyko, PhD, Associate Professor**
E-mail: oxana_bojko@ukr.net

O. Basalkevych, Assistant**
E-mail: elenelf22@gmail.com

*Lviv Polytechnic National University
Bandera str., 12, Lviv, Ukraine, 79013

**Department of Medical Informatics
Danylo Halytsky Lviv National Medical University
Pekarska str., 69, Lviv, Ukraine, 79010

1. Introduction

The process of determining the quality indices (QIs) of products and services is accompanied by uncertainties provoked by different causes that can essentially affect the final assessment of the object quality. To secure a unified QI assessment, it is necessary to establish rigid requirements for the accuracy of calculating those indices, which are often derived from the application of expert methods.

The quantitative estimate of measurement accuracy is the uncertainty of the measurement results. A well-established apparatus [1, 3] for calculating the uncertainty of QI measurement results is widely used today. Particularly, as is mentioned in [2], "Due to the expanding range of the use of measuring data processing and the possibility of the use of new instruments and procedures, the problem of the assessment of the accuracy of the experimental determination of statistical characteristics (particularly, the arithmetic mean) of correlated data was and still remains topical." Besides, the methods of uncertainty evaluation are rapidly developing and adapting to specific computational tasks, as in [3]: "This practical case shows an example of how a powerful tool can be metrology for engineers, not only for validation of models but also providing better knowledge of the parameters that have a greater influence on both the model and the experiment. The designer is thus aware of the aspects that can be improved to minimize the difference between the model and experiments or the limits he cannot surpass in using the model according to the design criteria." Nevertheless, many QIs are identified by applying expert measurement methods, which have their own peculiarities.

High reliability of the results of such measurements is a prerequisite for the effective functioning of the quality management system. Therefore, the development of a methodical apparatus to determine and assess the uncertainty of expert measurements is of topical significance.

2. Analysis of previous studies and statement of the problem

There are no generally accepted recommendations for managing the uncertainty of expert evaluations.


Some authors suggest evaluating separate properties of expert measurement, which cannot be treated as an exhaustive estimate of its quality. One of the most widely used properties is the coefficient of coordination of the experts' opinions [4, 5], so the dominating criterion of expert group quality estimation is the degree of the reached consensus, which is the basis of important managerial decisions [6]. Other authors [7] find the main attributes of expert estimation in the "completeness and speed of its conduction as well as in actualization of partial statements and conclusions," which does not reflect all the components that influence the quality of expert evaluation.

Nowadays, expert systems that are based on experts' knowledge and experience are being developed [8], which again highlights the importance of the accuracy (measurement uncertainty) of research results and of such systems' functioning.

The fact that an expert quality assessment problem has not been tackled yet is obvious from the absence of a systematic approach to its solution. To identify the expert quality, above all, means to know the properties with which it is associated. In scientific literature [5, 9, 10], a limited number of quality options is given without regard to their stipulation and interaction. For example, in one study, competence, impartiality and objectivity stand for the main properties; the other studies recognize just one or two of them. As to the expert's competence per se, it is frequently defined as a reliability and rationale of the applied indices, or as some informative content and unfaltering judgement. In addition to those most frequently mentioned properties, it is also recommended to take into account the expert's participation interest, ability to operate a relationship scale as well as attention to a number of scaled gradations [11]. Results of expert measurement are often used not only for factual estimations but also for predicting certain phenomena. In the latter case, the prediction accuracy and estimation impartiality are determined with the help of statistical methods and regression analysis [12], whose classical usage is complicated with uncertainties. Therefore, when determining the estimates, the authors of studies [13-16] prefer to apply fuzzy mathematics, in particular, fuzzy regression models, which is just a partial solution to the problem of assuring the needed reliability and accuracy of expert research.

It is worth noticing that some scholars suggest estimating the accuracy of expert measurement by contrasting its results against those gained by other methods. Thus, the author of [17] suggests comparing the findings of expert and sociological research of the same objects. As a last resort, this approach can be applied, but it does not provide an impartial estimate of the accuracy and uncertainty of expert measurement results.

Thus, absence of a method for assessing expert measurement quality and its results' uncertainty with regard to modern international requirements necessitates further research in this direction.

3. Research objectives and tasks

The intended objective is to work out a methodical approach to the application of the uncertainty concept [1] in assessing the quality of expert measurements. To reach this goal, the following tasks were set and solved:

- to analyze the sources of the expert measurement result uncertainty;

- to suggest methods of calculating the uncertainty of expert measurement results;

- to recommend and substantiate standards for the quality indices of experts who perform quality assessment;

- to test the suggested methods in estimating the uncertainty of the results of expert measurement of importance degrees of student activity components in a higher education institution for the purpose of assuring an efficient functioning of its quality management system.

4. Materials and methods of the research on uncertainty of expert measurement results

4. 1. An analytical study of the sources of uncertainty of expert measurement results

To stipulate the authorial methods of estimating the uncertainty of expert measurement results, an analytical study of its sources [4-11] has been primarily conducted to stratify the expert measurement process and to reveal the main reasons for any emerging uncertainty related to the experts' imperfection, an undue choice of their number, and the conditions of making the assessment (Fig. 1).

Expert imperfection. Based on the analysis of the special literature [4-20] and on previous experience, we propose classifying expert quality indices into four groups: competence, motivation, impartiality, and reliability (Fig. 2); through these indices, the degree of an expert's imperfection that leads to uncertainty of an expert measurement result can be interpreted. For these indices, we have developed recommendations on how to choose the methods of their determination (Fig. 2). An expert's competence should extend both to the object of quality assessment (professional competence) and to the evaluation methodology (qualimetry competence).

Fig. 1. Sources of expert measurement uncertainties

Professional competence covers knowledge of the following aspects: the evaluated object development history (alteration in its properties and quality indices); the object creation process (research, design, and manufacturing); the QI values of various object modifications, including the best analogues; development perspectives; scientific research results and patent materials leading to the improvement of quality properties and indices; and consumer needs, their conditions and nature.

Fig. 2. Systems of expert QIs and methods of their defining

Qualimetry competence provides: the expert's clear understanding of the approach towards quality assessment; efficient use of quality assessment methods, especially those of expert nature; and abilities to apply different types of estimation scales while distinguishing between a number of gradations. Extra information necessary to improve qualimetry competence could be communicated to an expert in the process of the preparation work. However, a comparatively short term of the preparation stage complicates perception, which in its turn leads to a decrease in the expert's efficiency. The expert's interest in the assessment results depends on a number of factors: the degree of the expert's being overloaded with his or her main work, regularly combined with the mentioned assessment; the possibilities of using the obtained results; the assessment goals; the nature of conclusions possible on gaining quality assessment results; and the individual expert's peculiarities.

As to impartiality, it could be regarded as an ability to consider only information sufficient for evaluating the satisfaction of the needs for a product, service or process. Partiality of an expert consists in an overestimation or underestimation of the product quality on the basis of factors unrelated to quality itself, such as impossibility of resisting most experts' opinion due to the lack of self-confidence (conformism). Partiality of an expert could be revealed also in another situation. The matter is that expert evaluation refers to the type of a product (for example, weight coefficients and quality indices assessment) or to its concrete pattern (organoleptic estimation of aesthetic and ergonomic quality indices). Thus, partiality of an expert tends to be revealed mainly in the second case, during estimation of the real patterns - for instance, when an expert overestimates the aesthetical and ergonomic indices of a product manufactured by an enterprise with which the expert has certain dealings.

The expert's reliability degree is judged by the stability of his or her opinion. Therefore, its extent can be estimated through the reproducibility of the results on the same product quality estimation in time (during several rounds of evaluations made periodically).

Methods of the expert QI evaluation. There exist many methods, among which the following can be distinguished:

- heuristic, in which estimates are made by a person (self-estimation, mutual experts' estimations, and estimations by assessment organizers); they are supposed to be used for identification of the experts' competence level and interest degree;

- experimental, in which estimates are obtained as a result of special experiments conducted by experts; it is expedient to consider them for identifying an expert's competence and reliability level;

- statistical, in which estimates result from elaboration of experts' opinions on the considered object as well as their comparison against an average expertise; they are supposed to be used for identifying an expert's impartiality degree;

- documental, in which estimates are based on the analysis of documentary records of experts; they could be used for identifying the experts' competence degree.

The regarded methods could be combined in different ways, and the resulting estimates might be pooled while considering their weights. Moreover, it is possible to obtain a combined estimate Ccomb.

Further, it seems to be relevant to study the methods of experts' QI identification in terms of their application correctness.

Heuristic evaluation is based on the formation of:

(a) a self-estimate (Qse), when an expert independently evaluates his or her professional competence, i. e. the level of different sides of familiarity with the object, involving a questionnaire [21]. The degree of an expert's self-estimate Cse could be identified as a sum of the expert's self-estimation parameters, with considering their weight coefficients. Consequently, the degree of an expert group self-estimation could be determined as an average self-estimate of all group members;

(b) mutual estimates (Qmt), when in order to decrease impartiality, the competence estimate of each expert Cmt could be determined as an average of grade points attributed by the other experts;

(c) the assessment organizers' estimates (Qoe), when the characteristic of an expert's interest in partaking in the assessment and his or her concentration during an interview is provided in a quantified form. It is recommended to represent the parameter values Cse, Cmt, and Coe on a 10-point scale.

Experimental estimates are gained as the results of special tests on the expert's proficiency:

(a) the expert's competence (Qec), when the level of theoretical knowledge and practical skills is determined;

(b) the expert's disposition to conformism, which can be determined by a «false group» method: the person passing the test and a group of false experts who are in agreement with the experimenter are shown the same object of interest. The level of his or her proximity to the collective opinion characterizes the readiness for conformism. To quantify the expert's conformism level, we can use the expression:

C_cl = P_indp − P_grp,    (1)

where P_indp and P_grp are the respective numbers of the expert's mistakes during the independent judgement practice and those made collectively with the false group;

(c) the results' reproducibility (Qrp). The reproducibility estimate (on the 10-point scale) testifies to the reliability degree of a certain expert. It can be based on the Spearman coefficient of rank correlation between two identical expert rounds (e. g., the ranking of weight coefficients), reproduced by each j-th expert:

C_(rp)j = 10·r_j,    (2)

r_j = 1 − 6·Σ_{i=1}^{n} d_ij² / (n³ − n),    (3)

where d_ij is the difference between the ranks attributed by the j-th expert (C_expert is the number of experts) to the i-th weight coefficient (n is the number of the evaluated objects) in the first and the second questioning rounds.

Using the method of deviation from the average (one round of an expert assessment), the expression for the calculation of the r could be as follows:

r_j = 1 − |M_ij − M̄_i| / M̄_i,    (4)

where M_ij is the value of a weight coefficient for the i-th object derived by the j-th expert; M̄_i is the average value of the weight coefficients calculated on the basis of the estimates of all the experts for the given object.

As compared to heuristic estimation, the method requires extra time for repeating interviews and extended calculation, but it seems to be more impartial.
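As an illustration of expressions (2) and (3), the following minimal Python sketch (not part of the original study; the two ranking rounds are hypothetical) computes the Spearman coefficient for one expert and converts it into the 10-point reproducibility score:

```python
# Minimal sketch (not from the paper): reproducibility score of one expert
# from two identical ranking rounds, per expressions (2) and (3).

def spearman_reproducibility(ranks_round1, ranks_round2):
    """Spearman rank correlation r_j between two ranking rounds of one expert."""
    n = len(ranks_round1)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_round1, ranks_round2))
    return 1.0 - 6.0 * d_squared / (n ** 3 - n)        # expression (3)

def reproducibility_score(ranks_round1, ranks_round2):
    """10-point reproducibility (reliability) score C_(rp)j, expression (2)."""
    return 10.0 * spearman_reproducibility(ranks_round1, ranks_round2)

# Hypothetical ranks given by one expert to 5 weight coefficients in two rounds.
round1 = [1, 2, 3, 4, 5]
round2 = [1, 3, 2, 4, 5]
print(reproducibility_score(round1, round2))           # 10 * (1 - 6*2/120) = 9.0
```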

Statistical estimation is based on evaluating the expert's opinion deviation from the average viewpoint of the group of experts and on the use of:

(a) a method of ranking the estimated values (calculation of a concordance coefficient, i.e. of the coordination of the experts' opinions) (Qvr), when the conventional true value is the average expert estimate. Correspondingly, the lower the deviation of an expert's individual opinion from the collective one, the larger the concordance index of the experts' opinions. The coefficient of concordance W for C_expert experts is determined as:

W = Σ_{i=1}^{n} d_i² / ( (1/12)·C_expert²·(n³ − n) − C_expert·Σ_{j=1}^{C_expert} T_j ),    (5)

here

d_i = S_i − (1/n)·Σ_{i=1}^{n} S_i,   S_i = Σ_{j=1}^{C_expert} R_ij,   T_j = (1/12)·Σ_{l=1}^{L} (t_l³ − t_l),

where L is the number of groups of tied ranks; t_l is the number of tied ranks in each group; the value R_ij denotes the rank assigned by the j-th expert to the i-th object.

Since 0 ≤ W ≤ 1, W = 0 means that there is no concord at all among the C_expert experts, and, on the contrary, W = 1 represents complete agreement. The method requires considerable time to conduct the whole set of calculations, for example, in comparison with an a priori heuristic estimation. While estimating the coordination of the experts' opinions, it is important to determine the extent to which each expert influences the generalized concordance of the group. For this purpose, the experts are excluded from the research one at a time, and a concordance coefficient W_j is calculated without considering the excluded expert's opinion. If the exclusion of an expert's opinion increases W, this is viewed as a negative characteristic of that expert; if W falls, the estimation is positive. To convert W_j into a 10-point system, it is recommended to accept for any expert that: if W_j = W, the expert receives 5 points; if W_j − W = +max (the maximum of the positive values of the difference W_j − W), the expert receives 5 − 5 = 0 points; if W_j − W = +min (the minimum of the positive values), the expert receives 5 − 1 = 4 points; the intermediate values between +max and +min are converted into proportional points (a numerical sketch of the W calculation and of this exclusion check is given after this list);

(b) determination of the quantitative expression of the estimated values (Qqe) based on the notion of a distance between the estimates. This method does not require considerable expenses;

(c) impartiality estimates (Qimp), for which it is necessary to develop special methods of evaluating the experts' impartiality; however, estimates of a deviation from the average are also made in this respect.
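A minimal Python sketch (not from the paper; the ranks are hypothetical) of expression (5) with the tie correction T_j, together with the per-expert exclusion check described in item (a) above:

```python
# Minimal sketch (not from the paper): concordance coefficient W, expression (5),
# with the tie correction T_j, and the per-expert exclusion check.
from collections import Counter

def kendall_w(ranks):
    """ranks[j][i] is the rank R_ij given by expert j to object i (ties allowed)."""
    m = len(ranks)               # C_expert, the number of experts
    n = len(ranks[0])            # n, the number of evaluated objects
    s = [sum(ranks[j][i] for j in range(m)) for i in range(n)]        # S_i
    d2 = sum((s_i - sum(s) / n) ** 2 for s_i in s)                    # sum of d_i^2
    t_corr = sum((1.0 / 12.0) * sum(c ** 3 - c for c in Counter(r).values())
                 for r in ranks)                                      # sum of T_j
    return d2 / ((1.0 / 12.0) * m ** 2 * (n ** 3 - n) - m * t_corr)

def w_without_each_expert(ranks):
    """W recomputed with each expert excluded in turn: an increase of W after
    exclusion characterizes the excluded expert negatively."""
    return [kendall_w(ranks[:j] + ranks[j + 1:]) for j in range(len(ranks))]

# Hypothetical ranks of 4 objects by 3 experts.
ranks = [[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]]
print(kendall_w(ranks))              # ~0.78
print(w_without_each_expert(ranks))
```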

Documental estimates (Qdoc) are based on the analysis of documented impartial data on expert characteristics and can be used in line with other methods of expert QI determination. Uncertainty in this case can be related to the partial availability of information on the expert's merits. C_(doc)j is the coefficient of the documental estimate of the j-th expert; it could be evaluated as a sum of the parameters of the documental expert estimation with regard to their weight coefficients. Then the degree of documental estimation of an expert group is determined as the average value of the documental estimates of all experts in the group. As in the previous cases, it is expedient to use a 10-point scale.

The results of analyzing the methods of expert QI estimation are given in the table below, where the juxtaposition of the methods is conducted through the evaluating criteria of their advantages, disadvantages, and usability degree.

The data in Table 1 help make the right choice of the optimum methods of expert QI estimation while deducing a combined evaluation. In a general case, provided that all the expert QIs are considered, the combined quality index of the j-th expert could be represented according to the expression:

Q_j = (Q_se,j + Q_mt,j + Q_oe,j + Q_ec,j + Q_rp,j + Q_vr,j + Q_qe,j + Q_imp,j + Q_doc,j) / q,    (6)

where q is the number of the components considered during calculating the combined quality index of the j-th expert.

Table 1
Comparison of the methods of expert QI assessment

Method | Advantages | Disadvantages | Maximal application
Heuristic (Qse, Qmt and Qoe) | high technological indices of the method preparation and realization, in particular, low time and labour consumption as well as a substantial informative content | partiality (subjectivity) of judgements | evaluation of an expert's competence and interest degree
Experimental (Qec and Qrp) | a sufficient level of impartiality, i.e. lower uncertainty of an evaluation result, which could be estimated by a standard deviation | long-lasting realization and labour-consuming processing of the obtained results | evaluation of an expert's competence level and reliability
Statistical (Qvr, Qqe and Qimp) | high impartiality | high labour and time consumption for the preparation work and the method realization | evaluation of an expert's impartiality degree
Documental (Qdoc) | impartiality, substantiation, and a high technological realization of the method | the results of documental evaluation depend on an expert's competence field | an expert's competence evaluation

If any quality index component could be reflected in points (on a 10-point scale), then 1 < Q_j < 10 (unless it is assumed that Q = Σ_{j=1}^{C_expert} Q_j = 1).
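A minimal sketch of expression (6) (not from the paper): with the four component indices determined for Expert 1 in Table 3 below, it reproduces the combined index of 8.375; the dictionary keys are only labels for the considered components:

```python
# Minimal sketch (not from the paper): combined quality index Q_j of one expert,
# expression (6), as the mean of the component indices that were actually obtained.

def combined_quality_index(components):
    """components: the available component indices on the 10-point scale
    (any subset of Qse, Qmt, Qoe, Qec, Qrp, Qvr, Qqe, Qimp, Qdoc)."""
    q = len(components)                       # the number of considered components
    return sum(components.values()) / q       # expression (6)

# The four components of Expert 1 from Table 3: impartiality, reliability,
# competence and interest degrees.
expert_1 = {"impartiality": 7.0, "reliability": 10.0, "competence": 6.5, "interest": 10.0}
print(combined_quality_index(expert_1))       # (7.0 + 10.0 + 6.5 + 10.0) / 4 = 8.375
```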

Wrong choice of the number of experts. To assess and eliminate uncertainty related to an incorrect choice of the number of experts, it is important to consider the statements of probability theory (namely, the expression for the confidence level of an error) [22, 23] and to represent the evaluation of the expert number C_expert at a given confidence probability P within the range of values inherent in metrology, namely from 0.9 to 0.99, with the error Δ. Using the expression for calculating a confidence interval, the formula for calculating C_expert, which is a prototype of the number of observations, could be written as follows:

C_expert = t²·S² / Δ²,    (7)

where t is the Student's coefficient for the given confidence probability; S is the standard deviation in the quality assessment.

If S is unknown (for example, the assessment is made for the first time), the error Δ is supposed to be set prior to the evaluation as part of S by the following ratio:

Δ₁ = Δ / S.    (8)

Then expression (7) acquires the form of

C_expert = t² / Δ₁²,    (9)

and, consequently,

Δ₁ = t / √C_expert.    (10)

The values of the errors Δ₁, calculated according to (10) for a different number of experts C_expert and the confidence probability of the expert estimation P, are tabulated in Table 2. It seems obvious that, starting from the number of experts equal to 7, the given estimation error Δ does not exceed S and constitutes its part. Thus, the minimum number of experts should not be less than 7.

Table 2
Error values ±Δ₁ for a different number of experts C_expert and the confidence probability of the expert estimation P

P, in % \ Number of experts | 2 | 3 | 4 | 5 | 7 | 10 | 15 | 20 | 30 | 40
90 | 4.50 | 1.75 | 1.80 | 1.00 | 0.73 | 0.58 | 0.45 | 0.39 | 0.31 | 0.26
95 | 8.98 | 2.48 | 1.59 | 1.24 | 0.93 | 0.71 | 0.55 | 0.47 | 0.37 | 0.31

Thus, the data given in Table 2 should be used for calculating a Type B uncertainty based on an insufficient number of experts; this value should be considered as a component of the total standard uncertainty of a QI expert estimation result.
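The following Python sketch (not from the paper) evaluates expressions (9) and (10) numerically using SciPy's Student's t quantile; taking C_expert − 1 degrees of freedom is an assumption, but it reproduces values close to the P = 95 % row of Table 2 and returns 7 as the smallest number of experts for which Δ₁ does not exceed 1 (i.e., Δ ≤ S):

```python
# Minimal sketch (not from the paper): expressions (9) and (10); degrees of
# freedom C_expert - 1 for the Student's coefficient is an assumption.
from math import sqrt
from scipy.stats import t

def delta1(P, c_expert):
    """Relative error Delta_1 = t / sqrt(C_expert), expression (10)."""
    return t.ppf((1 + P) / 2, c_expert - 1) / sqrt(c_expert)

def required_experts(P, delta_over_S):
    """Smallest C_expert with Delta_1 <= Delta/S, i.e. expression (9) solved
    iteratively because the Student's coefficient depends on C_expert."""
    c = 2
    while delta1(P, c) > delta_over_S:
        c += 1
    return c

print(round(delta1(0.95, 5), 2))     # ~1.24, close to the P = 95 % row of Table 2
print(required_experts(0.95, 1.0))   # 7 experts are enough for Delta_1 <= 1
```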

Conditions of assessment. Since special rooms are provided for a qualimetry assessment, their state and climate characteristics should comply with health and safety regulations. Thus, while making an expert assessment, the expert remains in conditions of limited mobility within a closed space, which can unfavorably influence the final estimate and provoke uncertainty due to the following factors:

- deviation from the normative characteristics of the microclimate in the working area;

- increased levels of noise and vibration;

- insufficient lighting of the working area;

- absence or lack of natural light;

- increased light brightness;

- increased or decreased air humidity;

- increased or decreased pressure;

- excessive use of labour and time.

4. 2. Methods of calculating the uncertainty of QI expert measurement results

To calculate uncertainty caused by the experts' quality and quantity, it is recommended to involve both uncertainty types - A and B. Particularly, a Type A uncertainty should be calculated through a standard deviation of the experts' estimates from the average both for equal-point (a prototype of convergence for equal-point observations in metrology) and unequal-point (a prototype of reproducibility for unequal-point observations in metrology) expert measurements. So an expert's estimate is regarded as a prototype of an observation result received through measuring.

Convergence of the experts' estimates in the case of a certain evaluated object could be calculated under condition that expert QIs are practically the same. Then the Type A uncertainty for the i-th evaluated object is calculated according to the formula:

u_Ai = √( Σ_{j=1}^{C_expert} (x_ij − x̄_i)² / (C_expert·(C_expert − 1)) ),    (11)

where x_ij is a result of expert estimation, i. e. an estimate of the j-th expert for the i-th object; x̄_i is an average value of the estimates of all C_expert experts for the i-th object.

Correspondingly, the standard uncertainty of an expert group evaluating a series of objects of the same designation is calculated as follows:

u_A = √( (1/n)·Σ_{i=1}^{n} u_Ai² ),    (12)

where n is the quantity of objects studied by an expert group.
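A minimal sketch of expressions (11) and (12) (not from the paper; the five estimates are hypothetical, and treating (12) as the root mean square of the per-object values follows the reading adopted above):

```python
# Minimal sketch (not from the paper): Type A standard uncertainty for equal-point
# expert estimates, expressions (11) and (12).
from math import sqrt

def type_a_uncertainty(estimates):
    """u_Ai for one evaluated object; estimates are the x_ij of all experts."""
    c = len(estimates)                                   # C_expert
    mean = sum(estimates) / c
    return sqrt(sum((x - mean) ** 2 for x in estimates) / (c * (c - 1)))  # (11)

def group_uncertainty(per_object_u):
    """Standard uncertainty of the expert group over n objects, expression (12),
    taken here as the root mean square of the per-object values."""
    n = len(per_object_u)
    return sqrt(sum(u ** 2 for u in per_object_u) / n)

# Hypothetical 10-point estimates of one object given by five experts.
u1 = type_a_uncertainty([9, 10, 9, 8, 10])
print(u1)                                                # ~0.37
print(group_uncertainty([u1, 0.25, 0.40]))               # group-level value
```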

Under the condition that combined expert QIs are not the same, we deal with unequal-point observations [24] for which the estimation of each expert has its own weight coefficient Qj that is calculated according to expression (6). Then formula (11) is transformed into the expression:

u_Ai = √( Σ_{j=1}^{C_expert} Q_j·(x_ij − x̄_i)² / (Q·(C_expert − 1)) ),    (13)

where

x̄_i = Σ_{j=1}^{C_expert} Q_j·x_ij / Q   and   Q = Σ_{j=1}^{C_expert} Q_j,    (14)

where x_ij is a result of expert estimation, i. e. an estimate of the j-th expert for the i-th object; x̄_i is an average value of the estimates of all C_expert experts for the i-th object; Q_j is a weight coefficient of the j-th expert.

For the parameters characterizing the conditions of making an assessment, there are standardized indices the deviation from which is a reason for the Type B uncertainty.

Thus, the total standard uncertainty of expert measurement u_c is calculated as follows:

u_c = √( Σ u_Bj² + Σ u_Ai² ),    (15)

Then the extended uncertainty U is calculated as:

U = c·u_c,    (16)

where c is a coverage coefficient for the given confidence probability P.

Based on the value of the extended uncertainty, a decision is made regarding the accuracy of the expert research and the need for an additional study; the results of the uncertainty evaluation can also be used for comparing several similar studies in terms of their reliability.
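The following sketch (not from the paper) chains expressions (13) through (16): a weighted Type A uncertainty for one object, the total standard uncertainty combining it with Type B components, and the extended uncertainty. The estimates, the weights and the coverage coefficient of 2.02 (roughly the Student's coefficient for P = 95 % and a group of about 40 experts) are illustrative assumptions:

```python
# Minimal sketch (not from the paper): weighted (unequal-point) Type A uncertainty,
# expressions (13)-(14), combined into the total and extended uncertainty,
# expressions (15)-(16).
from math import sqrt

def weighted_type_a(estimates, weights):
    """u_Ai for one object from estimates x_ij and expert weights Q_j (13)-(14)."""
    q_total = sum(weights)                                        # Q, expression (14)
    x_bar = sum(q * x for q, x in zip(weights, estimates)) / q_total
    c = len(estimates)                                            # C_expert
    return sqrt(sum(q * (x - x_bar) ** 2 for q, x in zip(weights, estimates))
                / (q_total * (c - 1)))                            # expression (13)

def total_uncertainty(u_a_components, u_b_components):
    """Total standard uncertainty u_c, expression (15)."""
    return sqrt(sum(u ** 2 for u in u_a_components) +
                sum(u ** 2 for u in u_b_components))

def extended_uncertainty(u_c, coverage):
    """Extended uncertainty U = c * u_c, expression (16)."""
    return coverage * u_c

# Hypothetical estimates of one object by four experts with weights from (6).
x = [9, 10, 8, 9]
q = [8.4, 9.1, 7.5, 8.8]
u_a = weighted_type_a(x, q)
u_c = total_uncertainty([u_a], [0.1])        # one assumed Type B component
print(extended_uncertainty(u_c, 2.02))
```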

5. Results of the research of expert measurement uncertainty

The methods used for estimating the uncertainty of expert measurement results were tested to ensure efficient functioning of a quality management system in the higher education institution. The expert research was conducted on the significance degrees of components of such student activities as study combined with scientific, methodical, and social work as well as self-improvement. Such investigations in higher education institutions are necessary to modernize the processes of quality management in the spheres of educational services; therefore, it is very important to estimate the uncertainty of their evaluation results. A questionnaire was developed for this study, and 40 teachers were involved as experts in expert measurement. The results of the questionnaire processing reflect the rating of the importance of student activity components in points on a 10-point scale (Fig. 3).

Fig. 3. The measured values of importance degrees of student activity components: 9.6 is study activity, 8.1 is self-improvement, 7.7 is scientific activity, 5.0 is methodical activity, and 4.7 is social activity

To calculate the absolute values of the standard uncertainty of this expert research (Table 4) according to the methods suggested in this study, the expert weight coefficients Qj (Table 3) were calculated on a 10-point scale. These coefficients were introduced into the formula for the calculation of the Type A standard uncertainty, (13) and (14). The weight coefficients were statistically determined while considering such components as: the degree of impartiality according to the concordance coefficient (5); the degree of confidence with regard to the reproducibility of expert evaluation in time, which was made in two rounds, (2) and (3); the degree of competence, defined by the documental method on the basis of objective data; and the degree of the expert's interest, established heuristically by the research organizers.

The relative values of the uncertainties in Tables 4, 5 were calculated by the division of the correspondent values of absolute uncertainties by the measured values of importance degrees of student activity components (Fig. 3); they are represented in percentage.
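As a worked check of this step (not part of the study), the relative values in Table 4 are recovered by dividing the absolute uncertainties by the measured importance degrees from Fig. 3:

```python
# Sketch of the relative-uncertainty calculation behind Tables 4 and 5:
# absolute standard uncertainty divided by the measured importance degree (Fig. 3).
measured = {"study": 9.6, "self-improvement": 8.1, "scientific": 7.7,
            "methodical": 5.0, "social": 4.7}
absolute_u_a = {"study": 0.2770, "self-improvement": 0.1937, "scientific": 0.2130,
                "methodical": 0.5487, "social": 0.6046}

relative_u_a = {k: 100.0 * absolute_u_a[k] / measured[k] for k in measured}
print(relative_u_a)   # e.g. study: ~2.88 %, social: ~12.86 %, as in Table 4
```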

Table 3
Combined expert QIs (weight coefficients Qj) and their components

Expert | Impartiality degree, in points | Reliability degree, in points | Competence degree, in points | Degree of interest, in points | Combined expert quality index Qj, in points

Expert 1 7.0 10.000 6.5 10.0 8.375

Expert 2 7.3 9.917 7.3 8.0 8.129

Expert 3 10.0 10.000 10.0 10.0 10.000

Expert 4 9.0 10.000 7.3 10.0 9.075

Expert 5 7.0 9.917 7.2 8.0 8.029

Expert 6 6.0 10.000 8.0 10.0 8.500

Expert 7 7.0 9.917 6.3 8.0 7.804


Expert 8 9.0 9.917 6.5 8.0 8.354

Expert 9 6.0 9.75 9.2 6.0 7.738

Expert 10 6.0 9.834 7.2 7.0 7.508

Expert 11 6.0 10.000 7.2 10.0 8.300

Expert 12 6.0 9.834 6.5 7.0 7.334

Expert 13 7.0 9.917 6.5 8.0 7.854

Expert 14 9.0 9.917 10.0 8.0 9.229

Expert 15 9.0 9.917 6.5 8.0 8.354

Expert 16 7.0 9.834 8.0 7.0 7.958

Expert 17 6.0 9.917 6.5 8.0 7.604

Expert 18 7.3 9.834 3.5 7.0 6.908

Expert 19 7.3 9.917 3.2 8.0 7.104

Expert 20 9.0 9.917 6.3 8.0 8.304

Expert 21 7.3 9.5 7.2 5.0 7.250

Expert 22 9.0 9.917 3.8 8.0 7.679

Expert 23 6.0 10.000 9.2 10.0 8.800

Expert 24 9.0 9.917 7.2 8.0 8.529

Expert 25 7.0 9.917 6.5 8.0 7.854

Expert 26 10.0 10.000 2.5 10.0 8.125

Expert 27 7.0 9.917 9.2 8.0 8.529

Expert 28 6.0 9.917 6.5 8.0 7.604

Expert 29 7.3 9.917 4.2 8.0 7.354

Expert 30 9.0 9.917 6.5 8.0 8.354

Expert 31 7.3 9.917 5.5 8.0 7.679

Expert 32 7.3 10.000 7.0 10.0 8.575

Expert 33 10.0 9.917 4.5 8.0 8.104

Expert 34 7.0 9.834 5.6 7.0 7.358

Expert 35 7.0 9.917 6.0 8.0 7.729

Expert 36 10.0 10.000 6.5 10.0 9.125

Expert 37 7.3 10.000 6.5 10.0 8.450

Expert 38 7.0 9.917 4.8 8.0 8.529

Expert 39 7.3 9.917 9.2 8.0 8.604

Expert 40 7.0 9.917 3.5 8.0 7.104

Table 4

Standard uncertainty of the Type A of the results of expert measurements of the importance degrees of student activity components

Name of the student activity component | Absolute value of the standard uncertainty u_A, in points | Relative value of the standard uncertainty u_A, in %

Study | 0.2770 | 2.88
Self-improvement | 0.1937 | 2.39
Scientific activity | 0.2130 | 2.77
Methodical activity | 0.5487 | 10.97
Social activity | 0.6046 | 12.86

In the course of the research, the conditions of the experiment met the established standards, and 40 experts were a sufficient number (Table 2) to support a high degree of reliability of the expert measurement results. Thus, the calculation of the extended uncertainty (Table 5) of the expert research results according to (16) was the final stage in the realization of the method.

Table 5
The extended uncertainty of expert measurement results of student activity component importance for P = 95 %

Name of the student activity component | Absolute value of the extended uncertainty U, in points | Relative value of the extended uncertainty U, in %
Study | 0.5595 | 5.82
Self-improvement | 0.3913 | 4.83
Scientific activity | 0.4646 | 5.60
Methodical activity | 1.1084 | 22.16
Social activity | 1.2213 | 25.98

The data in Table 5 prove the different uncertainty degrees in the expert measurement of the importance of the student activity components.

For the purpose of the experts' attestation, the study standardized the expert quality indices by establishing the lower limit of an admissible value. Based on the results of the expert research, these standards have been formulated and represented in Table 6.

Table 6

Recommendations for standardizing the QIs of specialist experts on quality assessment

Index title | The lower limit of the admissible value
Competence index, calculated according to the results of the self-estimation Qse, the mutual evaluation Qmt, the documental evaluation Qdoc, and the experimental evaluation Qec | 3 points
Interest index, calculated according to the results of the organizers' evaluation Qoe | 5 points
Impartiality index, calculated according to the results of the statistical data processing through the concordance coefficient Qvr for each expert | 5 points
Reliability index, calculated according to the results of the experimental testing through the coefficient of the results' reproducibility Qrp, received in several rounds | 5 points

Thus, for the purpose of a successful experts' attestation, their quality indices should be quite high. Moreover, the lower limit of the admissible values of their quality indices should not be below the correspondent limit indicated in Table 6.


6. Discussion of research results and suggestion of recommendations

The results of the expert measurement have helped establish the importance degrees of student activities. They can be ranked from the most important to the least important item, namely: study, self-improvement, scientific, methodical and social activities. For the expert measurement of the importance degree of each component, uncertainty values were calculated. The smallest uncertainty values were obtained for the components "self-improvement", "scientific work", and "study", which proves the high reproducibility of the expert measurement of these components' importance. The largest uncertainty values were revealed for the components "methodical activity" and "social activity"; for these components, it would be expedient to repeat the expert measurement with another set of experts.

Expert weight coefficients were determined to calculate the uncertainty (Table 3). It is worth noticing that the expert's impartiality degree values ranged from 6 to 10 points, the reliability degrees - from 9.5 to 10 points, the competence degrees - from 3.2 to 10 points, and the interest degrees - from 5 to 10 points.

7. Conclusion

1. The analysis of the principal stages of expert measurement has revealed the main reasons for the uncertainty of its results: the experts' imperfection, a wrong choice of the number of experts, and the assessment conditions.

2. The study has suggested the methods of calculating the uncertainty of expert evaluation results that can help adjust the process of accuracy evaluation of such parameters to international requirements (namely, to represent the results of expert measurement based on the uncertainty concept). The system of expert quality indices and the methods of expert quality evaluation have been developed in the study to help deduce the weight coefficients while calculating a Type A uncertainty of expert measurement. The method of considering a Type B uncertainty has also been suggested. It was ascertained that the reasons for its appearance are an insufficient quantity of experts and the conditions of carrying out expert measurement.

3. Recommendations have been formulated to standardize the quality indices of specialist experts by setting the lower limit of the admissible values (within a 10-point scale): namely, it has been recommended to set the lower limit at 3 points for the competence index and at 5 points for each of the interest, impartiality and reliability indices. This approach helps standardize the expert characteristics and improve the process of their attestation. Besides, standardization of expert quality indices is an important component of the coherence of expert measurements.

4. The results of the conducted expert research on the estimation of the importance level of student activity components and the calculation of the uncertainty of their evaluation according to the suggested methods are specified in the work. They have proved that the most important component of student activities is the study process, and the least significant one is social activity. Moreover, the values of the uncertainty of the expert measurement results have been calculated: the smallest value was obtained for the component "self-improvement", and the largest for the component "social activity".

The research results are supposed to be topical in all activity spheres where expert measurements are normally made, since their accuracy is crucial for the support of efficient functioning of a management system in organizations.

References

1. Guide to the Expression of Uncertainty in Measurement. Second edition [Text]. - ISO, Switzerland, 1995. - 101 p.

2. Kowalczyk, A. Standard uncertainty determination of the mean for correlated data using conditional averaging [Text] / A. Kowalczyk, A. Szlachta, R. Hanus // Metrology and Measurement Systems. - 2012. - Vol. 19, Issue 4. - P. 787-796.

3. Gutiérrez, R. An uncertainty model of approximating the analytical solution to the real case in the field of stress prediction [Text] / R. Gutiérrez, M. Ramírez, E. Olmeda, V. Díaz // Metrology and Measurement System. - 2015. - Vol. 22, Issue 3. -P. 429-442. doi: 10.1515/mms-2015-0031

4. Kondruk, N. Development of system for processing of fuzzy expert information [Text] / N. Kondruk // Managing the Development of Complex Systems. - 2014. - Vol. 18. - P. 173-176.

5. Danylkovych, A. Selecting the nomenclature of quality indicators of hybrophobized fur velour by expert method [Text] / A. Danylkovych, N. Hlebnikova, N. Omeljchenko // Eastern-European Journal of Enterprise Technologies. - 2014. - Vol. 5, Issue 3 (71). - P. 34-39. doi: 10.15587/1729-4061.2014.27613

6. Parratt, J. A. Expert validation of a teamwork assessment rubric: A modified Delphi study [Text] / J. A. Parratt, K. M. Fahy, M. Hutchinson, G. Lohmann, C. R. Hastie, M. Chaseling, K. O'Brien // Nurse Education Today. - 2016. - Vol. 36. - P. 77-85. doi: 10.1016/j.nedt.2015.07.023

7. Snytyuk, V. Optimization of the evaluation process under uncertainty based on structuring the domain and axioms unbiasedness [Text] / V. Snytyuk, G. Gnatienko // Artificial Intelligence. - 2008. - Vol. 3. - P. 217-223.

8. De Carlo, P. J. The design and development of an expert system prototype for enhancing exam quality [Text] / P. J. De Carlo, N. Rizk // International Journal of Advanced Corporate Learning (IJAC). - 2010. - Vol. 3, Issue 3. - P. 10-13. doi: 10.3991/ijac.v3i3.1356

9. Hunkalo, A. Improvement of the products quality level by competent experts [Text] / A. Hunkalo, O. Shpak // Technology audit and production reserves. - 2014. - Vol. 4, Issue 1 (18). - P. 36-38. doi: 10.15587/2312-8372.2014.26368

10. Baytsar, R. Certification of professional competence of personnel [Text] / R. Baytsar, M. Skolozdra, O. Garasym // Measuring equipment and metrology. - 2008. - Vol. 69. - P. 108-113.

11. Chin, K.-S. An evidential reasoning based approach for quality function deployment under uncertainty [Text] / K.-S. Chin, Y.-M. Wang, J.-B. Yang, K. K. Gary Poon // Expert Systems with Applications. - 2009. - Vol. 36, Issue 3. - P. 5684-5694. doi: 10.1016/j.eswa.2008.06.104

12. Lin, V. S. Accuracy and bias of experts' adjusted forecasts [Text] / V. S. Lin, P. Goodwin, H. Song // Annals of Tourism Research. -2014. - Vol. 48. - P. 156-174. doi: 10.1016/j.annals.2014.06.005

13. Hong, D. H. Fuzzy linear regression analysis for fuzzy input-output data using shape-preserving operations [Text] / D. H. Hong, S. Lee, H. Y. Do // Fuzzy Sets and Systems. - 2001. - Vol. 122, Issue 3. - P. 513-526. doi: 10.1016/s0165-0114(00)00003-8

14. Yang, M.-S. Fuzzy least-squares linear regression analysis for fuzzy input-output data [Text] / M.-S. Yang, T.-S. Lin // Fuzzy Sets and Systems. - 2002. - Vol. 126, Issue 3. - P. 389-399. doi: 10.1016/s0165-0114(01)00066-5

15. Seraya, O. V. Linear regression analysis of a small sample of fuzzy input data [Text] / O. V. Seraya, D. Demin // Journal of Automation and Information Sciences. - 2012. - Vol. 44, Issue 7. - P. 34-48. doi: 10.1615/jautomatinfscien.v44.i7.40

16. İçen, D. Error measures for fuzzy linear regression: Monte Carlo simulation approach [Text] / D. İçen, H. Demirhan // Applied Soft Computing. - 2016. - Vol. 46. - P. 104-114. doi: 10.1016/j.asoc.2016.04.013

17. Livotov, P. Estimation of new-product success by company's internal experts in the early phases of innovation process [Text] / P. Livotov // Procedia CIRP. - 2016. - Vol. 39. - P. 150-155. doi: 10.1016/j.procir.2016.01.181

18. Kuo, T.-C. Integration of environmental considerations in quality function deployment by using fuzzy logic [Text] / T.-C. Kuo, H.-H. Wu, J.-I. Shieh // Expert Systems with Applications. - 2009. - Vol. 36, Issue 3. - P. 7148-7156. doi: 10.1016/j.eswa.2008.08.029

19. Carnevalli, J. A. Review, analysis and classification of the literature on QFD [Text] / J. A. Carnevalli, P. C. Miguel // International Journal of Production Economics. - 2008. - Vol. 114, Issue 2. - P. 737-754. doi: 10.1016/j.ijpe.2008.03.006

20. Chan, L.-K. Quality function deployment: A literature review [Text] / L.-K. Chan, M.-L. Wu // European Journal of Operational Research. - 2002. -Vol. 143, Issue 3. - P. 463-497. doi: 10.1016/s0377-2217(02)00178-9

21. Bekhtieriev, V. Influence of Staff on Personality. Pedology and Upbringing [Text] / V. Bekhtieriev, M. Lange. - Moscow: Enlightenment Worker, 1998. - P. 44-97.

22. Novitsky, P. Estimation of measurement results' errors [Text] / P. Novitsky, I. Zograf. - Leningrad: EnergoAtomIzdat, 1991. - P. 10-251.

23. Venttsel, E. Probability theory [Text] / E. Venttsel. - Moscow: Gosudarstvennoe izdatel'stvo fiziko-matematicheskoy literatury, 1969. - P. 28-204.

24. Obozovski, S. Information measurement technics: methodology questions of measurement theory, study handbook [Text] / S. Obozovski. - Kyiv: ISDO, 1993. - P. 56-89.
