
Original scientific paper

UDC: 378.147.091.32(597) 378-057.175

DOI: 10.23947/2334-8496-2024-12-2-335-350

Received: March 25, 2024. Revised: July 12, 2024. Accepted: July 30, 2024.


Building a Ranking System for Lecturers Based on Student Evaluations in Teaching a Specific Course: A Case Study at a University in Vietnam

Do Duc Trung1, Branislav Dudic2,3,*, Duong Van Duc1, Nguyen Hoai Son1, Alexandra Mittelman2

1 School of Mechanical and Automotive Engineering, Hanoi University of Industry, Hanoi, Vietnam, e-mail: doductrung@haui.edu.vn, duongduc67@gmail.com, nguyenhoaison@haui.edu.vn 2 Comenius University Bratislava, Faculty of Management, Slovakia, e-mail: branislav.dudic@fm.uniba.sk, alexandra.mittelman@fm.uniba.sk 3 Faculty of Economics and Engineering Management, Novi Sad, Serbia

Abstract: In the current landscape of higher education, the quality of teaching plays a crucial role in supporting the comprehensive development of students. To ensure the effectiveness of the learning process, evaluating lecturers based on student opinions is an essential means of providing feedback and optimizing the learning experience. This paper focuses on constructing a lecturer ranking system, particularly in the context of a specific course through the evaluation process from students. Four different methods were employed to assess lecturers, including the PSI method, SRP method, RAM method, and PIV method. The evaluation results using these four methods were compared with each other and also with the traditional evaluation approach currently utilized in the educational institution. The achieved results demonstrate that the approach outlined in this paper is highly suitable for determining the rankings of lecturers when teaching individual courses.

Keywords: Lecturers ranking, MCDM, PSI, SRP, RAM, PIV

Introduction

Ranking lecturers when multiple individuals teach a course plays a crucial role in ensuring the quality of education and the professional development of the teaching staff (Ventista and Brown, 2023; Ekinci et al., 2022). Each lecturer brings their own perspective and teaching style, and evaluating the quality of their teaching can help schools and students make more informed decisions about how they approach a specific course. Ranking lecturers not only helps identify the best teachers but also highlights issues that need addressing in the teaching process (Munna and Kalam, 2021). Highly rated lecturers may be considered for roles such as department heads, sharing effective teaching methods with colleagues, or even being invited to teach additional classes to expand their positive impact. Conversely, lecturers facing challenges may require support, additional training, or, in some cases, a reevaluation of their teaching abilities in that specific course. Ranking lecturers also aids schools in managing lecturer resources more effectively (Girvan et al., 2016). If any lecturer is assessed as unsuitable or not meeting the requirements for teaching a specific course, decisions to minimize or cease their teaching assignments may be made to optimize the quality of education. This not only helps avoid issues with teaching quality but also optimizes the professionalism and motivation of the teaching staff (Oliver and Reschly, 2017). In summary, ranking lecturers when multiple individuals teach a course is not only an assessment tool but also a means of managing and developing lecturer resources, ensuring diversity and quality in the teaching process.

If lecturer evaluations rely solely on the subjective opinions of managers or focus on specific criteria such as the number of published papers or participation in projects, important limitations may arise due to subjective managerial opinions and overlooking the complexity and diversity of lecturer roles. Some universities in Vietnam have implemented lecturer evaluations based on students' evaluations of them. This approach is relatively novel, where lecturer evaluations are conducted within the university's education system, requiring each student to evaluate the lecturer who taught them if they wish to know their grades for that course. To clarify, students must evaluate their lecturers through a set of criteria provided by the management agency to access their grades for a specific course. Once all students have completed grading lecturers on each criterion, the education management system calculates the average value for each criterion for each lecturer and then computes the overall average score for the lecturer. However, calculating the overall average value for all criteria for each lecturer overlooks the importance of individual criteria. This means that not all criteria for evaluating lecturers are equally important; their significance varies depending on the characteristics of each course. For example, in practical courses, lecturers need good operational skills, while in theoretical courses, lecturers need extensive academic knowledge. To objectively evaluate lecturers based on multiple criteria, multi-criteria decision-making (MCDM) methods are considered, as they have been applied successfully in various fields (Kalyan and Pramanik, 2019; Malik et al., 2021; Ghorui et al., 2021).

*Corresponding author: branislav.dudic@fm.uniba.sk

© 2024 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

MCDM is a technique for ranking available options to facilitate the selection of the best option and avoid the worst ones (Truong et al., 2023). With more than 200 existing MCDM methods, they have been applied to rank alternatives in various fields, from industry to transportation, healthcare, construction, and more (Trung and Tung, 2022). In the field of education, the application of MCDM methods has also been found in some studies. The PROMETHEE method has been used to rank teachers in a competition for excellent teaching (Monalisa and Kusnawi, 2017). Two methods, AHP and ARAS, have been used to rank lecturers based on criteria such as work experience, academic qualifications, etc. (Akmaludin et al., 2023). Ranking university departments using the RAPS method has been implemented; this research used criteria to describe each department, such as research productivity, total number of published chapters and articles in Scopus, total number of professors, total number of associate professors, etc. (Bafail et al., 2022). The TOPSIS method was used to select websites for online teaching during the Covid-19 period (Toan et al., 2021). The TOPSIS method has also been used to evaluate students' academic performance (Sirigiri et al., 2015). Applying the VIKOR method to rank universities based on student opinions has been carried out in (Ayyildiz et al., 2023), and so on.

This article employs MCDM methods to evaluate lecturers in each course based on student evaluations. The four MCDM methods used in this study include the PSI, SRP, RAM, and PIV methods, chosen for their distinct characteristics. The PSI method requires data normalization but does not require weighting for criteria (Maniya and Bhatt, 2010; Do et al., 2023). Recent studies have applied this method in various fields, such as ranking transportation companies (Ulutaş et al., 2021), selecting materials in mechanical manufacturing (Dua, 2024), developing decision systems for awarding student scholarships (Arifin and Saputro, 2022), and choosing plastic injection molding machines (Trung et al., 2024a). In contrast, the SRP method does not require data normalization but necessitates weighting for criteria (Zakeri et al., 2023), and is recommended for cases where the number of alternatives to be ranked is greater than 5 (Zakeri et al., 2024). Recent publications have successfully applied this method in selecting regression models for surface roughness in grinding (Thinh and Dua, 2024), choosing 3D printing materials (Mian et al., 2024), and evaluating the financial health of banks (Trung et al., 2024b).

RAM is a relatively new method, first introduced in September 2023, known for its ability to balance favorable and unfavorable criteria (Sotoudeh-Anvari, 2023). This method has been applied in various fields such as selecting cultivation methods for mushrooms (Trung et al., 2024c), choosing materials for gear manufacturing, materials for screw manufacturing, and lubricants for two-stroke engines (Dua et al., 2024), selecting options in mechanical manufacturing (Trung et al., 2024d), and ranking universities (Do, 2024). PIV is known for minimizing the phenomenon of rank reversal (Mufazzal and Muzakkir, 2018). Some of the most recent studies have successfully applied this method for selecting materials for crankshaft manufacturing (Nguyen et al., 2024), choosing materials for connecting rod manufacturing (Thinh and Mai, 2023), and comparing the impact of the Covid-19 pandemic on various countries (Komasi et al., 2024). When applying the RAM and PIV methods, both tasks of data normalization and weighting for criteria must be carried out. The normalization methods for data in RAM and PIV are different, and it's important to note that the data normalization method significantly affects the ranking of alternative options that need to be prioritized (Trung, 2022; Ha, 2023). The selection of the four methods, PSI, SRP, RAM, and PIV, takes advantage of their different characteristics and outstanding merits, aiming to rank lecturers accurately and objectively.

Materials and Methods

PSI Method

The application of the PSI method to rank alternatives is carried out in eight steps as follows (Maniya and Bhatt, 2010; Do et al., 2023).

Step 1: Construct a decision matrix with m rows and n columns, where m and n represent the number of alternatives to be ranked and the number of criteria for each alternative, respectively. Let x_ij denote the value of criterion j for alternative i, with j = 1 to n, i = 1 to m. The letters B and C are used to signify the benefit and cost criteria, respectively.

Step 2: Normalize the data using formulas (1) and (2).

For benefit criteria: n_ij = x_ij / max_i(x_ij) (1)

For cost criteria: n_ij = min_i(x_ij) / x_ij (2)

Step 3: Calculate the average value of the normalized data using formula (3).

n̄_j = (1/m) · Σ_{i=1..m} n_ij (3)

Step 4: Determine the priority value from the average value using formula (4).

φ_j = Σ_{i=1..m} (n_ij − n̄_j)² (4)

Step 5: Identify the deviation in priority values using formula (5).

Ω_j = 1 − φ_j (5)

Step 6: Determine the overall priority value for criteria using formula (6).

ψ_j = Ω_j / Σ_{j=1..n} Ω_j (6)

Step 7: Calculate the priority selection index for each alternative using formula (7).

θ_i = Σ_{j=1..n} ψ_j · n_ij (7)

Step 8: Rank the alternatives based on the principle that the best alternative is the one with the highest θ_i.
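As a compact illustration of Steps 2–7, the PSI computation can be sketched in Python (a minimal sketch; the function name psi_rank and its interface are choices of this illustration, not part of the original method description):

```python
import numpy as np

def psi_rank(X, benefit):
    """Preference Selection Index (PSI), Steps 2-7.

    X       : m x n decision matrix (m alternatives, n criteria)
    benefit : length-n list of booleans, True for benefit (B) criteria
    Returns the preference selection index theta_i of each alternative.
    """
    X = np.asarray(X, dtype=float)
    # Step 2, formulas (1)-(2): linear normalization of benefit/cost columns
    N = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    # Step 3, formula (3): column-wise mean of the normalized data
    mean = N.mean(axis=0)
    # Step 4, formula (4): preference variation of each criterion
    phi = ((N - mean) ** 2).sum(axis=0)
    # Steps 5-6, formulas (5)-(6): deviation, then normalized criterion value
    psi = (1.0 - phi) / (1.0 - phi).sum()
    # Step 7, formula (7): preference selection index of each alternative
    return N @ psi
```

For Step 8 the alternatives are sorted in decreasing order of the returned index; an alternative that is best on every criterion attains the maximum possible value of 1.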

SRP Method

To rank alternatives using the SRP method, four simple sequential steps need to be applied (Zakeri et al., 2023; Zakeri et al., 2024).

Step 1: Similar to Step 1 of the PSI method.

Step 2: Internally rank the alternatives for each criterion. For criterion j, the rank of alternative i is denoted as r_ij. Note that the internal ranking of alternatives uses only natural numbers, meaning r_ij ∈ ℕ. This can be illustrated through a simple example. Suppose there are five alternatives to be ranked, namely A1, A2, A3, A4, and A5. There are three criteria describing each alternative, namely C1, C2, and C3. Among them, C1 and C2 are two benefit criteria, while C3 is a cost criterion. The illustrative data for this example is presented in Table 1.

Table 1. Data to illustrate Step 2 of the SRP method

Alternative C1 C2 C3

A1 7 3 8

A2 5 6 2

A3 7.5 4 4

A4 5.4 6 2

A5 4.2 3 2

The internal ranking of alternatives is performed as follows. For C1 (B form): Since the values of this criterion for all five alternatives are different, the rankings of the alternatives are arranged in decreasing order of the criterion values, i.e., r31 = 1, r11 = 2, r41 = 3, r21 = 4, and r51 = 5.

For C2 (B form): Since the criterion values at A2 and A4 are both equal to 6, r22 = r42 = 1, followed by r32 = 2. As the criterion values at A1 and A5 are also equal, r12 = r52 = 3.

For C3 (C form): Since the criterion values at three alternatives A2, A4, and A5 are equal, r23= r43 = r53 = 1, followed by r33 = 2 and r13 = 3.
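The tie-handling above is a dense ranking: equal values share a rank, and the next distinct value takes the next natural number. A short Python sketch (the helper name internal_rank is illustrative) reproduces the Table 1 worked example:

```python
import numpy as np

def internal_rank(col, benefit=True):
    """SRP Step 2: dense internal ranking of one criterion column.

    benefit=True ranks the largest value first (B form);
    benefit=False ranks the smallest value first (C form).
    """
    col = np.asarray(col, dtype=float)
    distinct = np.unique(col)            # sorted ascending, ties collapsed
    if benefit:
        distinct = distinct[::-1]        # B form: largest value gets rank 1
    rank_of = {v: k + 1 for k, v in enumerate(distinct)}
    return [rank_of[v] for v in col]
```

For C3 of Table 1 (C form), internal_rank([8, 2, 4, 2, 2], benefit=False) returns [3, 1, 2, 1, 1], matching r23 = r43 = r53 = 1, r33 = 2, and r13 = 3.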

Step 3: Calculate the score for each alternative using formula (8), where w_j is the weight of criterion j.

S_i = Σ_{j=1..n} r_ij · w_j (8)

Step 4: Rank the alternatives in ascending order based on their scores.
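Steps 2–4 combine into a few lines of Python (a sketch under the same dense-ranking convention; the name srp_scores is illustrative):

```python
import numpy as np

def srp_scores(X, benefit, w):
    """SRP score S_i = sum_j r_ij * w_j, formula (8); lower scores rank first.

    X       : m x n decision matrix
    benefit : length-n list of booleans (True = B form, False = C form)
    w       : length-n criteria weights
    """
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    R = np.zeros((m, n))
    for j in range(n):
        distinct = np.unique(X[:, j])            # Step 2: dense internal ranks
        if benefit[j]:
            distinct = distinct[::-1]
        rank_of = {v: k + 1 for k, v in enumerate(distinct)}
        R[:, j] = [rank_of[v] for v in X[:, j]]
    return R @ np.asarray(w, dtype=float)        # Step 3, formula (8)
```

With the Table 1 data and equal weights of 1/3, A3 and A4 share the lowest score of 5/3 and therefore rank first.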

RAM Method

Ranking alternatives using the RAM method is performed in six steps as follows (Sotoudeh-Anvari, 2023).

Step 1: Similar to Step 1 of the PSI method.

Step 2: Normalize the data using formula (9).

r_ij = x_ij / Σ_{i=1..m} x_ij (9)

Step 3: Calculate the normalized values considering the weights of the criteria according to (10).

y_ij = w_j · r_ij (10)

Step 4: Calculate the sums of the weighted normalized scores for the benefit and cost criteria using (11) and (12).

S_{+i} = Σ_{j∈B} y_ij (11)

S_{−i} = Σ_{j∈C} y_ij (12)

Step 5: Calculate the score for each alternative using formula (13).

RI_i = (2 + S_{+i})^(1/(2 + S_{−i})) (13)

Step 6: Rank the alternatives in descending order based on their scores.
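A minimal Python sketch of these steps (the root-based score in the last line follows the method of Sotoudeh-Anvari (2023); function and variable names are illustrative):

```python
import numpy as np

def ram_scores(X, benefit, w):
    """RAM score RI_i; higher RI ranks first.

    X       : m x n decision matrix
    benefit : length-n list of booleans (True = benefit, False = cost)
    w       : length-n criteria weights
    """
    X = np.asarray(X, dtype=float)
    b = np.asarray(benefit)
    r = X / X.sum(axis=0)                        # (9)  sum-based normalization
    y = np.asarray(w, dtype=float) * r           # (10) weighted normalized values
    s_plus = y[:, b].sum(axis=1)                 # (11) sum over benefit criteria
    s_minus = y[:, ~b].sum(axis=1)               # (12) sum over cost criteria
    return (2.0 + s_plus) ** (1.0 / (2.0 + s_minus))   # (13) root-based score
```

With only benefit criteria, an alternative that is best on every criterion obtains the highest RI.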

PIV Method

The PIV method employs six steps to rank alternatives (Mufazzal and Muzakkir, 2018).

Step 1: Similar to Step 1 of the PSI method.

Step 2: Calculate the normalized values using formula (14).

n_ij = x_ij / √(Σ_{i=1..m} x_ij²) (14)

Step 3: Calculate the normalized values considering the weights of the criteria according to formula (15).

v_ij = w_j · n_ij (15)

Step 4: Evaluate the weighted proximity index u_ij using formulas (16) and (17).

For benefit criteria: u_ij = max_i(v_ij) − v_ij (16)

For cost criteria: u_ij = v_ij − min_i(v_ij) (17)

Step 5: Determine the overall proximity value for each alternative using formula (18).

d_i = Σ_{j=1..n} u_ij (18)

Step 6: Rank the alternatives based on the principle that the best alternative is the one with the smallest d_i.
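These six steps admit a similarly short sketch (a minimal illustration of the proximity-to-best computation of Mufazzal and Muzakkir (2018); the helper name piv_scores is not from the source):

```python
import numpy as np

def piv_scores(X, benefit, w):
    """PIV overall proximity d_i; lower d_i ranks first.

    X       : m x n decision matrix
    benefit : length-n list of booleans (True = benefit, False = cost)
    w       : length-n criteria weights
    """
    X = np.asarray(X, dtype=float)
    n = X / np.sqrt((X ** 2).sum(axis=0))        # (14) vector normalization
    v = np.asarray(w, dtype=float) * n           # (15) weighted values
    # (16)-(17): proximity of each value to the best value in its column
    u = np.where(benefit, v.max(axis=0) - v, v - v.min(axis=0))
    return u.sum(axis=1)                         # (18) overall proximity
```

An alternative that is best on every criterion obtains d_i = 0 and is therefore ranked first.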

Based on the implementation steps of the four methods PSI, SRP, RAM, and PIV as described above, the block diagram illustrating the application of these four methods for ranking options is presented in Figure 1. Notably, when applying the SRP, RAM, and PIV methods, an additional task of calculating weights for the criteria is required, whereas the PSI method does not require the calculation of weights for the criteria.

Figure 1. Block diagram for ranking options (from the decision matrix, the PSI branch applies Eqs. (1)–(7); the SRP branch internally ranks the alternatives and applies Eq. (8); the RAM and PIV branches apply their respective equations; the weighting method feeds the SRP, RAM, and PIV branches; each branch ends in a ranking of the alternatives)

Results

The selected course for this study is "Machine Manufacturing Technology Project" because it provides a flexible combination of direct classroom teaching and self-study at home. This allows students to directly interact with the fundamental knowledge conveyed by the lecturer, while also providing opportunities for in-depth research, application, and experimentation with the acquired knowledge. The course is appealing as students can apply learned knowledge to practical situations, showcasing their creativity and problem-solving skills. Additionally, lecturer guidance during project execution is crucial, helping students gain a deeper understanding of the machine manufacturing process and enhancing teamwork skills. Furthermore, the ability to ask questions through applications is a significant advantage, enabling quick access to information and problem-solving.

To view the grades of a course on the education system, each student needs to evaluate the lecturer who taught them based on seventeen established criteria. Evaluation scores for each criterion range from 0 to 5, where a higher score indicates better fulfillment of that criterion. This means that all seventeen criteria fall into the B category.

C1: The lecturer consistently prepares well for each class.

This criterion emphasizes the importance of preparation for the teaching process. When lecturers invest time and effort in thorough preparation for classes, they can create a quality learning environment, enhancing students' understanding and interest in the content. Preparation also helps lecturers be adaptable to unexpected situations and improves effective communication. Students perceive this preparation through clear and accurate lectures, playing a vital role in building trust and respect for the lecturer.

C2: Lessons are structured and organized systematically.

The importance of structuring and organizing lessons lies in helping students understand and absorb content in an organized and coherent manner. When lectures are systematically constructed, students can easily follow and organize information. Logical connections between teaching sections help students gain a deeper understanding and connect knowledge, forming a solid foundation. Lesson structure also helps students predict lesson content, providing reassurance and confidence while promoting focus and learning efficiency.

C3: The lecturer teaches with passion.

Passion in teaching is not only a powerful motivator but also creates a positive learning environment. When lecturers convey knowledge with enthusiasm, they inspire students and encourage creative thinking. The excitement of the lecturer can spread and stimulate curiosity and learning enthusiasm in students. Passion also helps lecturers overcome challenges in the teaching process, creating a positive environment that fosters students' comprehensive development.

C4: The lecturer motivates students to study this course.

This highlights the importance of lecturers playing a significant role in motivating students. When lecturers stimulate students' interest and desire to study, they create a positive environment, enhancing self-reliance and responsibility in the learning process. Motivation can arise from solving real-world problems, applying knowledge to practical situations, or even the recognition of the value of the course for personal and career development.

C5: Students are timely supported with the syllabus and official materials of the course.

The importance of supporting students in accessing the syllabus and learning materials is crucial to ensure they have the necessary information. Timely support helps students overcome difficulties and enhances their understanding and application of knowledge. Providing official materials ensures consistency in information transmission, making it easier for students to follow and review.

C6: Course materials are easily searchable and regularly updated.

The ease of searching and updating course materials plays a significant role in optimizing students' study time. When materials are well-organized and easy to search, students can quickly access the information they need, enhancing efficiency in the review and grasp of knowledge. Regular updates ensure that materials reflect the latest progress in the course field, helping students maintain continuity and apply knowledge to real-life situations.

C7: Lecturers provide sufficient, clear overall content information and distribute the course schedule before teaching.

Providing sufficient and clear information before class emphasizes the importance of preparation and transparency in the teaching process. Students can better prepare when they know the content and schedule of the course, allowing them to manage their study time more effectively. This transparency creates a collaborative learning environment, where lecturers and students can work together effectively.

C8: Lecturers provide sufficient, clear course outcomes before teaching.

Course outcomes are an essential indicator of the knowledge and skills students will acquire after completing the course. Clear communication about course outcomes helps students understand the course's objectives and shape an appropriate study plan. Simultaneously, lecturers can align expectations and evaluate learning outcomes fairly and transparently.

C9: Lecturers start class on time, ensuring the time and teaching volume follow the course syllabus.

Ensuring timeliness and the teaching volume not only creates a disciplined learning environment but also helps students manage their study time effectively. When lecturers start classes on time and adhere to the course syllabus, students can rely on the schedule to prepare and actively participate in classes. Accuracy and consistency in teaching contribute to the stability and quality of the learning process.

C10: Lecturers guide you on study methods when starting the course.

The importance of guiding students on study methods at the beginning of a course is to help them create a study plan that suits their learning style. This not only enhances students' autonomy in the learning process but also lays the foundation for developing self-learning skills, a crucial factor in their overall learning journey.

C11: The lecturer's teaching method is understandable and effective.

The effectiveness and clarity of teaching methods are decisive factors in students' understanding and absorption. When lecturers apply teaching methods appropriate to the target audience, information delivery becomes smooth and vibrant. This method helps students form a comprehensive view of the course and enhances the ability to apply knowledge to real-life situations, creating a tight connection between theory and practice.

C12: Lecturers create opportunities for you to actively participate in the learning process.

The importance of lecturers creating opportunities for students to actively participate lies in encouraging interaction and positivity in the learning process. When students have the opportunity to discuss, ask questions, and contribute opinions, they not only enhance understanding but also develop communication and critical thinking skills. Active participation helps students become confident and enjoy the learning process, creating a motivating and supportive learning environment.

C13: Student academic performance is evaluated through various forms and aligns with the course outcomes.

The diversity in evaluating academic performance and alignment with course outcomes is crucial to ensuring accuracy and fairness in the evaluation process. When lecturers use various assessment methods such as exams, projects, presentations, or group discussions, students have the opportunity to demonstrate their skills and knowledge from different perspectives. This not only strengthens the learning process but also creates conditions for the multidimensional development of subject competencies.

C14: Feedback from tests and evaluations helps you improve academic performance.

Feedback is an essential part of the learning process, helping students understand more about their level of success and areas for improvement. When lecturers provide detailed and constructive feedback, students can self-assess their abilities and performance, thereby seeking ways to improve. Accurate and constructive feedback not only supports the current learning process but also serves as a foundation for continuous development and lifelong learning.

C15: The lecturer's teaching method stimulates students' critical thinking and problem-solving abilities.

The importance of stimulating students' critical thinking and problem-solving abilities lies in helping them develop crucial skills for the real world. When lecturers create lectures and activities that encourage logical thinking, information analysis, and problem-solving, students not only learn a subject but also develop essential skills for their careers and daily lives.

C16: I learned a lot about the course from the lecturer.

This statement reflects the success of conveying knowledge and creating a positive learning experience. When students perceive the learning and personal development through the lecturer, they tend to positively evaluate the learning process and consider the course as an essential part of their educational journey.

C17: General Evaluation - The lecturer is highly suitable for teaching this course.

The overall evaluation is the final and comprehensive result of the lecturer's success in teaching this course.

Data extraction from the university's training management system on the scores of the seventeen criteria for eight lecturers teaching the Machine Manufacturing Technology Project course is presented in Table 2.

Table 2. Synthesized comments from students for each lecturer (source: author's compilation)

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 4.67 4.7 4.7 4.7 4.7 4.73 4.67 4.7 4.73 4.73 4.67 4.67 4.7 4.7 4.7 4.67 4.67

L2 4.58 4.54 4.54 4.46 4.51 4.55 4.6 4.54 4.57 4.52 4.55 4.6 4.55 4.48 4.57 4.6 4.65

L3 4.66 4.63 4.68 4.63 4.76 4.71 4.68 4.74 4.76 4.68 4.74 4.74 4.74 4.74 4.76 4.76 4.76

L4 4.51 4.49 4.51 4.49 4.54 4.52 4.52 4.54 4.55 4.52 4.58 4.57 4.55 4.57 4.55 4.58 4.57

L5 4.82 4.77 4.88 4.85 4.86 4.92 5 4.85 5 5 5 5 5 5 5 4.98 4.97

L6 5 4.5 5 4.5 5 5 4.5 5 4.5 5 4.5 4.5 5 4.5 4.5 5 5

L7 4.76 4.68 4.76 4.68 4.72 4.76 4.8 4.72 4.8 4.76 4.76 4.8 4.72 4.64 4.76 4.72 4.76

L8 4.67 4.63 4.63 4.59 4.52 4.52 4.53 4.53 4.55 4.56 4.64 4.66 4.66 4.66 4.66 4.73 4.73

Currently, the university's management level relies on the data of the seventeen criteria in Table 2 to calculate the average scores for each lecturer and then rank the lecturers in Table 3. This result will be used for comparison with the lecturer rankings using the MCDM methods that this research is conducting.
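The traditional score is simply the unweighted mean of the seventeen criterion scores in Table 2; as a quick numerical check, two rows of Table 2 reproduce the corresponding entries of Table 3 (a Python sketch, not part of the university's system):

```python
import numpy as np

# Rows L1 and L5 of Table 2 (seventeen criterion scores each), as an illustration.
scores = {
    "L1": [4.67, 4.70, 4.70, 4.70, 4.70, 4.73, 4.67, 4.70, 4.73, 4.73,
           4.67, 4.67, 4.70, 4.70, 4.70, 4.67, 4.67],
    "L5": [4.82, 4.77, 4.88, 4.85, 4.86, 4.92, 5.00, 4.85, 5.00, 5.00,
           5.00, 5.00, 5.00, 5.00, 5.00, 4.98, 4.97],
}
# Traditional score: unweighted mean over all seventeen criteria.
means = {name: round(float(np.mean(vals)), 2) for name, vals in scores.items()}
# Matches Table 3: L1 -> 4.69 (rank 5), L5 -> 4.94 (rank 1).
```

Lecturers are then ranked by this mean, which treats all criteria as equally important.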

Table 3. Lecturer rankings using the traditional method

Lecturers Score Rank

L1 4.69 5

L2 4.55 7

L3 4.72 4

L4 4.54 8

L5 4.94 1

L6 4.76 2

L7 4.74 3

L8 4.62 6

Applying the PSI method

Normalized values have been calculated using formulas (1) and (2), resulting in Table 4.

Table 4. Normalized values in the PSI method

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 0.9340 0.9853 0.9400 0.9691 0.9400 0.9460 0.9340 0.9400 0.9460 0.9460 0.9340 0.9340 0.9400 0.9400 0.9400 0.9340 0.9340

L2 0.9160 0.9518 0.9080 0.9196 0.9020 0.9100 0.9200 0.9080 0.9140 0.9040 0.9100 0.9200 0.9100 0.8960 0.9140 0.9200 0.9300

L3 0.9320 0.9706 0.9360 0.9546 0.9520 0.9420 0.9360 0.9480 0.9520 0.9360 0.9480 0.9480 0.9480 0.9480 0.9520 0.9520 0.9520

L4 0.9020 0.9413 0.9020 0.9258 0.9080 0.9040 0.9040 0.9080 0.9100 0.9040 0.9160 0.9140 0.9100 0.9140 0.9100 0.9160 0.9140

L5 0.9640 1.0000 0.9760 1.0000 0.9720 0.9840 1.0000 0.9700 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 0.9960 0.9940

L6 1.0000 0.9434 1.0000 0.9278 1.0000 1.0000 0.9000 1.0000 0.9000 1.0000 0.9000 0.9000 1.0000 0.9000 0.9000 1.0000 1.0000

L7 0.9520 0.9811 0.9520 0.9649 0.9440 0.9520 0.9600 0.9440 0.9600 0.9520 0.9520 0.9600 0.9440 0.9280 0.9520 0.9440 0.9520

L8 0.9340 0.9706 0.9260 0.9464 0.9040 0.9040 0.9060 0.9060 0.9100 0.9120 0.9280 0.9320 0.9320 0.9320 0.9320 0.9460 0.9460

Formula (3) was applied to calculate the average values of the normalized data (n̄_j); the priority values from the average values (φ_j) were calculated using formula (4); the deviations in priority values (Ω_j) were calculated using formula (5); and the overall priority values for the criteria (ψ_j) were calculated using formula (6). All calculated values are summarized in Table 5.

Table 5. Some parameters in PSI

Par. C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

n̄_j 0.9418 0.9680 0.9425 0.9510 0.9403 0.9428 0.9325 0.9405 0.9365 0.9443 0.9360 0.9385 0.9480 0.9323 0.9375 0.9510 0.9528

φ_j 0.0065 0.0031 0.0077 0.0051 0.0085 0.0092 0.0081 0.0078 0.0082 0.0106 0.0069 0.0068 0.0086 0.0076 0.0071 0.0070 0.0064

Ω_j 0.9935 0.9969 0.9923 0.9949 0.9915 0.9908 0.9919 0.9922 0.9918 0.9894 0.9931 0.9932 0.9914 0.9924 0.9929 0.9930 0.9936

ψ_j 0.0589 0.0591 0.0588 0.0590 0.0588 0.0587 0.0588 0.0588 0.0588 0.0586 0.0588 0.0589 0.0587 0.0588 0.0588 0.0588 0.0589

The preference selection index (θ_i) for each lecturer has been calculated using formula (7), resulting in Table 6. The ranking of each lecturer has also been arranged based on their θ_i values and has been placed in the last column of this table.

Table 6. Preference selection index (θ_i) and rankings of lecturers

Lecturers θ_i Rank

L1 0.9433 5

L2 0.9149 7

L3 0.9475 4

L4 0.9120 8

L5 0.9915 1

L6 0.9571 2

L7 0.9526 3

L8 0.9275 6

Applying the SRP method

The internal ranking of lecturers has been carried out according to step 2, resulting in Table 7.

Table 7. Internal rankings of lecturers using the SRP method

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 4 2 4 2 5 4 4 5 4 3 4 4 4 3 3 6 5


L2 6 5 7 8 8 6 5 6 5 6 7 6 6 8 5 7 6

L3 5 4 5 4 3 5 3 3 3 4 3 3 2 2 2 3 3

L4 7 7 8 7 6 7 7 6 6 6 6 7 6 6 6 8 7

L5 2 1 2 1 2 2 1 2 1 1 1 1 1 1 1 2 2

L6 1 6 1 6 1 1 8 1 7 1 8 8 1 7 7 1 1

L7 3 3 3 3 4 3 2 4 2 2 2 2 3 5 2 5 3

L8 4 4 6 5 7 7 6 7 6 5 5 5 5 4 4 4 4

Individual scores (S_i) for each lecturer have been calculated using (8), where the weights of the criteria are chosen to be equal, meaning each criterion has a weight of 1/17. The scores and rankings of lecturers have been consolidated in Table 8.

Table 8. Scores and rankings of lecturers

Lecturers S_i Rank

L1 3.8824 5

L2 6.2941 7

L3 3.3529 3

L4 6.6471 8

L5 1.4118 1

L6 3.8824 4

L7 3.0000 2

L8 5.1765 6

Applying the RAM method

Normalized values have been calculated using (9), resulting in Table 9.

Table 9. Normalized values in the RAM method

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 0.1240 0.1272 0.1247 0.1274 0.1250 0.1254 0.1252 0.1249 0.1263 0.1252 0.1247 0.1244 0.1239 0.1260 0.1253 0.1228 0.1225

L2 0.1216 0.1229 0.1204 0.1209 0.1199 0.1207 0.1233 0.1207 0.1220 0.1197 0.1215 0.1225 0.1200 0.1201 0.1219 0.1209 0.1220

L3 0.1237 0.1253 0.1241 0.1255 0.1266 0.1249 0.1255 0.1260 0.1271 0.1239 0.1266 0.1263 0.1250 0.1271 0.1269 0.1251 0.1249

L4 0.1197 0.1215 0.1196 0.1217 0.1207 0.1199 0.1212 0.1207 0.1215 0.1197 0.1223 0.1217 0.1200 0.1226 0.1213 0.1204 0.1199

L5 0.1280 0.1291 0.1294 0.1314 0.1292 0.1305 0.1340 0.1289 0.1335 0.1324 0.1335 0.1332 0.1319 0.1341 0.1333 0.1309 0.1304

L6 0.1327 0.1218 0.1326 0.1220 0.1329 0.1326 0.1206 0.1329 0.1201 0.1324 0.1202 0.1199 0.1319 0.1207 0.1200 0.1314 0.1312

L7 0.1264 0.1267 0.1263 0.1268 0.1255 0.1262 0.1287 0.1255 0.1281 0.1260 0.1271 0.1279 0.1245 0.1244 0.1269 0.1241 0.1249

L8 0.1240 0.1253 0.1228 0.1244 0.1202 0.1199 0.1214 0.1204 0.1215 0.1207 0.1239 0.1241 0.1229 0.1250 0.1243 0.1243 0.1241

Normalized values, considering the weights of criteria, have been calculated using (10). Here, the weights of each criterion have also been chosen to be equal (1/17), resulting in Table 10.

Table 10. Normalized values considering the weights of criteria (in the RAM method)

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 0.0073 0.0075 0.0073 0.0075 0.0074 0.0074 0.0074 0.0073 0.0074 0.0074 0.0073 0.0073 0.0073 0.0074 0.0074 0.0072 0.0072

L2 0.0072 0.0072 0.0071 0.0071 0.0071 0.0071 0.0073 0.0071 0.0072 0.0070 0.0071 0.0072 0.0071 0.0071 0.0072 0.0071 0.0072

L3 0.0073 0.0074 0.0073 0.0074 0.0074 0.0073 0.0074 0.0074 0.0075 0.0073 0.0074 0.0074 0.0074 0.0075 0.0075 0.0074 0.0073

L4 0.0070 0.0071 0.0070 0.0072 0.0071 0.0071 0.0071 0.0071 0.0071 0.0070 0.0072 0.0072 0.0071 0.0072 0.0071 0.0071 0.0071

L5 0.0075 0.0076 0.0076 0.0077 0.0076 0.0077 0.0079 0.0076 0.0079 0.0078 0.0079 0.0078 0.0078 0.0079 0.0078 0.0077 0.0077

L6 0.0078 0.0072 0.0078 0.0072 0.0078 0.0078 0.0071 0.0078 0.0071 0.0078 0.0071 0.0071 0.0078 0.0071 0.0071 0.0077 0.0077

L7 0.0074 0.0075 0.0074 0.0075 0.0074 0.0074 0.0076 0.0074 0.0075 0.0074 0.0075 0.0075 0.0073 0.0073 0.0075 0.0073 0.0073

L8 0.0073 0.0074 0.0072 0.0073 0.0071 0.0071 0.0071 0.0071 0.0071 0.0071 0.0073 0.0073 0.0072 0.0074 0.0073 0.0073 0.0073

The total weighted normalized scores S+i and S-i have been calculated using (11) and (12), and the overall score of each lecturer has been obtained according to (13). Table 11 synthesizes the calculated values and the rankings of lecturers based on their scores (RIi).

Table 11. Some parameters in the RAM method and rankings of lecturers

Lecturers    S+i        S-i   RIi      Rank
L1           0.125002   0     1.4577   5
L2           0.121237   0     1.4564   7
L3           0.125559   0     1.4579   4
L4           0.120848   0     1.4563   8
L5           0.131401   0     1.4599   1
L6           0.126821   0     1.4584   2
L7           0.126235   0     1.4582   3
L8           0.122897   0     1.4570   6
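The RIi values in Table 11 follow from the S+i totals via the RAM scoring formula RIi = (2 + S+i)^(1/(2 + S-i)); because every criterion here is beneficial, S-i = 0 and the score reduces to a square root. A quick check in Python:

```python
# S_{+i} totals from Table 11; all 17 criteria are beneficial, so S_{-i} = 0
s_plus = [0.125002, 0.121237, 0.125559, 0.120848,
          0.131401, 0.126821, 0.126235, 0.122897]
s_minus = [0.0] * 8

# RAM score (Sotoudeh-Anvari, 2023): RI_i = (2 + S_{+i}) ** (1 / (2 + S_{-i}))
ri = [(2 + sp) ** (1 / (2 + sm)) for sp, sm in zip(s_plus, s_minus)]

# Higher RI_i is better; L5 (index 4) comes out on top
best = max(range(8), key=lambda i: ri[i])
```

Rounding `ri` to four decimals reproduces the RIi column of Table 11 exactly, which confirms the reconstruction.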

Applying the PIV method

Standardized values have been calculated using (14), consolidated in Table 12.

Table 12. Standardized values in the PIV method

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 0.3505 0.3598 0.3524 0.3601 0.3532 0.3545 0.3539 0.3532 0.3569 0.3539 0.3526 0.3517 0.3504 0.3563 0.3543 0.3471 0.3464

L2 0.3437 0.3475 0.3404 0.3417 0.3390 0.3411 0.3486 0.3411 0.3449 0.3382 0.3436 0.3464 0.3392 0.3396 0.3445 0.3419 0.3450

L3 0.3497 0.3544 0.3509 0.3548 0.3578 0.3530 0.3547 0.3562 0.3592 0.3502 0.3579 0.3570 0.3533 0.3593 0.3588 0.3538 0.3531

L4 0.3385 0.3437 0.3382 0.3440 0.3412 0.3388 0.3425 0.3411 0.3433 0.3382 0.3458 0.3442 0.3392 0.3464 0.3430 0.3404 0.3390

L5 0.3617 0.3652 0.3659 0.3716 0.3653 0.3688 0.3789 0.3644 0.3773 0.3741 0.3775 0.3765 0.3727 0.3790 0.3769 0.3701 0.3687

L6 0.3753 0.3445 0.3749 0.3448 0.3758 0.3748 0.3410 0.3757 0.3396 0.3741 0.3398 0.3389 0.3727 0.3411 0.3392 0.3716 0.3709

L7 0.3572 0.3583 0.3569 0.3586 0.3547 0.3568 0.3638 0.3547 0.3622 0.3562 0.3594 0.3615 0.3519 0.3517 0.3588 0.3508 0.3531

L8 0.3505 0.3544 0.3472 0.3517 0.3397 0.3388 0.3433 0.3404 0.3433 0.3412 0.3504 0.3509 0.3474 0.3533 0.3513 0.3515 0.3509

Standardized values, considering the weights of criteria, have been calculated using (15), resulting in Table 13. Here, the weights of each criterion have also been chosen to be equal (1/17).

Table 13. Standardized values considering the weights of criteria (in the PIV method)

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 0.0206 0.0212 0.0207 0.0212 0.0208 0.0209 0.0208 0.0208 0.0210 0.0208 0.0207 0.0207 0.0206 0.0210 0.0208 0.0204 0.0204

L2 0.0202 0.0204 0.0200 0.0201 0.0199 0.0201 0.0205 0.0201 0.0203 0.0199 0.0202 0.0204 0.0200 0.0200 0.0203 0.0201 0.0203

L3 0.0206 0.0208 0.0206 0.0209 0.0210 0.0208 0.0209 0.0210 0.0211 0.0206 0.0211 0.0210 0.0208 0.0211 0.0211 0.0208 0.0208

L4 0.0199 0.0202 0.0199 0.0202 0.0201 0.0199 0.0201 0.0201 0.0202 0.0199 0.0203 0.0202 0.0200 0.0204 0.0202 0.0200 0.0199

L5 0.0213 0.0215 0.0215 0.0219 0.0215 0.0217 0.0223 0.0214 0.0222 0.0220 0.0222 0.0221 0.0219 0.0223 0.0222 0.0218 0.0217

L6 0.0221 0.0203 0.0221 0.0203 0.0221 0.0220 0.0201 0.0221 0.0200 0.0220 0.0200 0.0199 0.0219 0.0201 0.0200 0.0219 0.0218

L7 0.0210 0.0211 0.0210 0.0211 0.0209 0.0210 0.0214 0.0209 0.0213 0.0210 0.0211 0.0213 0.0207 0.0207 0.0211 0.0206 0.0208

L8 0.0206 0.0208 0.0204 0.0207 0.0200 0.0199 0.0202 0.0200 0.0202 0.0201 0.0206 0.0206 0.0204 0.0208 0.0207 0.0207 0.0206

The weight proximity index has been calculated using (16) and (17), resulting in Table 14.

Table 14. Weight proximity index values in the PIV method

Lecturers C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

L1 0.0015 0.0003 0.0013 0.0007 0.0013 0.0012 0.0015 0.0013 0.0012 0.0012 0.0015 0.0015 0.0013 0.0013 0.0013 0.0014 0.0014

L2 0.0019 0.0010 0.0020 0.0018 0.0022 0.0020 0.0018 0.0020 0.0019 0.0021 0.0020 0.0018 0.0020 0.0023 0.0019 0.0017 0.0015

L3 0.0015 0.0006 0.0014 0.0010 0.0011 0.0013 0.0014 0.0011 0.0011 0.0014 0.0012 0.0012 0.0011 0.0012 0.0011 0.0010 0.0010

L4 0.0022 0.0013 0.0022 0.0016 0.0020 0.0021 0.0021 0.0020 0.0020 0.0021 0.0019 0.0019 0.0020 0.0019 0.0020 0.0018 0.0019

L5 0.0008 0.0000 0.0005 0.0000 0.0006 0.0004 0.0000 0.0007 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001 0.0001

L6 0.0000 0.0012 0.0000 0.0016 0.0000 0.0000 0.0022 0.0000 0.0022 0.0000 0.0022 0.0022 0.0000 0.0022 0.0022 0.0000 0.0000

L7 0.0011 0.0004 0.0011 0.0008 0.0012 0.0011 0.0009 0.0012 0.0009 0.0011 0.0011 0.0009 0.0012 0.0016 0.0011 0.0012 0.0010

L8 0.0015 0.0006 0.0016 0.0012 0.0021 0.0021 0.0021 0.0021 0.0020 0.0019 0.0016 0.0015 0.0015 0.0015 0.0015 0.0012 0.0012

The overall nearness value (di) for each lecturer has been calculated according to (18), resulting in Table 15. The ranking of each lecturer has also been determined based on their di values and has been consolidated in the last column of this table.

Table 15. The di values and rankings of each lecturer

Lecturers    di       Rank
L1           0.0213   5
L2           0.0319   7
L3           0.0197   4
L4           0.0330   8
L5           0.0032   1
L6           0.0161   2
L7           0.0178   3
L8           0.0272   6
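The PIV pipeline of equations (14)-(18) is compact enough to sketch end to end. The NumPy function below handles beneficial criteria only, and the 4×3 matrix is illustrative, not the study's data:

```python
import numpy as np

def piv_rank(X, w):
    """Proximity Indexed Value (Mufazzal & Muzakkir, 2018) for
    beneficial criteria: lower overall proximity d_i means a better alternative."""
    R = X / np.sqrt((X ** 2).sum(axis=0))   # vector normalization, eq. (14)
    V = R * w                               # weighted standardized values, eq. (15)
    U = V.max(axis=0) - V                   # weighted proximity index, eqs. (16)-(17)
    d = U.sum(axis=1)                       # overall nearness value, eq. (18)
    order = d.argsort()                     # ascending: smallest d ranks first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(d) + 1)
    return d, ranks

# Illustrative 4-alternative x 3-criterion matrix with equal weights
X = np.array([[4.2, 4.5, 4.1],
              [3.8, 3.9, 4.0],
              [4.6, 4.4, 4.5],
              [4.0, 4.1, 3.9]])
w = np.full(3, 1 / 3)
d, ranks = piv_rank(X, w)
```

Measuring each alternative's distance to the per-criterion best value is what makes PIV resistant to rank reversal, the motivation given in the original PIV paper.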

Therefore, the ranking of lecturers using the PSI, SRP, RAM, and PIV methods has been completed. The results of the rankings using the traditional method that the university is currently using are also used for comparison with the rankings obtained using these four methods. Figure 2 shows the lecturer rankings using different methods when the weights of the criteria are equal.

Figure 2. Rankings of lecturers using different methods when the weights of the criteria are equal

We observe that the rankings of lecturers produced by the PSI, RAM, and PIV methods are identical, and that they also match the rankings obtained through the traditional method used at the university. The SRP method yields slightly different rankings from the other methods; however, the lecturers rated best and worst by SRP always coincide with those identified by the other ranking methods. The difference arises because SRP works only with integer internal ranks of the lecturers, discarding the magnitude information that the other methods retain by normalizing the data into real-valued scores.
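The degree of agreement between SRP and the other methods can be quantified. As a quick check (not part of the original analysis), Spearman's rank correlation between the PSI ranks and the SRP ranks reported in the result tables above is:

```python
# Final ranks for L1..L8 from the result tables (equal criterion weights)
psi_ranks = [5, 7, 4, 8, 1, 2, 3, 6]   # PSI (identical to RAM, PIV, traditional)
srp_ranks = [5, 7, 3, 8, 1, 4, 2, 6]   # SRP

n = len(psi_ranks)
d2 = sum((a - b) ** 2 for a, b in zip(psi_ranks, srp_ranks))
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))   # Spearman's rank correlation coefficient
print(round(rho, 4))   # → 0.9286
```

A coefficient of about 0.93 confirms that the SRP ranking is only a minor perturbation of the consensus ranking, with the top and bottom lecturers unchanged.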


The results and observations above apply when all selected criteria have equal weights (1/17 each). To examine whether the rankings hold when the criterion weights are computed rather than assumed equal, the lecturers must also be ranked with unequal weights. Two weighting methods, Entropy and MEREC, have been shown to determine criterion weights with high accuracy (Trung and Thinh, 2021) and have been widely used in recent studies (Hoang, 2023; Trung, 2021a; Trung, 2021b; Le et al., 2022); they were therefore selected for application in this research. The weights of the seventeen criteria, as determined by the Entropy and MEREC methods, can be found in Table 16.

Table 16. Weights of criteria

Weight method C1 C2 C3 C4 C5 C6 C7 C8 C9 C10 C11 C12 C13 C14 C15 C16 C17

Entropy MEREC 0.0588 0.0591 0.0648 0.0422 0.0588 0.0657 0.0591 0.0503 0.0588 0.0619 0.0588 0.0589 0.0588 0.0589 0.0587 0.0589 0.0624 0.0527 0.0556 0.0593 0.0645 0.0587 0.0588 0.0628 0.0587 0.0608 0.0589 0.0592 0.0588 0.0611 0.0587 0.0559 0.0586 0.0622
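The Entropy weights in Table 16 reflect the dispersion of the ratings: criteria on which the lecturers differ more receive larger weights (MEREC instead measures how much removing each criterion changes the overall performance). A generic sketch of the Entropy calculation, using a small illustrative matrix rather than the study's 17-criterion data:

```python
import numpy as np

def entropy_weights(X):
    """Entropy objective weighting: criteria whose values vary more
    across the alternatives receive larger weights."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                          # share of each alternative per criterion
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy of each criterion, in [0, 1]
    d = 1.0 - E                                    # degree of divergence
    return d / d.sum()                             # normalized weights

# Illustrative ratings (not the study's data): C1 is uniform, C2 varies
X = np.array([[4.0, 1.0],
              [4.0, 2.0],
              [4.0, 4.0]])
w = entropy_weights(X)   # almost all weight goes to the varying criterion C2
```

Because the student ratings in this study are fairly uniform across lecturers, the resulting Entropy weights stay close to the equal value 1/17 ≈ 0.0588, as Table 16 shows.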

Ranking the lecturers with the SRP, RAM, and PIV methods when the criterion weights are calculated by the Entropy and MEREC methods proceeds exactly as in the equal-weight case above. Figures 3 and 4 respectively present the rankings of lecturers when the weights of criteria are calculated using the Entropy and MEREC methods. It is worth reiterating that the PSI method derives its weights internally from the data, so the rankings it produces are independent of externally assigned criterion weights.

Figure 3. Rankings of lecturers using different methods when the weights of criteria are calculated using the Entropy method

Figure 4. Rankings of lecturers using different methods when the weights of criteria are calculated using the MEREC method

From Figures 3 and 4, we again see that the rankings of lecturers are identical under the PSI, RAM, and PIV methods and the university's traditional method, regardless of the criterion weights. Furthermore, the ranking of each lecturer remains unchanged when the weights of criteria change (compare Figures 2, 3, and 4). The minor discrepancies between the SRP rankings and those of the other methods, already observed in Figure 2, also persist. However, the lecturer rated Rank 1 and the lecturer rated Rank 8 remain identical across all methods. These findings indicate that the four methods PSI, SRP, RAM, and PIV are all appropriate for identifying the best lecturer for a course; the lecturer deemed least suitable for teaching the course is likewise identified consistently by these methods.

Discussions

The performance evaluation of university lecturers is a crucial issue in today's higher education system. Our study has demonstrated that methods such as PSI, SRP, RAM, and PIV, alongside traditional ranking methods, lead to comparable assessment outcomes for lecturers teaching mechanical engineering design projects. This consistency not only affirms the effectiveness of these methods but also underscores the reliability of the evaluation process.

We have identified that lecturers ranked as most and least suitable in teaching this course maintain their positions when different evaluation methods are applied throughout the academic year. This indicates the consistency and stability of the methods in assessing lecturer performance. The ability to expand the application of these evaluation methods from mechanical engineering design projects to other teaching fields not only enhances flexibility but also ensures the feasibility and effectiveness of the evaluation process.

In summary, our research not only addresses a specific issue but also opens avenues to explore and apply more effective teacher evaluation methods across different teaching domains, contributing to continuous improvement in the quality of higher education.

Conclusions

- The PSI, RAM, and PIV methods and the traditional ranking approach all lead to the same lecturer rankings, demonstrating consistency in evaluating lecturer performance.
- The lecturers identified as most and least suitable for teaching the machine manufacturing technology project keep these positions under the PSI, SRP, RAM, and PIV methods as well as the traditional ranking approach.
- There is potential to extend the evaluation methods applied to the machine manufacturing technology project to rank lecturers teaching other courses, providing flexibility in the evaluation process.

- When the importance of the criteria is to be determined, for example from the opinions of university administrators, weighting methods such as PIPRECIA (Dragisa et al., 2021) and LOPCOW (Ecer and Pamucar, 2022) can be employed to ensure accuracy and adaptability to the specific environment.
- In summary, the research provides comprehensive insight into the lecturer evaluation process for the machine manufacturing technology project and opens up opportunities to extend these methods to other teaching areas.

Conflict of interests

The authors declare no conflict of interest.

Author Contributions

Do Duc Trung proposed the idea; Duong Van Duc and Nguyen Hoai Son contributed to data collection and analysis. Do Duc Trung and Alexandra Mittelman drafted the initial version. Branislav Dudic provided critical feedback on the work. All authors collaborated on revising the draft and reached consensus on the final version of the manuscript.

References

Akmaludin, A., Gernaria, E. S., Rinawati, R., Arisawati, E., & Dewi, L. S. (2023). Decision Support for Selection of The Best Teachers Recommendations MCDM-AHP and ARAS Collaborative Methods. Sinkron: Jurnal dan Penelitian Teknik Informatika, 8(4), 2036-2048. https://doi.org/10.33395/sinkron.v8i4.12354

Arifin, N., & Saputro, P. H. (2022). Selection Index (PSI) Method in Developing a Student Scholarship Decision Support System. International Journal of Computer and Information System, 3(1), 12-16.

Ayyildiz, E., Murat, M., Imamoglu, G., & Kose, Y. (2023). A novel hybrid MCDM approach to evaluate universities based on student perspective. Scientometrics, 128, 55-86. https://doi.org/10.1007/s11192-022-04534-z

Bafail, O. A., Abdulaal, R. M. S., & Kabli, M. R. (2022). AHP-RAPS Approach for Evaluating the Productivity of Engineering Departments at a Public University. Systems, 10(107). https://doi.org/10.3390/systems10040107

Do, D. T. (2024). Assessing the Impact of Criterion Weights on the Ranking of the Top Ten Universities in Vietnam. Engineering, Technology & Applied Science Research, 14(4), 14899-14903. https://doi.org/10.48084/etasr.7607

Do, D. T., Tran, V. D., Duong, V. D., & Nguyen, N. T. (2023). Investigation of the appropriate data normalization method for combination with Preference Selection Index method in MCDM. Operational Research in Engineering Sciences: Theory and Applications, 6(1), 44-64. https://oresta.org/menu-script/index.php/oresta/article/view/329

Dragisa, S., Darjan, K., & Gabrijela, P. (2021). Ranking alternatives using PIPRECIA method: A case of hotels' website evaluation. Journal of Process Management and New Technologies, 9(3-4), 62-68. https://doi.org/10.5937/jouproman2103062S

Dua, T. V. (2024). PSI-SAW and PSI-MARCOS Hybrid MCDM Methods. Engineering, Technology & Applied Science Research, 14(4), 15963-15968. https://doi.org/10.48084/etasr.7992

Dua, T. V., Duc, D. V., Bao, N. C., & Trung, D. D. (2024). Integration of objective weighting methods for criteria and MCDM methods: application in material selection. EUREKA: Physics and Engineering, 2, 131-148. https://doi.org/10.21303/2461-4262.2024.003171

Ecer, F., & Pamucar, D. (2022). A novel LOPCOW-DOBI multi-criteria sustainability performance assessment methodology: An application in developing country banking sector. Omega, 112, Art. No. 102690. https://doi.org/10.1016/j.omega.2022.102690

Ekinci, Y., Orbay, B. Z., & Karadayi, M. A. (2022). An MCDM-based game-theoretic approach for strategy selection in higher education. Socio-Economic Planning Sciences, 81, 101186. https://doi.org/10.1016/j.seps.2021.101186

Ghorui, N., Ghosh, A., Mondal, S. P., Kumari, S., Jana, S., & Das, A. (2021). Evaluation of Performance for School Teacher Recruitment Using MCDM Techniques with Interval Data. Multicultural Education, 7(5), 380-395. https://doi.org/10.5281/zenodo.4837226

Girvan, C., Conneely, C., & Tangney, B. (2016). Extending experiential learning in teacher professional development. Teaching and Teacher Education, 58, 129-139. https://doi.org/10.1016/j.tate.2016.04.009

Ha, L. D. (2023). Selection of Suitable Data Normalization Method to Combine With the CRADIS Method for Making Multi-Criteria Decision. Applied Engineering Letters, 8(1), 24-35. https://doi.org/10.18485/aeletters.2023.8.1.4

Hoang, X. T. (2023). Multi-objective optimization of turning process by FUCA method. Strojnicky casopis - Journal of Mechanical Engineering, 73(1), 55-66. https://doi.org/10.2478/scjme-2023-0005

Kalyan, M., & Pramanik, S. (2019). Multi-criteria Group Decision Making Approach for Teacher Recruitment in Higher Education under Simplified Neutrosophic Environment. Neutrosophic Sets and Systems, 6, 28-34.

Komasi, H., Nemati, A., Hashemkhani Zolfani, S., Williams, N. L., & Bazrafshan, R. (2024). Investigating the effects of COVID-19 on tourism in the G7 countries. Technological and Economic Development of Economy, 30(4), 1064-1086. https://doi.org/10.3846/tede.2024.20821

Le, H. A., Hoang, X. T., Trieu, Q. H., Pham, D. L., & Le, X. H. (2022). Determining the Best Dressing Parameters for External Cylindrical Grinding Using MABAC Method. Applied Sciences, 12(16), 8287. https://doi.org/10.3390/app12168287

Malik, D. A. A., Yusof, Y., & Khalif, K. M. N. K. (2021). A view of MCDM application in education. Journal of Physics: Conference Series, 1988, 012063. https://doi.org/10.1088/1742-6596/1988/1/012063

Maniya, K., & Bhatt, M. G. (2010). A selection of material using a novel type decision-making method: Preference selection index method. Materials & Design, 31(4), 1785-1789. https://doi.org/10.1016/j.matdes.2009.11.020

Mian, S. H., Nasr, E. A., Moiduddin, K., Saleh, M., Abidi, M. H., & Alkhalefah, H. (2024). Assessment of consolidative multi-criteria decision making (C-MCDM) algorithms for optimal mapping of polymer materials in additive manufacturing: A case study of orthotic application. Heliyon, 10, Art. No. e30867. https://doi.org/10.1016/j.heliyon.2024.e30867

Monalisa, R., & Kusnawi, K. (2017). Decision support system of model teacher selection using PROMETHEE method. International Conference on Innovative and Creative Information Technology (ICITech). https://doi.org/10.1109/INNOCIT.2017.8319147

Mufazzal, S., & Muzakkir, S. (2018). A New Multi-Criterion Decision Making (MCDM) Method Based on Proximity Indexed Value for Minimizing Rank Reversals. Computers & Industrial Engineering, 119, 427-438. https://doi.org/10.1016/j.cie.2018.03.045

Munna, A. S., & Kalam, M. A. (2021). Teaching and learning process to enhance teaching effectiveness: a literature review. International Journal of Humanities and Innovation (IJHI), 4(1), 1-4. https://doi.org/10.33750/ijhi.v4i1.102

Nguyen, H. S., Hieu, T. T., Thang, N. M., Tan, H. N., Can, N. T., Thao, P. T., & Bao, N. C. (2024). Selection of Crankshaft Manufacturing Material by the PIV Method. Engineering, Technology & Applied Science Research, 14(4), 14848-14853. https://doi.org/10.48084/etasr.7514

Oliver, R. M., & Reschly, D. J. (2007). Effective Classroom Management: Teacher Preparation and Professional Development. National Comprehensive Center for Teacher Quality, Washington, USA.

Sirigiri, P., Hota, H. S., & Sharma, L. K. (2015). Students Performance Evaluation using MCDM Methods through Customized Software. International Journal of Computer Applications, 130(15), 11-14. https://doi.org/10.5120/ijca2015907171

Sotoudeh-Anvari, A. (2023). Root Assessment Method (RAM): A novel multi-criteria decision making method and its applications in sustainability challenges. Journal of Cleaner Production, 423, Art. No. 138695. https://doi.org/10.1016/j.jclepro.2023.138695

Thinh, H. X., & Mai, N. T. (2023). Comparison of two methods in multi-criteria decision-making: application in transmission rod material selection. EUREKA: Physics and Engineering, 6, 59-68. https://doi.org/10.21303/2461-4262.2023.003046

Thinh, H. X., & Dua, T. V. (2024). Optimal Surface Grinding Regression Model Determination with the SRP Method. Engineering, Technology & Applied Science Research, 14(3), 14713-14718. https://doi.org/10.48084/etasr.7573

Toan, P. N., Dang, T. T., & Hong, L. T. T. (2021). E-Learning Platform Assessment and Selection Using Two-Stage Multi-Criteria Decision-Making Approach with Grey Theory: A Case Study in Vietnam. Mathematics, 9(23), Art. No. 3136. https://doi.org/10.3390/math9233136

Trung, D. D., & Tung, N. N. (2022). Applying COCOSO, MABAC, MAIRCA, EAMR, TOPSIS and weight determination methods for multi-criteria decision making in hole turning process. Strojnicky casopis - Journal of Mechanical Engineering, 72(2), 15-40. https://doi.org/10.2478/scjme-2022-0014

Trung, D. D., Dudic, B., Duc, D. V., Son, N. H., & Asonja, A. (2024). Comparison of MCDM methods effectiveness in the selection of plastic injection molding machines. Teknomekanik, 7(1), 1-19. https://doi.org/10.24036/teknomekanik.v7i1.29272

Trung, D. D., Dudic, B., Dung, H. T., & Truong, N. X. (2024). Innovation in financial health assessment: Applying MCDM techniques to banks in Vietnam. ECONOMICS - Innovative and Economics Research Journal, 12(2). https://doi.org/10.2478/eoik-2024-0011

Trung, D. D., Duc, D. V., Bao, N. C., & Thuy, D. T. T. (2024). Using the root assessment method to choose the optimal solution for mushroom cultivation. Yugoslav Journal of Operations Research. https://doi.org/10.2298/YJOR240115007T

Trung, D. D., Dudic, B., Nguyen, N. T., & Asonja, A. (2024). Data Normalization for Root Assessment Methodology. International Journal of Industrial Engineering and Management, 15(2), 156-168. https://doi.org/10.24867/IJIEM-2024-2-354

Trung, D. D. (2021a). A combination method for multi-criteria decision making problem in turning. Manufacturing Review, 8, Art. No. 26. https://doi.org/10.1051/mfreview/2021024

Trung, D. D. (2021b). Application of TOPSIS and PIV methods for multi-criteria decision making in hard turning process. Journal of Machine Engineering, 21(4), 57-71. https://doi.org/10.36897/jme/142599

Trung, D. D. (2022). Expanding Data Normalization Method to CODAS Method for Multi-Criteria Decision Making. Applied Engineering Letters, 7(2), 54-66. https://doi.org/10.18485/aeletters.2022.7.2.2

Trung, D. D., & Thinh, H. X. (2021). A multi-criteria decision-making in turning process using the MAIRCA, EAMR, MARCOS and TOPSIS methods: A comparative study. Advances in Production Engineering & Management, 16(4), 443-456. https://doi.org/10.14743/apem2021.4.412

Truong, N. X., Asonja, A., & Trung, D. D. (2023). Enhancing Handheld Polishing Machine Selection: An Integrated Approach of MACROS Methods and Weight Determination Techniques. Applied Engineering Letters, 8(3), 131-138. https://doi.org/10.18485/aeletters.2023.8.3.5

Ulutaş, A., Popovic, G., Radanov, P., Stanujkic, D., & Karabasevic, D. (2021). A new hybrid fuzzy PSI-PIPRECIA-COCOSO MCDM based approach to solving the transportation company selection problem. Technological and Economic Development of Economy, 27(5), 1227-1249. https://doi.org/10.3846/tede.2021.15058

Ventista, O. M., & Brown, C. (2023). Teachers' professional learning and its impact on students' learning outcomes: Findings from a systematic review. Social Sciences & Humanities Open, 8(1), 100565. https://doi.org/10.1016/j.ssaho.2023.100565

Zakeri, S., Chatterjee, P., Konstantas, D., & Ecer, F. (2023). A decision analysis model for material selection using simple ranking process. Scientific Reports, 13, Art. No. 8631. https://doi.org/10.1038/s41598-023-35405-z

Zakeri, S., Chatterjee, P., Konstantas, D., & Ecer, F. (2024). A comparative analysis of simple ranking process and faire un Choix Adéquat method. Decision Analytics Journal, 10, Art. No. 100380. https://doi.org/10.1016/j.dajour.2023.100380
