An analysis of two evaluative models for a university MA English for Communication course
by Phillip Wade
Phillip Wade Lancaster University philawade@gmail.com
Published in Training, Language and Culture Vol 1 Issue 1 (2017) pp. 77-91 doi: 10.29366/2017tlc.1.1.5 Recommended citation format: Wade, P. (2017). An analysis of two evaluative models for a university MA English for Communication course. Training, Language and Culture, 1(1), 77-91. doi: 10.29366/2017tlc.1.1.5
Choosing the most appropriate evaluation model for a university course can be a challenging task. This paper builds a RUFDATA framework to enable the presentation, analysis, application and comparison of Developmental Evaluation and Utilisation-focused Evaluation models as applied to a French university language course. A tailored integrated model is detailed, which embraces the suitable aspects of both models and utilises a digital evaluation tool from the business world to increase efficiency in the given teaching context. The conclusion highlights the need for a personalised solution to every course evaluation and provides an example on which other teachers can base their own evaluation decisions.
KEYWORDS: evaluation, language testing, RUFDATA, developmental evaluation, Utilisation-focused Evaluation, communication course
This is an open access article distributed under the Creative Commons Attribution 4.0 International License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0)
1. INTRODUCTION
Evaluations play an important role in French university courses. They enable the course teacher, and often creator, to make assessments about the content and the teaching. Scriven (2003) defines evaluation as 'the process of determining the merit, worth, or significance of things' (Scriven, 2003, p. 13). In the French higher education context, course evaluation data is generally gathered from student feedback at the end of courses and based on their opinions of what was taught and how. The feedback is extremely valuable for course and teaching improvement on an academic and professional level. Evaluations
can be placed on a continuum with formative, 'in progress' monitoring at one end and summative, final 'completion' assessment at the other. Scriven (1996) argues that 'the formative vs summative distinction is context-dependent' (Scriven, 1996, p. 153), so we can assume a large influence of the teacher, the students, the course, external factors and even the evaluator on the type of evaluation chosen. The context could even necessitate the combination of both formative and summative evaluations. For example, an evaluator may utilise a formative evaluation to enhance the course via weekly low-stakes learning diaries and complement it with a summative high-stakes written
test to understand how well the course content and teaching style were received. Whichever type or form of evaluation model a teacher utilises, learning what their students think about aspects of their course is invaluable.
The first part of this paper describes a RUFDATA analysis for constructing a framework for conducting the course evaluation. Two contrasting evaluation models are then presented, applied, analysed and assessed with regard to their compatibility with my objectives and the context. Finally, a comparison is conducted and a proposed evaluative solution is presented.
2. SETTING
The chosen course for this analysis is an MA English for Communication course which I have been developing, teaching and testing for three years at a university in France. The students are aged between 21 and 30. They attend various core content courses but this is their only English one. I have a 13-week window to teach and test the
students and then to retest any failed students.
I was provided with the official course title and a concise overview of the course aims three years ago. I then created the first course iteration and have slightly adapted it and my teaching method since, based on informal student feedback and test scores. I consider each version of the course to have been successful, as I was always asked to repeat it by the course administrator. There have been no complaints and the pass rate has been almost 100 per cent. Nevertheless, I would like to improve the course and the test, both for the students and for my own professional development.
3. MOTIVATION
Chelimsky (1997) suggests three general 'evaluation perspectives', or motivations, that underlie every evaluation and can be utilised to categorise them: evaluation for accountability, evaluation for development, and evaluation for knowledge.
As evaluations occur in the real world and involve real people, motivations are neither clear cut nor easily quantifiable. This is perhaps why Saunders (2006) notes that teachers may be motivated by two or even all three of Chelimsky's 'perspectives'. I am primarily motivated by Chelimsky's evaluation for 'knowledge', in order to gain greater understanding of the opinions of the students in relation to the course and the evaluation process. I am additionally motivated by 'development', as I wish to improve
the course and myself. There may be an additional element of evaluation for 'accountability', as I would like to demonstrate to my superiors how beneficial the course is and how much students like it. Positive results will certainly help me secure the course in future terms.
4. RUFDATA
Using online surveys or 'like sheets' and informal feedback conversations is a useful way of gathering opinions, but not a basis for serious evaluation or course development policy.
A RUFDATA process, on the other hand, is utilised to develop the evaluation from an intention into an actionable process. 'RUFDATA is an acronym for the procedural decisions that would shape evaluation activity' (Saunders, 2000, p. 15). It begins by looking at the reasons and purposes (R) for the evaluation and its uses (U), and then at its foci (F), i.e. what is to be evaluated. This is followed by considering the data and its analysis (D), who the audience (A) of the results will be, and finally when the evaluation will take place (T) and who will conduct it (A). Completing a RUFDATA analysis provides evaluators with a valuable pre-evaluation 'reflective moment' to create a framework and road map for their evaluation. Saunders (2011) argues that RUFDATA 'involves a process of reflexive questioning during which key procedural dimensions of an evaluation are addressed leading to an accelerated induction to key aspects of
evaluation design' (Saunders, 2011, p. 16). For Saunders, RUFDATA is a 'meta-evaluative' tool that provides the missing planning link between objectives and the actual evaluation. Forss et al. (2002) note that 'more learning takes place before and during the evaluation, than after' (Forss et al., 2002, p. 40).
Therefore, we can see RUFDATA analysis not just as an evaluation planning tool but also as a guide for conducting the process; from an evaluator development viewpoint, it is fair to assume that the 'learning' also applies to the people involved.
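To make the acronym concrete, the brief Python sketch below (purely illustrative and not drawn from the RUFDATA literature; all field names and contents are placeholders of my own) records the seven procedural decisions as a single structured plan that an evaluator could complete before starting.

from dataclasses import dataclass
from typing import List

@dataclass
class RufdataPlan:
    """One record per planned evaluation, mirroring the RUFDATA questions."""
    reasons_and_purposes: List[str]  # R: why the evaluation is being undertaken
    uses: List[str]                  # U: what the results will be used for
    foci: List[str]                  # F: what exactly is to be evaluated
    data_and_evidence: List[str]     # D: what data will be collected and how analysed
    audience: List[str]              # A: who will receive the results
    timing: str                      # T: when the evaluation will take place
    agency: List[str]                # A: who will conduct it

# Placeholder contents loosely echoing the analysis in Section 5
plan = RufdataPlan(
    reasons_and_purposes=["continuous improvement of the course"],
    uses=["improve lessons and tasks", "align the course with the test"],
    foci=["speaking practice", "enjoyment", "test performance"],
    data_and_evidence=["surveys", "one-to-one interviews", "test scores"],
    audience=["course leader", "head of department", "administration"],
    timing="lessons 1-10, plus a post-test stage",
    agency=["course leader", "available colleagues"],
)
print(plan.foci)

Completing such a record is, in effect, the reflective moment Saunders describes; Section 5 performs the same exercise in prose.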
5. RUFDATA ANALYSIS
5.1 Reasons and aims
The reason for the analysis is simple: continuous improvement, to make the course more fulfilling for students. I have three aims in conducting an evaluation: (1) to measure how effectively the course helps students practise and improve their English speaking skills; (2) to assess how much students enjoy the course; (3) to evaluate how well the course helps students pass the test.
5.2 Uses
I hope to improve the course so as to provide better lessons and tasks that enable students to speak and help them increase their English speaking skills and level in an enjoyable environment. I also wish to adapt or rebuild the test so that there is better cohesion between the taught lessons and the
assessment at the end of term. They need to work seamlessly so the course not only helps students develop but also supplies them with the tools to pass the test.
5.3 Foci
The aim of the evaluation is firstly to uncover the opinions of each student on the effectiveness of the course in providing them with opportunities to speak English and on how far they feel their speaking skills have increased as a result. Secondly, it will look at their opinions on whether or not they have liked the course and, lastly, at the results of the tests, which will be analysed to establish how many students have passed and what the overall scores were.
I hope to understand what the students feel about the course and whether they believe it functions in relation to the test, i.e. whether the former prepares them for the latter and whether the latter is an accurate representation of the course content.
5.4 Data
The data for the first two objectives will be collected via open and more closed questions delivered through a written or spoken survey and possibly one-to-one interviews. The last objective requires statistical data on test scores: the percentages of students who passed and failed, and with what scores. This is obtainable from the final lesson, where the test takes place and scores are awarded.
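As a purely illustrative aid, the short Python sketch below shows the kind of summary the third objective requires; the class size, the scores and the 10/20 pass mark are assumptions made for the example, not data from this course.

from statistics import mean, median

def summarise_scores(scores, pass_mark=10):
    """Return the headline figures needed for objective 3: pass rate and score spread."""
    passed = [s for s in scores if s >= pass_mark]
    return {
        "students": len(scores),
        "pass_rate_pct": round(100 * len(passed) / len(scores), 1),
        "mean_score": round(mean(scores), 1),
        "median_score": median(scores),
        "lowest": min(scores),
        "highest": max(scores),
    }

# Hypothetical class of ten, marked out of 20 (not real course data)
print(summarise_scores([12, 15, 9, 14, 17, 11, 13, 10, 16, 8]))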
5.5 Audience
The results of the evaluation are mainly intended for myself as I design and teach the course. They will be made available to my boss and any involved students or interested colleagues as well as the administration department.
5.6 Timescale
The course has only ten lessons and the final one usually includes the test. Thus, I generally utilise session nine to revise and prepare students for the exam. As a result, the first two objectives could be assessed anywhere from lesson one or two up to lesson nine, or after the test. Objective 3 can only be measured after lesson ten. I will aim to begin the evaluation process at the beginning of the course.
5.7 Agency
The evaluation process will be conducted by myself and will include any other relevant teachers or department members who are suitable and available. The exact make-up of a possible evaluation group depends on the type of evaluation chosen at the end of this paper and on the availability of colleagues.
6. METHODOLOGY
The epistemological position adopted in this paper is grounded in a general constructivist perspective, which argues that we build our understanding of reality internally as we engage in what Piaget (1977) terms the 'construction of meaning'. The
process is ongoing as we learn and change our understanding based on interaction in the social world. Vygotsky (1978) refers to it as 'a developmental process deeply rooted in the links between individual and social history' (Vygotsky, 1978, p. 16). By embracing the influence of the external world and other people, we can adopt the term 'social constructivism' as used by Lodico et al. (2010), which reflects a relativist ontology in which reality is human experience and human experience is reality (Levers, 2013). This paper acknowledges the importance of experience but maintains the focus on the cognitive or 'brain-based' construction of meaning, as opposed to a more interpretivist perspective. Iofciu et al. (2012) believe that cognitive researchers must place those under study, the 'human instruments', i.e. people and their internal mental constructions, at the centre of the research. They suggest utilising portfolios, performance assessments, peer assessments and self-evaluation as effective research methods for uncovering what people understand.
To provide a more cohesive theoretical stance, this paper draws on Personal Construct Theory (PCT), which holds that a person's unique psychological processes are channelled by the way s/he anticipates events (Fransella, 2015). The prediction of events leads people to create a theory or construct, which they then assimilate or consolidate based on their experience. People are thus constantly experiencing, theorising, testing and
adapting their mental constructs. To encapsulate all these theories, we can label the theoretical standpoint of this paper as Personal Social Constructivism.
7. DEVELOPMENTAL EVALUATION
Two evaluation models were chosen for this analysis to represent both ends of the formative-summative scale, thus creating a more complete analysis. The first is Developmental Evaluation (DE) and the second Utilisation-focused Evaluation (UFE).
Developmental Evaluation (DE) can be perceived as a type of formative assessment solution, as it is concerned with activities 'in progress'. It was created for innovative and evolving situations as a flexible and adaptive method of evaluation. Bowen and Graham (2013) claim that it goes beyond simply evaluating a situation when they state that, reflecting the principles of complexity theory, it is used to support an ongoing process of innovation.
For Bowen and Graham (2013), DE is not only an
evaluative solution for innovative and adapting contexts but also a tool to enhance such innovation. Scriven (1996) ventures further by defining DE as 'an evaluation-related exercise, not as a different kind of evaluation' (Scriven, 1996, p. 158). Therefore, we can see that DE can be much more than just an evaluative solution.
A DE on its own is not always appropriate for every context, even an extremely innovative one, so it is often used in conjunction with other forms of evaluation. DE's process-based, responsive nature can clash with the more standard 'end result' evaluation which, in many organisations, is necessary for formal evaluations and for measurable, actionable results. DE cannot be clearly defined or pre-planned, and its outcomes are not predictable. Like the contexts it was created to evaluate, it is complex. Thus, a DE can be challenging to legitimise at the evaluation planning stage, its return on investment is hard to demonstrate at the end of the process, and even establishing success indicators mid-process is difficult.
Gamble (2008) explains the journey of DE as 'we move methodically from assessing the situation to gathering and analysing data, formulating a solution and then implementing that solution' (Gamble, 2008, p. 18). This is a five-step process but what is omitted is the subsequent loop between implementation at the end of a cycle and
the next data collection or feedback to begin the following iteration. Patton (2011) refers to it as 'double loop learning' and describes the key actions involved in a typical DE process as 'asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, program, product, and/or organisational development with timely feedback' (Patton, 2011, p. 258). These activities imply how active those involved in DE must be. To provide more tangibility and structure to the DE process, Dozois et al. (2010) suggest the development of a learning framework to map the key challenges and opportunities at the beginning.
A DE is a team activity where a group of stakeholders are led by a main evaluator. Patton (2008) asserts that the evaluator's primary function is to elucidate team discussions with evaluative questions, data and logic, and facilitate data-based decision-making in the developmental process. He or she assists, supports and enables the team to make progress. Their greatest challenges are perhaps giving stakeholders the power to develop the DE instead of controlling it themselves, responding quickly to change and recognising Patton's (2008) 'key developmental moments' that are unplanned but mark progress by indicating significant events.
Dozois et al. (2010) have several suggestions for the logistics of conducting a DE. Firstly, they
support utilising 'rapid reconnaissance' and 'rapid assessment' for the data collection. They describe fieldwork being conducted quickly on an 'as and when' basis. Rapid reconnaissance involves on the ground research via observations, informal interviews and surveys. Dozois et al. (2010) also recommend system mapping and outcome mapping for documenting the DE sessions but adapted documents could also be used to structure the sessions.
8. UTILISATION-FOCUSED EVALUATION (UFE)
8.1 General observations
Utilisation-focused evaluation (UFE) concentrates on the use of the evaluation, which in this setting is the assessment of my course to establish its quality. According to Patton (2003), 'Evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use' (Patton, 2003, p. 223).
UFE aims to create the best evaluation possible with a carefully planned, step-by-step design approach. It has a long-term vision, with each stage clearly marked out to reach the final goal of creating the ideal evaluation for that setting. According to Ramirez and Brodhead (2013), the attention is constantly on the intended use by intended users. We can consider UFE a
bottom-up, 'by the people for the people' evaluation design approach. Based on the constructivist perspective that everyone has different experiences and thus different constructions, we can deduce that every UFE-produced evaluation will be unique. The result is a product of the needs of the users but also of their knowledge, skills and teamwork.
Patton (2003) advocates an 'active-reactive-adaptive' relationship between the lead evaluator and the team members/intended users. This implies that the team members contribute, but the leader retains the overarching managerial role: to lead and manage (active), to listen and respond to member contributions (reactive) and to change the steps, the evaluation and the style of the process (adaptive). Patton highlights the important decisions made by members in more detail, stating that UFE 'is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation' (Patton, 2003, p. 223). He proposes a logical five-step procedure for conducting a UFE process, which is ideal for newcomers to evaluation.
1. Identify who needs the evaluation.
2. Commit and focus on the evaluation.
3. Confirm evaluation options.
4. Analyse, interpret and conclude.
5. Share findings.
Ramirez and Brodhead (2013) expanded the steps into a more detailed 12-stage version which, unlike Patton's, does not assume the context is automatically suitable for a UFE and so includes several important preparatory stages.
1. Assess the readiness of the program.
2. Assess the readiness of evaluators.
3. Identify primary intended users.
4. Conduct a situational analysis.
5. Identify primary intended uses.
6. Focus on the evaluation.
7. Design the evaluation.
8. Conduct a simulation of use.
9. Collect data.
10. Analyse data.
11. Facilitate use.
12. Conduct a meta evaluation.
It has a very clear design methods stage, and a simulation-of-use step tests whether the evaluator and the evaluation are ready for the data collection. For the data collection stage, Ramirez and Brodhead (2013) categorise data collection tools in UFE into 'perception (evaluator's own observations), validation (surveys and interviews) and documentation (desk review of documents and literature)' (Ramirez & Brodhead, 2013, p. 63). The final parts are aimed at making use of the results and then conducting 'an evaluation about the evaluation', which is useful for demonstrating return on investment (ROI) to superiors and for
assessing the process.
The 12-step process is a substantial undertaking even for an experienced evaluator. In my context, it seems unrealistic to run a weekly session on just one stage and expect full attendance and equal contributions. On account of the time allowance and schedules of potential teachers, four or five face-to-face sessions of a maximum of thirty minutes would probably be the most possible, and 100 per cent attendance would not be guaranteed. I also presume, based on my own knowledge and experience, that colleagues will be more interested in the final product and in actually conducting the evaluation than in the process of creating one. With this in mind, I have grouped the logistics of the 12-step process below into stages, indicating the people involved:
Stage 1
1. Assess the readiness of the program. (Course Leader)
2. Assess the readiness of evaluators. (Course Leader)
3. Identify primary intended users. (Emails to potential team members)
4. Conduct a situational analysis. (Course Leader)
Stage 2
5. Identify primary intended uses. (All)
6. Focus on the evaluation. (All)
7. Design the evaluation. (All)
Stage 3
8. Conduct a simulation of use. (Course Leader)
Stage 4
9. Collect data. (Course Leader)
10. Analyse data. (All)
Stage 5
11. Facilitate use. (Course Leader/All)
12. Conduct a meta evaluation. (Course Leader/All)
It is feasible that the process could function with just three or even fewer people, and entirely via online contact, so limited availability might not hamper the process.
8.2 Objectives compatibility
UFE can enable a final evaluation to reach all my objectives as long as the team members can agree to cover them and the data collection methods are suitable. The timing of the actual evaluation is perhaps the main challenge, as objective 3 will have a great deal of influence on the questions created for objectives 1 and 2. This can be termed the 'post-test results influence'. For example, a student earning a good grade will reply positively to questions about 'help' (objective 1) and 'enjoyment' (objective 2). Students will be naturally biased by their experience of their results, especially if the results did not confirm their expectations. Thus, there will be significant differences of opinion between those who expected and got the grades they wanted, those who did not, and those who actually performed better than expected. Objectives 1 and 2 could alternatively be addressed before the test via surveys, and objective 3 after it, as the latter relies on the test results. In this sense, the design process perhaps only needs to focus on the first two objectives.
8.3 Analysis
UFE provides a pragmatic approach for creating a tailored evaluation such as the one my course requires. Its analysis of the setting and the actors will bring clarity to the context and ensure the evaluation is suitable. The simulation is another valuable step in perfecting the final product, and the meta evaluation will be valuable for future evaluation processes. All these components are only possible via a significant investment of time and work, which is the main challenge in adopting a 12-step process.
The people factor is another obstacle. If the course leader is the only intended user of the evaluation, involving other people creates unintended use by unintended users, unless they can work on addressing their own objectives. As all that other users will probably desire is the final evaluation, i.e. a list of questions, justifying this long process and keeping people engaged would be problematic. Some would possibly not reply to emails, so it would not be clear whether they were still following the process or had quit.
Franke et al. (2002) note the negative repercussions that the dropout of team members and the recruitment of new people have on the evaluation process. This is the infamous 'Achilles heel of utilisation-focused evaluation' (Patton, 2003, pp. 232-233).
9. COMPARISON OF DE AND UFE
A DE is beneficial for my context, given the need for regular feedback to help plan each lesson and respond to a student body that changes through absence or dropout. A one- or two-lesson feedback loop would suit the context, whether that involves conducting an actual evaluation or data gathering and analysis. In comparison, a more summative UFE approach would provide valuable end-of-term feedback for adapting the course for the next year. From a social constructivist viewpoint, opinions take time to be constructed, and predictions to be made and confirmed or not. Thus, DE evaluates and utilises the ongoing construction of opinions at different stages, while UFE assesses the final constructed opinions of the course as a whole.
DE relies on change and events in the classroom
but if the context is stable and nothing is reported, there will be no need for DE meetings. The evaluation process will never even commence, as there is nothing to report and discuss. The chief evaluator will then have to create objectives and artificial points to work on, and the process will no longer be responsive but more of a UFE. At the other end of the scale, too many changes and too much data would overload the main evaluator. Hoping for an achievable level of events and timely data collection, analysis and outputs is wishful thinking. A UFE, by contrast, could easily establish stages with measured sub-objectives.
Both DE and UFE require a team led by the main evaluator, with team members who should be intended users, i.e. teachers who need a course evaluation. While the course leader can assume responsibility for a significant number of steps in UFE, this seems less possible in DE, as it is a more responsive process. Given limited teacher availability, there is the danger of minimal or zero involvement in a DE, which could effectively nullify its use; if this occurred midway through the process it would be particularly embarrassing.
The people involved with DE are often perceived as 'innovative', as they teach in a changing and evolving space and sign up for an innovative evaluation process. Such people would be inherently biased towards change and innovation, and perhaps the objectives would become second
to 'breaking boundaries' and even to cooperating towards shared goals. The 'let's see how it goes' mentality of DE would probably not sit well with the traditional end-of-term survey, and many organisations lack the innovative culture needed to fully embrace this new 'fly by the seat of your evaluative pants' approach.
The evaluation process in both DE and UFE is complex and requires deep understanding and preferably training, particularly for the lead evaluator. UFE's 12 steps need to be fully mastered, as does the role of the lead evaluator. Any error during the preliminary steps could damage the evaluation, while mistakes in, or omission of, subsequent steps would produce complications or even poor outcomes. The novice would certainly have a bumpy evaluative road ahead of them. In contrast, DE's more 'responsive' approach seems more flexible and the leader has more freedom, but in fact the lead evaluator must become a master of shaping and pushing the DE and, above all, of people management and development. In that respect, a UFE is more regimented, as you simply move through the clearly defined steps. A DE process could easily become derailed and wind down without a trained and experienced lead. According to Briedenhann and Butts (2005), 'participation and training in the evaluation process can build the capacity of stakeholders to conduct their own utilisation-focused evaluations, increasing the cost-effectiveness of the undertaking' (Briedenhann &
Butts, 2005, p. 241). This would help legitimise the time investment and attract participants.
10. PROPOSED INTEGRATED EVALUATION MODEL
If a particular educational context is not compatible with either a 100% DE or a 100% UFE approach, an integrated version drawing on aspects of both could be effective for reaching the objectives. Given the 'innovative', or rather 'changing', nature of the context, a tailored evaluative approach may be appropriate, as opposed to a fixed structure (UFE) or an adaptive format (DE). The setting requires the UFE-style focus on evaluating the course and reaching the key objectives, but with an emphasis on results and on creating an effective evaluation rather than an easy-to-use evaluation for the intended users. The latter does not guarantee useful results, and neither does a large team of intended users. Although the course leader may be the only person who directly needs the evaluation of the course, all the department's teachers could benefit from doing the
process as a form of training. The Head of Department might also be interested, as the results would be informative, and the Head of the Year would find the process and results useful. In general, I believe there is a strong case for making this a standardised, repeatable process for our department, so the inclusion of these team members is valid even if they are not currently intended users.
An important factor in this analysis has been the degree of involvement of team members or 'stakeholders' and finding an optimum balance given the availability of participants. According to Taut (2008), 'It seems indispensable for practicing evaluators to arrive at some trade-off between depth and breadth of stakeholder involvement' (Taut, 2008, p. 225). Yet a decrease in participation does not necessarily have to mean a drop in effectiveness. A substantial amount of the workload can be alleviated, and the data collection and analysis can be improved and certainly made faster, by embracing automated feedback software from the business world such as Officevibe. This software was designed to assist companies in assessing and improving staff engagement via attractive and easy-to-use online surveys followed by accurate reporting and suggestions for improving engagement. In UFE, the lead evaluator is generally responsible for a significant amount of the evaluation process logistics, and in DE, if they are the sole contact
with the students and the course to be evaluated, then they are automatically at the centre of the process and possibly the only person involved in data collection and in implementing the outcomes of the DE meetings. Using Officevibe, the DE evaluator and team can save time and energy by focusing on designing questions and making changes based on the automatically analysed results. It can all be carried out online and very quickly, as respondents can complete surveys as soon as they are shared. The app also provides a suggestion box for respondents' ideas and allows for anonymous post-feedback qualitative discussions between them and the evaluator, in what the company terms 'actionable feedback'. This adds a valuable qualitative dimension akin to one-to-one unstructured interviews, and the anonymity should increase the reliability of responses.
I propose using Officevibe to create and deliver questions assessing objectives 1 and 2 every two weeks, while objective 3 can be addressed with the test results, possibly followed by a final 'post-test' evaluation of all three objectives for comparison. This provides enough reflection time for predictions, comparisons and the construction of opinions. If the surveys are integrated into the course as compulsory homework, and the responses shared with the students and acted on, response rates will rise and there will be a stronger developmental element, with the evaluations contributing to the design of subsequent lessons
and my approach. Below is a suggested '12+4' process based on the DE and UFE models; a minimal sketch of the repeated cycle follows the list:
1. Analyse the students, the class and the teacher to see if they are suitable for an evaluation process of 10 weeks.
2. Build a team and agree on the objectives and actionable questions or complete them yourself.
3. Create survey 1 for objectives 1-2 and share it.
4. Correspond with students, read the suggestions.
5. Share reports with the team, analyse them.
6. Share results with the students, discuss them.
7. Make changes to the next lesson. Repeat steps 3 to 7 until the final test.
8. Analyse the final test results.
9. Create a final survey of all the objectives.
10. Correspond with any students and read the suggestions.
11. Share results with the students, analyse them.
12. Conduct a meta evaluation.
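The following Python sketch is a minimal illustration of how the repeated cycle (steps 3 to 7) sits inside the ten-week course; the function name, parameters and printed step descriptions are my own placeholders, and no Officevibe functionality or API is represented here.

def run_evaluation_cycles(course_weeks=10, cycle_length=2):
    """Walk through the repeated survey-feedback loop (steps 3-7) until the final test."""
    for cycle, week in enumerate(range(1, course_weeks + 1, cycle_length), start=1):
        end_week = min(week + cycle_length - 1, course_weeks)
        print(f"Cycle {cycle} (weeks {week}-{end_week}):")
        print("  3. Create and share a survey covering objectives 1 and 2")
        print("  4. Correspond with the students and read their suggestions")
        print("  5. Share the automated reports with the team and analyse them")
        print("  6. Share and discuss the results with the students")
        print("  7. Adjust the next lessons in light of the findings")
    print("After the final test: analyse the results, run a final survey of all")
    print("three objectives, share and discuss them, then conduct the meta evaluation.")

run_evaluation_cycles()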
This process surveys the students every two weeks, and each cycle will provide information to help improve the subsequent questions. For instance, perhaps question types with low response rates should be avoided, more concise wording will draw more attention, or students may prove reluctant to discuss feedback. It is important to vary the survey designs, as utilising the same questions five times would probably produce fewer answers because students would not see the point. Therefore, the process not only evaluates the course against the objectives but also inherently assesses itself. This is why I believe every cycle's results must be shared and discussed with the students; this involves them as stakeholders in the process.
11. CONCLUSION
Evaluations are both an interesting and a challenging activity for evaluators. The RUFDATA analysis in this paper demonstrates the need to create a framework for any possible evaluation, while the analysis and comparison of DE and UFE highlight the significant differences between just two types of evaluation. Even though Patton (2016) notes that DE is actually classified as a form of UFE, as it involves a focus on use by (current) users, this paper highlights significant differences which must be addressed when choosing a suitable evaluative solution. There is no 'one size fits all', which is why an integrated version tailored to the situation will successfully press more evaluative buttons than an 'off the shelf' model.
For instance, a short intensive course of five days cannot be evaluated using a team-based, continuous-cycle DE, but a final, more UFE-style evaluation could be enhanced with daily lesson evaluations if the teacher is able to adapt the following lessons. It could be further enhanced by involving the students as stakeholders and running end-of-day discussions or focus groups. In this way, what are seen as obstacles when adopting a traditional, rigid model approach should actually be viewed as opportunities through an integrated, tailored or simply 'do what needs to be done' lens.
In this paper, the proposed outcome is a 12-step model which will presumably evolve over time as it is applied to evaluations, as will the use of the Officevibe app as a tool for data collection and analysis. There must be an element of evaluating the evaluation, i.e. meta evaluation, as highlighted in this paper. Change is inherent in the evaluation context, as the people involved are constantly creating understanding, and factors such as lesson length, attendance and the levels of students can, and in this context do, always change. A good evaluation seeks results, and these are more important than how they were achieved, be that via a 100% DE, a UFE, a mix, or even a completely new type of evaluation.
References
Bowen, S. J., & Graham, I. D. (2013). From knowledge translation to engaged scholarship: promoting research relevance and utilization. Archives of Physical Medicine and Rehabilitation, 94(1), 3-8.
Briedenhann, J., & Butts, S. (2005). Utilization-focused evaluation. Review of Policy Research, 22(2), 221-243.
Chelimsky, E. (1997). Thoughts for a new evaluation society. Evaluation, 3(1), 97-109.
Dozois, E., Blanchet-Cohen, N., & Langlois, M. (2010). DE 201: A practitioner's guide to developmental
evaluation. JW McConnell Family Foundation.
Forss, K., Rebien, C. C., & Carlsson, J. (2002). Process use of evaluations: Types of use that precede lessons learned and feedback. Evaluation, 8(1), 29-45.
Franke, T. M., Christie, C. A., & Parra, M. T. (2002). Transforming a utilization focused evaluation (UFE) gone awry: A case of intended use by unintended users. Studies in Educational Evaluation, 29(1), 13-21.
Fransella, F. (2015). What is a personal construct? In D. A. Winter, & N. Reed (Eds.), The Wiley handbook of personal construct psychology (pp. 1-8). Wiley Blackwell.
Gamble, J. A. (2008). A developmental evaluation primer. Montreal: JW McConnell.
Iofciu, F., Miron, C., & Antohe, S. (2012). Constructivist approach of evaluation strategies in science education. Procedia-Social and Behavioral Sciences, 31, 292-296.
Levers, M. J. D. (2013). Philosophical paradigms, grounded theory, and perspectives on emergence. Sage Open, 3(4), 2158244013517243.
Lodico, M. G., Spaulding, D. T., & Voegtle, K. H.
(2010). Methods in educational research: From theory to practice (Vol. 28). John Wiley & Sons.
Patton, M. Q. (2003). Utilization-focused evaluation. In T. Kellaghan, & D. L. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 223-242). Dordrecht, Boston: Kluwer Academic Publishers.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage Publications.
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
Patton, M. Q. (2016). What is essential in
developmental evaluation? On integrity, fidelity, adultery, abstinence, impotence, long-term commitment, integrity, and sensitivity in implementing evaluation models. American Journal of Evaluation, 37(2), 250-265.
Piaget, J. (1977). The development of thought:
Equilibration of cognitive structures. New York: The Viking Press.
Ramirez, R., & Brodhead, D. (2013). Utilization focused evaluation: A primer for evaluators. Southbound.
Saunders, M. (2000). Beginning an evaluation with RUFDATA: Theorizing a practical approach to evaluation planning. Evaluation, 6(1), 7-21.
Saunders, M. (2006). The 'presence' of evaluation theory and practice in educational and social development: Toward an inclusive approach. London Review of Education, 4(2), 197-215.
Saunders, M. (2011). Setting the scene: The four domains of evaluative practice in higher education. In M. Saunders, P. Trowler, & V. Bamber (Eds.), Reconceptualising evaluation in higher education: The practice turn (pp. 1-17). Berkshire: Oxford University Press.
Scriven, M. (1996). Types of evaluation and types of
evaluator. American Journal of Evaluation, 17(2), 151-161.
Scriven, M. (2003). Evaluation theory and metatheory. In T. Kellaghan, & D. L. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 15-30). Dordrecht, Boston: Kluwer Academic Publishers.
Taut, S. (2008). What have we learned about
stakeholder involvement in program evaluation? Studies in Educational Evaluation, 34(4), 224-230.
Vygotsky, L. (1978). Mind in society. London, UK: Harvard University Press.