
Copyright © 2024 by Cherkas Global University

Published in the USA

International Journal of Media and Information Literacy

Issued since 2016.
E-ISSN: 2500-106X
2024. 9(2): 339-350

DOI: 10.13187/ijmil.2024.2.339 https://ijmil.cherkasgu.press

Information Literacy Assessment

Shahril Effendi Ibrahim a, *, Md Rosli Ismail a, Thirumeni T. Subramaniam a

a Open University Malaysia, Petaling Jaya, Malaysia

Abstract

The objective of this review is to present and evaluate the literature on the methods and practices of information literacy (IL) assessment in the academic environment. The review concentrates on the two categories of IL assessment: objective or test-based assessment and perception-based assessment. The rationale and objectives of IL assessment are examined. The paper also addresses a number of major IL standards and frameworks. Numerous IL assessment tools and instruments enumerated in the literature are discussed, and the reliability and validity of these measurements, as reported in the literature, are also considered. This article annotates 45 English-language periodical and peer-reviewed articles, reports, and IL standards and frameworks which address information literacy and information literacy assessment. The periodical and peer-reviewed articles were selected from ERIC, ProQuest Education Collection, EBSCO's Academic Search Ultimate and EBSCO Education Source. Each IL assessment tool and instrument has its advantages. The design of the most suitable IL assessment tool depends heavily on the institution's IL objectives and on its ability to balance those objectives against the institution's resources. The best IL instruments demonstrate their reliability and validity. The information in this article is primarily of use as a reference for academic librarians, educators and researchers. The literature discussed may also serve as a reference for future practice and research.

Keywords: information literacy, information literacy assessment, information literacy standards and framework, test-based assessment, perception-based assessment.

1. Introduction

Information literacy has been discussed in the workplace, academic and higher education sectors since the 1970s. The term information literacy was originally used by Paul G. Zurkowski in 1974. He defined an information literate person as someone who has mastered the use of a variety of information sources to address issues in daily life and at work (Zurkowski, 1974). In the educational context, the concept of information literacy was raised by Lee Burchinal at a Texas A&M University library symposium. He argued that information literacy goes beyond conventional literacy, which emphasizes only the ability to read and write. Burchinal linked information literacy with proficiency in acquiring and utilizing pertinent information for the purpose of problem-solving and decision-making (Burchinal, 1976).

Multiple definitions of information literacy can be found in the literature. The most generally accepted definition is that of the American Library Association (ALA), which defines information literacy as an integrated ability to "recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information" (ALA, 1989, para. 3).

* Corresponding author

E-mail addresses: [email protected] (S.E. Ibrahim)

2. Materials and methods

Objectives

The main objective of this review is to identify methods and measurement tools used in assessing information literacy (IL) competencies and skills among students in academia. Two assessment approaches are identified in this review: objective (test-based) assessment and perception-based assessment. From this purpose, two preliminary notions were derived for the literature searches: 'information literacy' and 'measurement'/'assessment'. As a result, the words and phrases 'assessment' or 'measurement' or 'evaluation' and 'information literacy' or 'information skills' were employed together in four major education databases, namely the Education Resources Information Center (ERIC), ProQuest Education Collection, EBSCO's Academic Search Ultimate and EBSCO Education Source (Appendix 1). Searches were mostly limited to peer-reviewed literature as a fundamental quality criterion; however, some conference papers and reports were also used as references. Relevant subject headings were employed in a formal manner, and searches were not restricted to a certain time frame.

Summary of result

The initial results of the searches are shown in Table 1. After a thorough reading of results from the four stated databases, a total of 79 articles were identified for further investigation. The categories, along with the number of relevant articles in each, are shown in Table 2. In addition to the peer-reviewed articles, several standards, guidelines and frameworks from recognized bodies in information literacy, such as the Association of College and Research Libraries (ACRL), the Society of College, National and University Libraries (SCONUL), and the Australian and New Zealand Institute for Information Literacy (ANZIIL), are used as references and citations in this review. Several reports on the development of information literacy assessment are also included.

Table 1. Initial result of the search

Database                          Number of Results
ERIC                              513
ProQuest Education                210
EBSCO Academic Search Ultimate    14
EBSCO Education Source            11

Table 2. Number of articles by categories

Category                                               Number of articles
Test-based assessment                                  45
Perception-based assessment                            21
Combination of self-assessment and test-assessment    13

3. Discussion

Objectives of information literacy assessment

Assessment is usually the best way to determine the effectiveness of an educational programme. The objective of assessment is "to measure institutional effectiveness and the quality of education" (Beile, 2008: 2). In the IL context, an assessment's main objective is to measure students' IL proficiency. IL assessments can be used, for example, to determine the effectiveness of an IL instruction programme, to align instruction more closely with IL learning objectives, to evaluate the effectiveness of changes in instructional programmes, and to improve the assessment process itself (Walters et al., 2020). Another reason for developing IL assessment is to gain data about students' information behaviour, as well as a greater understanding of their strengths and weaknesses (Oakleaf, 2009). Assessment of information literacy also contributes to curriculum development, measures and tracks student progress, and encourages reflection on the teaching-learning process (Singh, Joshi, 2013).

Information Literacy Standards and Framework

Many tests and questionnaires are used as instruments in assessing IL competencies and knowledge. Most of them follow the recommendations and topics presented in various IL standards, models and guidelines (Al-Qallaf, 2019; O'Connor et al., 2002; Podgornik et al., 2016). Each of these standards and guidelines has its own learning outcomes and goals (Hicks, Lloyd, 2023). Among the standards and guidelines referred to when establishing IL assessment instruments is the Information Literacy Competency Standards for Higher Education (ACRL, 2000). This document comprises five standards, 22 performance indicators and a range of specific outcomes. The ACRL Standards, however, have been completely revised in line with the changing landscape of information, data, media and technology (ACRL, 2015).

The Standards were rescinded and the new Framework for Information Literacy for Higher Education was introduced in 2016. Although the Standards no longer serve as the official guideline for the library profession's information literacy efforts, the specific skills they cover remain relevant in both the Framework and IL assessment (Graves et al., 2021). The Framework comprises six frames, and each frame consists of a concept central to IL, a set of knowledge practices, and a set of dispositions (ACRL, 2015). Unlike the ACRL Standards, the Framework is built on a collection of interconnected core concepts, offering a range of implementation possibilities rather than a set of standards or learning outcomes that must be achieved.

Besides the information literacy standards and guidelines for higher education, the ACRL has developed a few other, subject-based standards. One of them is the Information Literacy Standards for Science and Engineering/Technology (ALA, 2000; Singh, Joshi, 2013). Based on the ACRL Standards, it consists of five standards and 25 performance indicators. Each performance indicator is complemented by one or more outcomes, with the objective of evaluating the progress of science, engineering, and technology students at all levels of higher education toward information literacy. The Information Literacy Standards for Anthropology and Sociology Students, the Psychology Information Literacy Standards and the Information Literacy Competency Standards for Nursing, developed in 2008, 2010 and 2013 respectively, are other subject-based standards developed by the ACRL (ALA, 2006).

Another standard used as a guideline in establishing IL assessment is the SCONUL Seven Pillars of Information Literacy. The Society of College, National and University Libraries (SCONUL), the United Kingdom (UK) and Ireland-based professional association for academic and research libraries, developed the standard in 2011 (Society of College, National and University Libraries, 2011). It consists of seven pillars which, along with a number of awareness statements ("understands") and performance challenges ("is able to"), serve as the foundation for IL practice in higher education in the UK and Ireland. In New Zealand and Australia, the Australian and New Zealand Institute for Information Literacy (ANZIIL) developed the Australian and New Zealand information literacy framework, which is based on an earlier version produced by the Council of Australian University Librarians and on the ACRL Standards (Bundy, 2004). Six core standards and four guiding principles make up the ANZIIL framework, which is used to identify specific learning objectives.

Information Literacy Test-based Assessment

Considerable effort has gone into assessing IL objectively through a test-based approach. One method is fixed-choice tests such as multiple-choice, matching, and true-false tests (Mery et al., 2011). According to Walsh, based on his review of nine types of assessment tools, the multiple-choice questionnaire is by far the most popular method for assessing students' IL competencies (Walsh, 2009). Many studies list the advantages of multiple-choice questionnaires. This type of test is "easy to administer while also maximising scoring objectivity" (Rosman et al., 2015: 2). The authors also agree that the main advantage of such tests over self-assessment is their ability to prevent overestimation through deliberate over-reporting of abilities. Multiple-choice questions are a clear time-saver for teachers and instructors thanks to easy grading, especially with very large classes (Laprise, 2012). Laprise adds that this type of question also takes the least time for teachers or instructors to write. Walsh highlights, however, that most multiple-choice questionnaires make minimal effort to check the reliability or validity of their test instruments in assessing information literacy skills (Walsh, 2009). This is because many short multiple-choice tests are designed primarily to check knowledge and skills gained in specific library instruction sessions. Furthermore, assessing the reliability of a short multiple-choice test is always problematic.

Multiple-choice questionnaires, however, have several limitations. One limitation is that this type of test is unable to evaluate deep and thorough understanding and knowledge in IL. Such test-based IL assessments are not appropriate for addressing higher-order abilities since they only capture declarative knowledge (Scharf et al., 2007). However, Xu et al. argue that well-written tests are effective, versatile, and can measure and evaluate both higher-order and lower-order thinking abilities (Xu et al., 2016). Another drawback of the multiple-choice questionnaire is its lack of flexibility (Goebel et al., 2013): this type of question is rather difficult to adapt to institutional contexts and needs. Multiple-choice questions may also be unable to evaluate students' IL competencies comprehensively across all components of IL. Dunn noted that "Such test ... cannot assess the effectiveness of student search skills in real life situations" (Dunn, 2002: 28). Proficiency in conducting searches, or search skills, is regarded as one of the key competencies in information literacy.

Three multiple-choice assessment tools developed on the basis of the ACRL Standards were identified in this review: the Project Standardized Assessment of Information Literacy Skills (Project SAILS; Mery et al., 2011), Madison Assessment's Information Literacy Test (ILT) (Latham, Gross, 2011; Podgornik et al., 2016), and the Research Readiness Self-Assessment (RRSA) (Ivanitskaya et al., 2004). RRSA, however, adds true/false questions to its question format.

Project SAILS, a federally funded project initiated by Kent State University, was developed to create an information literacy assessment tool with proven validity and reliability that is easy to administer, standardized, and approved for use at any institution. Project SAILS developed a test bank of approximately 150 general information literacy test items and had 77 institutions participate in assessing their information literacy instruction programs (Beile, 2005). Although Project SAILS is constructed around the five ACRL standards, the fourth standard is not used, since some of its components are unsuitable for multiple-choice questions or are covered by outcomes and objectives from other standards (Lau et al., 2016).

The purpose of the Project SAILS test items is to assess the information literacy competencies of undergraduates; these competencies are general in nature, meaning they are not discipline-specific. Items in the test bank have also been used as a foundation and reference for developing other, discipline-specific IL assessment instruments. One such instrument is the Information Literacy Assessment Scale for Education (ILAS-ED), developed to measure teacher candidates' information literacy skill levels (Beile, 2005). It combines 22 multiple-choice test items with 13 demographic and self-perception items. In addition to the ACRL Standards, the ILAS-ED items were developed in alignment with the National Educational Technology Standards for Teachers (NETS-T Standards) (Beile, 2007). Other institutions have also used Project SAILS as a reference in establishing their own in-house IL tests; the University of Arizona Libraries (UAL) reported an initiative to develop and validate in-house test items using Project SAILS items for construct validity (Mery et al., 2011).

The ILT, a computer-based multiple-choice test, is described in detail in Enhancing skills, effecting change: Evaluating an intervention for students with below-proficient information literacy skills (Latham, Gross, 2011). It is a 60-item test which measures competency in four of the five ACRL standards. The test developers define three levels of competency: 90 % or greater is regarded as advanced IL competency; 65 % to 89 % is considered competent; and less than 65 % is considered below proficient. The goal of the test was to develop a reliable and validated instrument that could be adopted by other institutions to measure IL competencies based on the ACRL Standards (Cameron et al., 2007).
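As a minimal sketch of how these cut-offs behave (the banding logic follows the percentages above, but the function itself is hypothetical, not code from the ILT):

```python
# Hypothetical helper illustrating the ILT competency bands reported by
# Latham and Gross (2011); not part of the actual instrument.

def ilt_competency_level(correct: int, total: int = 60) -> str:
    """Map a raw score on a 60-item ILT-style test to a competency band."""
    pct = 100 * correct / total
    if pct >= 90:
        return "advanced"
    if pct >= 65:
        return "competent"
    return "below proficient"

print(ilt_competency_level(57))  # 95 % -> advanced
print(ilt_competency_level(40))  # ~67 % -> competent
print(ilt_competency_level(30))  # 50 % -> below proficient
```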

Besides RRSA, the Scale of Information Literacy Competency for Agriculture Postgraduate Students (SOILCAPS) also contains a combination of multiple-choice and true/false questions (Singh, Joshi, 2013). The instrument was developed for master's degree students at an agricultural university in India. Like the three instruments discussed above, SOILCAPS is based on the ACRL IL Standards. All of these assessments have undergone meticulous construction, validity, and reliability checks, and they have proven useful in evaluating information literacy competencies (Goebel et al., 2013).
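The reliability checks mentioned throughout this literature are most often reported as internal consistency via Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores) for k items. The sketch below shows the standard computation on an invented 0/1 item matrix; it illustrates the formula only, not the procedure of any study cited here.

```python
# Cronbach's alpha on a respondents-by-items score matrix.
# Toy dichotomous data, invented purely for illustration.

def cronbach_alpha(scores: list[list[float]]) -> float:
    """scores[respondent][item] -> internal-consistency estimate."""
    k = len(scores[0])

    def var(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var([row[i] for row in scores]) for i in range(k))
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

data = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(data), 3))  # 0.8 on this toy matrix
```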

Another instrument which measures information literacy competencies using a test-based approach is the Threshold Achievement Test of Information Literacy (TATIL). Unlike the other instruments, this test was designed and developed by Carrick Enterprises and is based on the more recent ACRL Framework for Information Literacy for Higher Education. The test consists of four modules, and each module relates to one or more frames from the ACRL Framework (LeMire et al., 2021).

There are other IL tests established in-house to evaluate the outcomes of teaching information literacy concepts and skills. University of Maryland University College (UMUC) reported establishing its own in-house IL test to accommodate online students in a non-proctored environment (Mulherrin, Abdul-Hamid, 2009).

IL test assessments are also used to evaluate specific groups of students and to measure one or more components of IL. One assessment was developed to evaluate source evaluation competencies (one of the components of IL) among journalism students (Bobkowski, Younger, 2020); it too was built on the Framework for Information Literacy for Higher Education. Another test-based IL assessment measuring specific IL competencies used a role-playing method to assess students in two competencies, namely searching for information and evaluating sources (Rieh et al., 2022). Finally, the Information Literacy Survey for Upper Secondary Students (ILSUS) was developed to measure the information literacy competencies of upper secondary students in Japan through Computer-Based Testing (CBT). The instrument implemented a large-scale survey based on Item Response Theory (IRT) (Shinohara, Horoiwa, 2021). The survey consists of 87 items in three formats (multiple choice, open resources and others), which had to be answered by operating the application. The research concluded that approximately 70 % of Japanese students who achieved competence level 4 or higher had a clear comprehension of information ethics and information security. They were also capable of successfully completing activities involving complicated and extensive amounts of information.
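The review does not reproduce ILSUS's exact IRT specification, so as a hedged sketch the widely used two-parameter logistic (2PL) model illustrates the general mechanism: the probability that an examinee of latent ability theta answers an item correctly is P = 1 / (1 + exp(-a(theta - b))), where b is the item's difficulty and a its discrimination. All parameter values below are invented.

```python
import math

# 2PL item response function: a generic IRT illustration, not the
# calibrated ILSUS model (whose parameters are not published here).

def p_correct(theta: float, a: float, b: float) -> float:
    """Probability that an examinee of ability theta answers correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An easy item (b = -1.0) versus a hard item (b = 1.5) for an
# average examinee (theta = 0.0):
print(round(p_correct(0.0, a=1.2, b=-1.0), 2))  # ~0.77
print(round(p_correct(0.0, a=1.2, b=1.5), 2))   # ~0.14
```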

Information Literacy Perception-based Assessment

Perception-based assessment, or self-assessment, is another approach to evaluating information literacy competencies. The assessment is based on students' perceptions of their own IL competencies and skills, and it has been one of the popular techniques for assessing professionals' and students' information literacy (Mahmood, 2016). From an educational psychology perspective, self-assessment is a viable method for measuring subjective abilities. Perceived capability in specific areas is frequently seen as a fundamental idea underpinning human motivation, performance achievements, and emotional well-being. As a result, IL perception-based assessment can have a favourable impact on effort output and task perseverance, particularly when faced with challenges (Schunk, 1984).

The most widely used IL self-assessment instrument is the Information Literacy Self-Efficacy Scale (ILSES) (Mahmood, 2017). This instrument's purpose is to measure students' IL self-efficacy, and it has been tested with high reliability and validity. Kurbanoglu et al. describe in detail the development of ILSES and how well it measures what it is intended to assess (Kurbanoglu et al., 2006). Participants in the research were 374 teachers in private and public schools. Although not elaborated in detail, the study "carefully considered and compared" previously published definitions and standards for IL, including Doyle's Rubrics for Information Literacy, the AASL & AECT Information Literacy Standards and the ACRL Information Literacy Competency Standards for Higher Education (Kurbanoglu et al., 2006: 738). The 28-item scale has also been used to develop IL scales in other fields.

One medical-specific scale, the Information Literacy Self-Efficacy Scale for Medicine (ILSES-M), is based on the ILSES. The 35-item scale is an expanded version created through the inclusion of medical discipline-specific items (Richardson, 2019). In recent years, owing to the changes in the information landscape and the internet's popularity since the scale's inception, the validity of ILSES has also been re-examined: the original seven-factor model was reduced to a four-factor model on the basis of the confirmatory factor analysis (CFA) literature and the model fit indices employed in a study of 253 undergraduate learners (Sommer et al., 2021).

Another instrument using the IL perception-based approach is the Perception of Information Literacy Scale (PILS) (Doyle et al., 2019). Developed in 2019 and based on the ACRL Framework for Information Literacy for Higher Education, the instrument evaluates information literacy competencies by determining how graduate students view their information literacy abilities, particularly where they place themselves on a continuum of understanding of IL competencies. It measures self-perceptions of IL competency on a developmental scale from novice to expert and consists of 36 items measuring seven different information literacy constructs.

Pinto describes in detail the design of a self-assessment approach to IL, namely the IL-HUMASS Survey on Information Literacy (Pinto, 2010). The assessment is intended to be applied to a population of students, teachers and librarians holding various degrees in the social sciences and humanities at Spanish and Portuguese universities. The 26-item survey is grouped into four categories: information search, assessment, processing and communication/dissemination. Three self-reporting dimensions, consisting of motivation, self-efficacy and preferred source of learning, are also included in the survey (Maidin et al., 2022). The self-assessment has also been used to examine the attitudes and opinions of psychology students in Spain and Portugal regarding belief-in-importance (BI), self-efficacy (SE), and favourite source of learning (SL) for information literacy (IL) competences (Pinto et al., 2021).

The Information Literacy Skills Questionnaire (SPIL-Q) is another IL measurement tool that uses a perception-based approach to gather information from students. This secondary instrument requires graduate business students to respond to six statements on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree) (Michalak, Rysavy, 2016). In that research, SPIL-Q was used in conjunction with the test-based Information Literacy Assessment (ILA) to compare students' perceptions of their IL skills with their actual test-assessed IL skills.
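As a hypothetical sketch of that pairing (the rescaling rule below is our assumption, not Michalak and Rysavy's published procedure), a mean SPIL-Q response can be put on the same 0-100 scale as a test score so that the perception-performance gap is directly readable:

```python
# Invented example: six SPIL-Q-style Likert responses versus a
# test-assessed percentage score. The 0-100 rescaling of the 1-5
# Likert mean is an illustrative assumption.

def likert_mean_pct(responses: list[int]) -> float:
    """Rescale the mean of 1-5 Likert responses to a 0-100 scale."""
    mean = sum(responses) / len(responses)
    return (mean - 1) / 4 * 100

spilq = [4, 5, 4, 4, 3, 5]   # perceived skills, strongly agree = 5
ila_pct = 58.0               # hypothetical test-assessed IL score

gap = likert_mean_pct(spilq) - ila_pct
print(f"perceived {likert_mean_pct(spilq):.0f} vs tested {ila_pct:.0f} "
      f"-> overestimation of {gap:.0f} points")
```

On these invented numbers the student rates their skills about 21 points above their tested performance, the overestimation pattern discussed under the Dunning-Kruger Effect below.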

The Information Skills Survey (ISS), meanwhile, is another self-report, perception-based assessment of information literacy competencies. Developed by the Council of Australian University Librarians (CAUL) in 2003, the ISS is used to assess the six standards presented in the Australia and New Zealand Information Literacy (ANZIIL) framework (Catts, 2003). The test has a 20-item form for the general social sciences and a 28-item form for law. The ISS employs a 4-point Likert scale (never, sometimes, frequently, always) to assess respondents' information literacy knowledge and proficiency in a range of information literacy tasks (Sparks et al., 2016). The development process of another statistically validated self-assessment scale for IL, built by integrating information literacy and academic writing, has also been reported (Yu, 2023). In Pakistan, a cross-sectional survey was conducted to determine students' perceived IL. According to the study, students' perception of their information proficiency was somewhat above average, and no statistically significant differences were observed by gender or academic year (Irfan et al., 2024).

Finally, the Informed Learning Scale is another tool, developed based on students' perceptions of using information to learn. Data produced from this assessment scale are used by instructors to refine learning outcomes, evaluation and teaching activities (Flierl et al., 2021).

However, not all perception-based assessment tools are used to assess IL proficiency and competencies; some are used to create metacognitive awareness and examine perceived use of IL knowledge. The Information Literacy Reflection Tool (ILRT) is one such tool, used to create metacognitive awareness rather than only to evaluate IL competencies. It was established to foster metacognitive awareness and critical reflection, and it acts as a teaching tool and a formative assessment. In line with PILS, which is based on the ACRL Framework for Information Literacy for Higher Education, the scale developed in the ILRT "promotes awareness of and reflection on IL concepts and strategies, but does not measure competency, skill, or achievement" (Robertson et al., 2022). The ILRT employs a Likert-style scale asking participants to rate their own reflections by frequency, as a percentage of time: items are "true of me" statements ranging from very untrue of me (0 % of the time) to very true of me (100 % of the time). Another self-report instrument developed to measure metacognitive awareness of information literacy is the Metacognitive Strategies for Library Research Skills Scale (MS-LRSS) (Catalano, 2017). Rather than being based on IL standards and guidelines, the initial MS-LRSS items used two metacognition instruments as a framework, namely the Metacognitive Awareness Inventory (MAI) and the State Metacognitive Inventory (SMI). After an expert review procedure, however, the items written for each subscale represent major information literacy skills as defined by the ACRL Standards. Respondents indicate their agreement with 21 statements on a 5-point Likert scale from Not at all (1) to Extremely (5). The scale was deployed to students from two private post-secondary institutions.

Perception-based assessment, however, is not a replacement for testing or examining actual information literacy competencies and skills (Mahmood, 2013; Rosman et al., 2015). Much of the literature reports that people tend to overestimate their IL capabilities relative to their real capabilities when performing IL perception-based assessment. This behaviour is referred to as the Dunning-Kruger Effect, proposed by Justin Kruger and David Dunning of Cornell University (Kruger, Dunning, 1999). Although their four studies were in the areas of humour, logical reasoning, and English grammar, many studies have confirmed the Dunning-Kruger Effect in other areas, including information literacy (Gross, Latham, 2009; Mahmood, 2013).

Combination of test-based and perception-based assessment

Besides objective-based assessment and self-perception assessment, other literature reports using a combination of both approaches. One example combines an IL objective-based assessment (the PIKE-P Test) with IL self-efficacy, a form of IL self-assessment (Rosman et al., 2015). The authors also suggest that both tests be complemented with several standardised information-searching tasks, and that the self-assessment take place at the end of the testing session.

Another assessment developed using a combination of both approaches is the Scale of Information Literacy Competency for Agriculture Postgraduate Students (SOILCAPS) (Singh, Joshi, 2013). The instrument is based on the ACRL Information Literacy Standards for Science and Engineering/Technology. The tool is intended for a pre-test and post-test setting and consists of two parts: Part 1 contains 37 multiple-choice questions and 25 true/false items, while Part 2 consists of non-scoring questions. The non-scoring questions relate to the use of various information resources and to experience in locating and utilizing data for educational purposes.

Finally, an innovative assessment tool has been introduced, specifically developed to gauge the visibility of information literacy services offered by Spanish university libraries. MeLIL, which stands for Metrics for Library Information Literacy, addresses the latest challenges in information literacy, including mobile learning, fake news, data literacy, and open science. The instrument comprises six criteria and 38 indicators (Pinto et al., 2024).

4. Results

The objective, test-based approach and the perception-based approach are the two approaches commonly used to determine levels of IL competencies and skills. Most literature in this review indicates how the reliability and validity of these assessment instruments have been checked, which speaks to the quality of the assessment tools.

The literature on test-based information literacy assessments emphasizes the use of standardized tests, objective measures, and performance tasks to evaluate students' abilities to recognise the need for information and to locate, evaluate, and use information effectively. These assessments are valued for their reliability and validity, providing quantitative data that can be compared across different populations and time periods. However, they often fail to capture the nuanced, contextual, and process-oriented aspects of information literacy; their primary limitation is a focus on end results rather than the learning process.

Perception-based assessments, meanwhile, focus on learners' self-reported confidence and perceived abilities in information literacy skills. These tools provide insights into students' attitudes, motivations, and self-efficacy, which are critical for fostering lifelong learning. While this approach offers valuable qualitative data, perception-based assessments are often subjective and may suffer from biases such as over- or underestimation of information literacy abilities; they are often criticized for their lack of objectivity and the difficulty of measuring actual skill acquisition.

Integrating test-based and perception-based assessments provides a more comprehensive evaluation of information literacy. The limits of each approach can be addressed by combining them, yielding a balanced assessment of the skills learned as well as the learner's self-awareness and confidence. This holistic approach can enhance instructional strategies, support personalized learning, and improve overall educational outcomes.

There are, however, other IL assessment types which can be used, such as portfolios, essays, observations, simulations and final grades. These subjective assessment approaches are discussed in a review by Walsh (Walsh, 2009). Many factors should be considered when deciding on an IL assessment approach and designing assessment tools. The main factor is the need to balance the purposes and objectives of the IL assessment against the capability to establish a tool that can assess the subjects' IL competencies and skills.

5. Conclusion

Assessing and evaluating information literacy (IL) competencies is essential for understanding and improving the efficacy and effectiveness of IL educational programs. The purpose of such assessment is to measure students' competence in IL, provide direction for curriculum development, and enhance instructional methods. There are two main methods for assessing information literacy: test-based and perception-based approaches. Test-based assessments, such as multiple-choice questions and standardized tests, provide unbiased and measurable information about students' competencies and abilities. They are well regarded for their dependability and ease of administration, but they may not fully capture the extent of students' comprehension and use of IL skills. Examples such as Project SAILS and the ILT are effective in assessing general competencies but sometimes do not adequately cover the intricate and practical aspects of IL in real-world scenarios. Perception-based assessments, in contrast, depend on students' self-reported confidence and perceived competencies. The Information Literacy Self-Efficacy Scale (ILSES) and the Perception of Information Literacy Scale (PILS) are useful tools for understanding students' attitudes, perceived abilities and self-awareness regarding information literacy. These evaluations can identify areas that need work and encourage deeper engagement with IL concepts. Nevertheless, they are essentially subjective and susceptible to biases such as the tendency to overestimate one's own competencies and skills.

By integrating these two approaches, namely test-based and perception-based, a comprehensive perspective on students' information literacy skills can be obtained. This integration combines the objective assessment of competencies with an awareness of learners' self-perceptions and levels of confidence. This comprehensive method promotes personalized learning and improves educational outcomes by considering both the knowledge obtained and the learners' self-assessment. As a result, it leads to more effective IL instruction and better curriculum design.

References

ACRL, 2000 - The Association of College and Research Libraries (ACRL). 2000. Information Literacy Competency Standards for Higher Education. [Electronic resource]. URL: https://alair.ala.org/items/294803b6-2521-4a96-a044-96976239e3fb

ACRL, 2015 - Association of College and Research Libraries (ACRL). 2015. Framework for Information Literacy for Higher Education. [Electronic resource]. URL: https://www.ala.org/acrl/standards/ilframework

ALA, 2000 - American Library Association (ALA). 2000. Information Literacy Competency Standards for Higher Education. [Electronic resource]. URL: http://www.ala.org/acrl/standards/informationliteracycompetency

ALA, 2006 - American Library Association (ALA). 2006. Guidelines, Standards, and Frameworks - Listing by Topic. [Electronic resource]. URL: https://www.ala.org/acrl/guidelines-standards-and-frameworks-listing-topic

Al-Qallaf, 2019 - Al-Qallaf, C.L. (2019). Information literacy assessment of incoming students in an information studies graduate program. Global Knowledge, Memory and Communication. 68(3): 223-241. https://doi.org/10.1108/GKMC-07-2018-0062

Beile, 2005 - Beile, P. (2005). Development and validation of the information literacy assessment scale for education (ILAS-ED). Paper presented at the Annual Meeting of the American Educational Research Association (Montreal, Canada, Apr 12, 2005). [Electronic resource]. URL: https://eric.ed.gov/?id=ED490206

Beile, 2007 - Beile, P. (2007). The ILAS-ED: A standards-based instrument for assessing pre-service teachers' information literacy levels. Paper presented at the Society for Information Technology & Teacher Education International Conference. Vol. 2007. [Electronic resource]. URL: https://www.researchgate.net/publication/279693641

Beile, 2008 - Beile, P. (2008). Information literacy assessment: A review of objective and interpretive measures. Proceedings of SITE 2008 - Society for Information Technology & Teacher Education International Conference. [Electronic resource]. URL: https://www.researchgate.net/publication/277197065

Bobkowski, Younger, 2020 - Bobkowski, P., Younger, K. (2020). News credibility: Adapting and testing a source evaluation assessment in journalism. College & Research Libraries. 81(5): 822-826. DOI: https://doi.org/10.5860/crl.81.5.822

Bundy, 2004 - Bundy, A. (ed.) (2004). Australian and New Zealand information literacy framework: Principles, standards and practice. Australian and New Zealand Institute for Information Literacy. [Electronic resource]. URL: https://adbu.fr/wp-content/uploads/2013/02/Infolit-2nd-edition.pdf

Burchinal, 1976 - Burchinal, L.G. (1976). The Communications Revolution: America's Third Century Challenge. Texas A & M University Library's Centennial Academic Assembly, Sept. 24, 1976: 1-9.

Cameron et al., 2007 - Cameron, L., Wise, S.L., Lottridge, S.M. (2007). The development and validation of the information literacy test. College & Research Libraries. 68(3): 229-236. DOI: https://doi.org/10.5860/crl.68.3.229

Catalano, 2017 - Catalano, A. (2017). Development and validation of the metacognitive strategies for library research skills scale (MS-LRSS). Journal of Academic Librarianship. 43(3): 178-183. DOI: https://doi.org/10.1016/j.acalib.2017.02.017

Catts, 2003 - Catts, R. (2003). Information skills survey for assessment of information literacy in higher education: administration manual. Council of Australian University Librarians.

Doyle et al., 2019 - Doyle, M., Foster, B., Yukhymenko-Lescroart, M.A. (2019). Initial development of the perception of information literacy scale (PILS). Communications in Information Literacy. 13(2): 205-227. DOI: https://doi.org/10.15760/comminfolit.2019.13.2.5

Dunn, 2002 - Dunn, K. (2002). Assessing information literacy skills in the California State University: A progress report. The Journal of Academic Librarianship. 28(1): 26-35. DOI: https://doi.org/10.1016/S0099-1333(01)00281-6

Flierl et al., 2021 - Flierl, M., Maybee, C., Bonem, E. (2021). Developing the Informed Learning scale: Measuring information literacy in higher education. College & Research Libraries. 82(7): 1004-1016. DOI: https://doi.org/10.5860/crl.82.7.1004

Goebel et al., 2013 - Goebel, N., Knoch, J., Thomson, M.E., Willson, R., Sharun, S. (2013). Making assessment less scary: Academic libraries collaborate on an information literacy assessment model. College and Research Libraries News. 74(1): 28-31. [Electronic resource]. URL: https://crln.acrl.org/index.php/crlnews/article/view/8883/0

Graves et al., 2021 - Graves, S.J., LeMire, S., Anders, K.C. (2021). Uncovering the information literacy skills of first-generation and provisionally admitted students. The Journal of Academic Librarianship. 47(1). DOI: https://doi.org/10.1016/j.acalib.2020.102260

Gross, Latham, 2009 - Gross, M., Latham, D. (2009). Undergraduate perceptions of information literacy: Defining, attaining, and self-assessing skills. College & Research Libraries. 70(4): 336-350. DOI: https://doi.org/10.5860/0700336

Hicks, Lloyd, 2023 - Hicks, A., Lloyd, A. (2023). Reaching into the basket of doom: Learning outcomes, discourse and information literacy. Journal of Librarianship and Information Science. 55(2): 282-298. DOI: https://doi.org/10.1177/09610006211067216

Irfan et al., 2024 - Irfan, N., Rafiq, M., Arif, M. (2024). Information competency assessment of undergraduates: A Pakistani perspective. IFLA Journal. 50(2): 354-364. DOI: https://doi.org/10.1177/03400352231222040


Ivanitskaya et al., 2004 - Ivanitskaya, L., Laus, R., Casey, A.M. (2004). Research readiness self-assessment: Assessing students' research skills and attitudes. Journal of Library Administration. 41(1-2): 167-183. DOI: https://doi.org/10.1300/J111v41n01_13

Kruger, Dunning, 1999 - Kruger, J., Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology. 77(6): 1121-1134. DOI: https://doi.org/10.1037/0022-3514.77.6.1121

Kurbanoglu et al., 2006 - Kurbanoglu, S.S., Akkoyunlu, B., Umay, A. (2006). Developing the information literacy self-efficacy scale. Journal of Documentation. 62(6): 730-743. DOI: https://doi.org/10.1108/00220410610714949

Laprise, 2012 - Laprise, S.L. (2012). Afraid not: Student performance versus perception based on exam question format. College Teaching. 60(1): 31-36. DOI: https://doi.org/10.1080/87567555.2011.627575

Latham, Gross, 2011 - Latham, D., Gross, M. (2011). Enhancing skills, effecting change: Evaluating an intervention for students with below-proficient information literacy skills. Canadian Journal of Information and Library Science. 35(4): 367-383. DOI: https://doi.org/10.1353/ils.2011.0029

Lau et al., 2016 - Lau, J., Machin-Mastromatteo, J.D., Gârate, A., Tagliapietra-Ovies, A.C. (2016). Assessing Spanish-speaking university students' info-competencies with iSkills, SAILS, and an in-house instrument: Challenges and benefits. Communications in Computer and Information Science. 676: 327-336. DOI: https://doi.org/10.1007/978-3-319-52162-6_32

LeMire et al., 2021 - LeMire, S., Xu, Z., Balester, V., Dorsey, L.G., Hahn, D. (2021). Assessing the information literacy skills of first-generation college students. College & Research Libraries. 82(5): 730-754. [Electronic resource]. URL: https://www.researchgate.net/publication/353096083_Assessing_the_Information_Literacy_Skills_of_First-_Generation_College_Students

Mahmood, 2013 - Mahmood, K. (2013). Relationship of students' perceived information literacy skills with personal and academic variables. Libri. 63(3): 232-239. DOI: https://doi.org/10.1515/libri-2013-0018

Mahmood, 2016 - Mahmood, K. (2016). Do people overestimate their information literacy skills? A systematic review of empirical evidence on the Dunning-Kruger Effect. Communications in Information Literacy. 10(2): 199-213. [Electronic resource]. URL: https://files.eric.ed.gov/fulltext/EJ1125451.pdf

Mahmood, 2017 - Mahmood, K. (2017). Reliability and validity of self-efficacy scales assessing students' information literacy skills: A systematic review. Electronic Library. 35(5): 1035-1051. DOI: https://doi.org/10.1108/EL-03-2016-0056

Maidin et al., 2022 - Maidin, F.N.M., Chui, P.L., Che, C.C., Lai, L.L., Hisham, R. (2022). Perceived information literacy among undergraduate medical students at a Malaysian public university. Malaysian Journal of Library and Information Science. 27(3): 129-143. DOI: https://doi.org/10.22452/mjlis.vol27no3.6

Mery et al., 2011 - Mery, Y., Newby, J., Peng, K. (2011). Assessing the reliability and validity of locally developed information literacy test items. Reference Services Review. 39(1): 98-122. DOI: https://doi.org/10.1108/00907321111108141

Michalak, Rysavy, 2016 - Michalak, R., Rysavy, M.D.T. (2016). Information literacy in 2015: International graduate business students' perceptions of information literacy skills compared to test-assessed skills. Journal of Business and Finance Librarianship. 21(2): 152-174. DOI: https://doi.org/10.1080/08963568.2016.1145787

Mulherrin, Abdul-Hamid, 2009 - Mulherrin, E.A., Abdul-Hamid, H. (2009). The evolution of a testing tool for measuring undergraduate information literacy skills in the online environment. Communications in Information Literacy. 3(2): 204-215. DOI: https://doi.org/10.15760/comminfolit.2010.3.2.82

O'Connor et al., 2002 - O'Connor, L.G., Radcliff, C.J., Gedeon, J.A. (2002). Applying systems design and item response theory to the problem of measuring information literacy skills. College & Research Libraries. 63(6): 528-543. [Electronic resource]. URL: https://eric.ed.gov/?id=EJ659651

Oakleaf, 2009 - Oakleaf, M. (2009). The information literacy instruction assessment cycle: A guide for increasing student learning and improving librarian instructional skills. Journal of Documentation. 65(4): 539-560. DOI: https://doi.org/10.1108/00220410910970249

Pinto et al., 2021 - Pinto, M., Fernández-Pascual, R., Lopes, C., Antunes, M.L., Sanches, T. (2021). Perceptions of information literacy competencies among future psychology professionals: a comparative study in Spain and Portugal. Aslib Journal of Information Management. 73(3): 345-366. DOI: https://doi.org/10.1108/AJIM-04-2020-0103

Pinto, 2010 - Pinto, M. (2010). Design of the IL-HUMASS survey on information literacy in higher education: A self-assessment approach. Journal of Information Science. 36(1): 86-103. DOI: https://doi.org/10.1177/0165551509351198

Podgornik et al., 2016 - Podgornik, B.B., Dolnicar, D., Sorgo, A., Bartol, T. (2016). Development, testing, and validation of an information literacy test (ILT) for higher education. Journal of the Association for Information Science and Technology. 67(10): 2420-2436. DOI: https://doi.org/10.1002/asi.23586

Richardson, 2019 - Richardson, B. (2019). Scale evaluating the information literacy self-efficacy of medical students created and tested in a six-year Belgian medical program. Evidence Based Library and Information Practice. 14(2): 128-130. DOI: https://doi.org/10.18438/eblip29564

Rieh et al., 2022 - Rieh, S.Y., Bradley, D.R., Genova, G., Roy, R.L., Maxwell, J., Oehrli, J.A., Sartorius, E. (2022). Assessing college students' information literacy competencies using a librarian role-playing method. Library & Information Science Research. 44(1): 101143. DOI: https://doi.org/10.1016/j.lisr.2022.101143

Robertson et al., 2022 - Robertson, S., Burke, M., Olson-Charles, K., Mueller, R. (2022). Metacognitive awareness for IL learning and growth: The development and validation of the Information Literacy Reflection Tool (ILRT). Communications in Information Literacy. 16(2): 58-89. [Electronic resource]. URL: https://eric.ed.gov/?id=EJ1380271

Rosman et al., 2015 - Rosman, T., Mayer, A.K., Krampen, G. (2015). Combining self-assessments and achievement tests in information literacy assessment: empirical results and recommendations for practice. Assessment and Evaluation in Higher Education. 40(5): 740-754. DOI: https://doi.org/10.1080/02602938.2014.950554

Scharf et al., 2007 - Scharf, D., Elliot, N., Huey, H.A., Briller, V., Joshi, K. (2007). Direct assessment of information literacy using writing portfolios. Journal of Academic Librarianship. 33(4): 462-477. DOI: https://doi.org/10.1016/j.acalib.2007.03.005

Schunk, 1984 - Schunk, D.H. (1984). Self-efficacy perspective on achievement behavior. Educational Psychologist. 19: 48-58. [Electronic resource]. URL: https://libres.uncg.edu/ir/uncg/f/D_Schunk_Self_1984.pdf

Shinohara, Horoiwa, 2021 - Shinohara, M., Horoiwa, A. (2021). Information literacy: Japan's challenge to measure skills beyond subjects. Educational Research. 63(1): 95-113. DOI: https://doi.org/10.1080/00131881.2020.1864221

Singh, Joshi, 2013 - Singh, D., Joshi, M.K. (2013). Information literacy competency of post graduate students at Haryana Agricultural University and impact of instruction initiatives: A pilot survey. Reference Services Review. 41(3): 453-473. DOI: https://doi.org/10.1108/RSR-11-2012-0074

Society of College..., 2011 - Society of College, National and University Libraries (SCONUL). 2011. The SCONUL Seven Pillars of Information Literacy Core Model. [Electronic resource]. URL: https://www.researchgate.net/publication/259341007_The_SCONUL_Seven_Pillars_of_Information_Literacy_Core_model

Sommer et al., 2021 - Sommer, M., Ritzhaupt, A.D., Hampton, J. (2021). Investigation of the validity evidence of the Information Literacy Self-Efficacy Scale (ILSES) among undergraduate students. Communications in Information Literacy. 15(1): 1-23. [Electronic resource]. URL: https://eric.ed.gov/?id=EJ1306516

Sparks et al., 2016 - Sparks, J.R., Katz, I.R., Beile, P.M. (2016). Assessing digital information literacy in higher education: A review of existing frameworks and assessments with recommendations for next-generation assessment. ETS Research Report Series. 2016(2): 1-33. DOI: https://doi.org/10.1002/ets2.12118

Walsh, 2009 - Walsh, A. (2009). Information literacy assessment: Where do we start? Journal of Librarianship and Information Science. 41(1): 19-28. DOI: https://doi.org/10.1177/0961000608099896

Walters et al., 2020 - Walters, W.H., Sheehan, S.E., Handfield, A.E., Lopez-Fitzsimmons, B.M., Markgren, S., Paradise, L. (2020). A multi-method information literacy assessment program: Foundation and early results. Portal: Libraries and the Academy. 20(1): 101-135. DOI: https://doi.org/10.1353/pla.2020.0006

Xu et al., 2016 - Xu, X., Kauer, S., Tupy, S. (2016). Multiple-choice questions: Tips for optimizing assessment in-seat and online. Scholarship of Teaching and Learning in Psychology. 2(2): 147-158. DOI: https://doi.org/10.1037/stl0000062

Yu, 2023 - Yu, C. (2023). Integrating information literacy and academic writing: Developing a self-assessment scale of information-based academic writing. The Journal of Academic Librarianship. 49(6): 102804. DOI: https://doi.org/10.1016/j.acalib.2023.102804

Zurkowski, 1974 - Zurkowski, P.G. (1974). The information service environment relationships and priorities. Related paper No. 5. [Electronic resource]. URL: https://eric.ed.gov/?id=ED100391

Appendix 1

ERIC

Search on 12 December 2023, 517 results

Search term - Using 'Subject heading - MAINSUBJECT'. Limited by peer-reviewed ERIC journal and full-text only.

Mainsubject ((assessment OR measurement OR evaluation)) AND mainsubject (("information literacy" OR "information skills"))

ProQuest Education

Search on 13 December 2023, 210 results

Search term - Using 'Subject heading - MAINSUBJECT'. Limited by peer-reviewed journal and full-text only.

Mainsubject ((assessment OR measurement OR evaluation)) AND mainsubject (("information literacy" OR "information skills"))

EBSCO Academic Search Ultimate

Search on 15 December 2023, 14 results

Subject term - Using 'SU - Subject term'. Limited by peer-reviewed journal and full-text only.

Subjectterms ((assessment OR measurement OR evaluation)) AND subjectterms (("information literacy" OR "information skills"))

EBSCO Education Source

Search on 15 December 2023, 11 results

Subject term - Using 'SU - Subject term'. Limited by peer-reviewed journal and full-text only.

SU (assessment OR measurement OR evaluation) AND SU ("information literacy" OR "information skills").
