
Copyright © 2016 by Academic Publishing House Researcher


Published in the Russian Federation. International Journal of Media and Information Literacy. Has been issued since 2016. E-ISSN: 2500-1051. Vol. 1, Is. 1, pp. 27-42, 2016.

DOI: 10.13187/ijmil.2016.1.27 www.ejournal46.com


Conceptual Challenges in Designing Measures for Media Literacy Studies

W. James Potter a, *, Chan Thai b

a University of California at Santa Barbara, Santa Barbara, USA b Santa Clara University, Santa Clara, USA

Abstract

The analysis of existing conceptualizations reported in this article revealed considerable gaps in the media literacy literature. While there are many definitions of media literacy, the existing definitions typically cluster around highlighting several components, especially skills and knowledge but also behaviors and affects. To a lesser extent there is a clustering around certain domains of skills and particular domains of knowledge. But at this point the conceptualizations stop providing detail, and this inadequate degree of specificity in the explication of media literacy requires researchers to fill in conceptual gaps in order to design their measures. The gaps have resulted in the design of a great many measures of questionable validity, which sets up a vicious cycle. Researchers who want to design a test of media literacy go to the literature for guidance; however, that literature shows them an overwhelming choice of definitions, with no single definition being regarded as the most useful one. Even more problematic is that none of the many definitions provides enough detail to guide researchers very far through the process of designing measures of media literacy. Until more fully explicated definitions of media literacy are offered to scholars, researchers will be left with little guidance, which will result in the continuation of inadequate conceptual foundations for their empirical studies and therefore a fuzzy and incomplete foundation to use as a standard for judging the validity of their measures.

Keywords: media literacy interventions, designing measures, validity, scholars, researchers, media, studies.

1. Introduction

While there is a large and growing literature that tests the effectiveness of media literacy interventions, there is reason to be skeptical about the value of the findings in that literature because of problems with the validity of the measures used in those studies (Potter & Thai, 2016).

In this essay, we analyze the problems found in that content analysis of the media literacy intervention literature and then focus attention on how measures of media literacy can be constructed in order to exhibit a higher degree of content and face validity. We present recommendations about how to handle eight measurement issues in three categories: measuring skills, measuring knowledge, and macro measurement issues.

* Corresponding author

E-mail addresses: wjpotter@comm.ucsb.edu (W. James Potter), chanlthai@gmail.com (Chan Thai)

2. Materials and methods

Analyzing Problems with Measuring Media Literacy

To create a basis for analyzing problems with measuring media literacy, we first have to clarify what validity is. Next we report the measurement practices we found in our content analysis of the media literacy intervention literature, which reveals some troubling patterns. Then we demonstrate that many of these measurement problems can be traced to the nature of definitional guidance in the broader media literacy literature.

Validity. Validity is the most fundamental criterion for judging the quality of measures in social science research (Brinberg & McGrath, 1985; Guilford, 1954; Nunnally, 1967; Rust & Golombok, 1989). As Chaffee (1991) writes, "Validity should not be equated with 'truth.' Disappointing as this might sound, the philosophical concept of truth is not a usable criterion" (p. 11). Instead, the criterion for judging validity within a research study is the meaning expressed by the authors of that study. "The question of validity is a question of 'goodness of fit' between what the researcher has defined as a characteristic of a phenomenon and what he or she is reporting in the language of measurement" (Williams, 1986: 21). Thus the standard for judging validity comes from the results of a meaning analysis, that is, analyzing the meanings that authors provide for their focal concepts.

There are two general kinds of validity - the logical/conceptual type and the empirical type (Babbie, 1992). The logical/conceptual type of validity is established through argumentation and expert judgments, while the empirical type relies on collecting data to show support for expectations about what the concept should (predictive and concurrent) and should not (discriminant) be related to. In our content analysis, we focused on two types of logical/conceptual validity - content validity and face validity - by examining how the authors of those studies conceptualized media literacy then comparing those authors' conceptualizations to the measures they used.

Content validity. Content validity focuses on the structure of the concept and is concerned with the degree to which the items used by researchers in their measurement set match the configuration of components in their presented definition of the concept. Bausell (1986) illustrates the essence of content validity with the question, "Do the different components of the measurement procedure (which are usually items) match the different constituents of the attribute being measured?" (p. 156). Those items need to be representative of the entire subject (Vogt, 2005).

What are the components of media literacy definitions? Scholars have provided many different definitions of media literacy (for a sampling see Table 1). Two components - knowledge and skills - appear most prevalently but there are also other components of behaviors, affects, and beliefs. Notice that within each of these components there is a variety of domains (see Table 1 for a sampling of definitions). For example, skills is a broad component that includes domains of accessing, interpreting, and producing media messages. The knowledge component has domains for information about the media industries, media content, media effects, and one's self. A category scheme of components and domains is a useful template in making a determination about content validity, that is, authors who specify certain components and domains in their definitions of media literacy should then be expected to provide measures for each of those specified components and domains. To illustrate, let's say that an author defines media literacy in terms of skills (e.g., evaluation and production) and knowledge (e.g., industry motives and character stereotypes). If that author measures skill with only one domain (such as evaluation but not production) and knowledge with only one domain (such as character stereotypes but not industry motives), then this leads to a judgment of faulty content validity, because this author failed to measure the full set of components/domains specified in the conceptual definition. The judgment of content validity then is the comparison of the configuration of components and domains in authors' conceptualization of media literacy with their set of measures.
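As a hedged illustration of how such a comparison can be made explicit, the sketch below (in Python) audits a hypothetical study by comparing the components and domains named in its definition against those represented in its measurement set; the component and domain labels are assumptions drawn from the example above, not a prescribed taxonomy.

```python
# Illustrative content-validity audit: compare the components/domains named in a
# study's definition of media literacy against those represented by its measures.
# The labels below are hypothetical examples, not a prescribed taxonomy.

defined = {
    "skills": {"evaluation", "production"},
    "knowledge": {"industry motives", "character stereotypes"},
}

measured = {
    "skills": {"evaluation"},                # production skill never measured
    "knowledge": {"character stereotypes"},  # industry motives never measured
}

def content_validity_gaps(defined, measured):
    """Return the components/domains that appear in the definition but have no measure."""
    gaps = {}
    for component, domains in defined.items():
        missing = domains - measured.get(component, set())
        if missing:
            gaps[component] = missing
    return gaps

print(content_validity_gaps(defined, measured))
# {'skills': {'production'}, 'knowledge': {'industry motives'}}
```

Any non-empty result from such an audit corresponds to the judgment of faulty content validity described above, unless the authors explicitly narrow the scope of their definition.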

Consider the situation where authors provide a definition of media literacy that includes several components and many domains. These authors might decide that using a full set of measures to address all the components/domains would place too high a burden on their research participants, so the authors select a sub-set of components/domains to measure. If these authors provide a clear rationale for the sub-set, then they narrow the scope of their definition in a scholarly manner and thus preserve the correspondence between their conceptual foundation and their set of measures. However, if the authors fail to provide a rationale for the narrowing of the definition, then the judgment of content validity must be based on the comparison of the sub-set of measures with the full definition, and thus the resulting judgment is that there is low content validity.

Face validity. Face validity requires a judgment about whether the measures are acceptable operationalizations of the ideas described in the definition. Thus, when authors provide a measure that matches the component/domain they claim it to be, there is a match on face validity. To illustrate a non-match, let's say that authors define media literacy with the skill component and a domain of critical thinking. In the methods section the authors say their measure of critical thinking consists of one item (I am confident about my ability to think critically about media messages) and a five point Likert type set of responses (1 = Strongly Agree, 2 = Agree, 3 = Neither Agree nor Disagree, 4 = Disagree, 5 = Strongly Disagree). When we compare this conceptualization to the measure, we can see a non-match, because the measure represents a belief component rather than a skills component, that is, the measure taps into the degree to which respondents believe they have critical thinking abilities instead of assessing their performance level on this skill. As a second example of a non-match, let's say the measure of critical thinking consists of a question (How often do you think critically when encountering media messages?) and respondents are presented with answer choices (1 = Never, 2 = Seldom, 3 = About Half the Time, 4 = Almost Always, 5 = Always). This reveals a non-match because the measure represents a behavior component rather than a skills component.

3. Discussion

Patterns in the Literature

The patterns we found in our content analysis indicate serious problems with content and face validity. Across the 88 media literacy intervention studies we analyzed, authors of 22 studies (25 %) provided no definition for media literacy, so it was not possible to make a judgment about validity; that is, it was impossible to compare measures to a conceptual foundation when no definition was provided. Of the 66 studies that did present a definition for media literacy, the authors of 45 (68.2 %) presented their own definition of media literacy. Therefore, only 21 (23.9 %) of those 88 studies used an existing definition of media literacy from the literature as the foundation for their study.

Of the 66 studies that presented a definition of media literacy, 53 (80.3 %) displayed significant problems with content and face validity. With content validity, there were many instances where authors would highlight a particular component or domain in their definitions but then provide no measures of that specified component or domain. For example, 59 studies presented a definition that included a skills component, but only 22 of those studies (37.3 %) presented a measure for any type of skill.

With face validity, there were many non-matches, because authors mis-attributed their measures. For example, with the 22 studies that said they provided measures for some sort of skill, only 12 actually measured a skill. These non-matches on skills were due to measures reflecting a belief component (I have confidence in my skills) or a behavior component (I typically analyze media messages) rather than the expected skills component. Non-matches were also common within the knowledge component where measures were often more about attitudes than they were assessments of what participants actually knew. For example, asking respondents to use a Likert intensity scale to react to a statement such as "The media industries are too concentrated" generates data about respondents' attitudes about the issue of ownership instead of their knowledge of facts about the issue.

Definitional Guidance

Recall from the previous section that in only 21 (23.9 %) of the 88 studies analyzed did authors use an existing definition of media literacy as a foundation for their study. Of the rest of the studies, 37.5 % presented no awareness of any definition of media literacy in the existing literature; 38.6 % showed awareness of definitions but rejected them all, preferring instead to present their own definition. This raises a question about how useful the existing definitions are to designers of media literacy intervention studies.

In order for definitions of concepts to be useful to designers of empirical studies, those definitions need a good degree of clarity and detail - or what Chaffee (1991) calls explication.

Chaffee has argued that many concepts in communication research have not been well explicated, so they have limited utility in forming a strong conceptual foundation for research studies. He explains that well explicated concepts meet the criteria of invariance of usage and specificity. Let's examine both of these criteria in relation to media literacy.

Invariance of usage. Chaffee (1991) argues that when researchers share a clear meaning of a concept, they can better generate a literature where measures are replicated across studies so that over time the definitional elements that are ambiguous can be clarified, and those definitional elements that are found to be faulty can be eliminated.

There are a great many meanings articulated for media literacy (Buckingham, 2003; Hobbs, 1998; Potter, 2009, 2010; Silverblatt, Ferry, & Finan, 2015; Tyner, 2009). One reason for this is that the literature is very large. A search of Google Scholar using "media literacy" as a keyword yields over 1.5 million hits of published books and articles on this topic. Another reason for the multiplicity of meanings is that those meanings have been produced by a wide range of scholars from different backgrounds (communication, psychology, education, public health, and advertising) and with different agendas (political action groups, consumer protection groups, public school systems, and governmental agencies from around the world).

4. Results

Specificity. Concepts that are presented in more detail provide a more fully articulated meaning and thereby provide more guidance to researchers as they design measures of that concept. Specificity refers to the degree to which authors provide detail when defining their concepts. When we analyze the definitions in the media literacy literature, we can see several layers of detail provided by definers of the concept. One layer of detail specifies different components, such as skills, knowledge, behaviors, and affects (see Table 1). This is a useful layer of detail to researchers because it delineates the need for different types of measures, that is, measuring a skill requires a different type of measure than assessing knowledge. Notice that many of these definitions provide another layer of detail by specifying domains within components. For example, some authors who specify the component of skills in their definition of media literacy provide more detail about which skills in the form of skills domains, such as accessing, interpreting, and producing. The knowledge component also displays some domain detail.

It is useful to think of the arrangement of definitional detail in a pyramid structure with the most general definitions of media literacy existing at the apex (see Figure 1). As we move down the pyramid toward the base, we encounter definitions of elements that are greater in specificity but narrower in scope than a general, all-encompassing definition. Think of the base of the pyramid as the individual measures. When researchers are designing their measures they start at the apex with the general umbrella type definition and work their way through each layer of detail down to the base. When this path through the layers of detail is complete, designers are provided with fully articulated guidance that makes the design task relatively easy. However, when layers of detail are missing, the challenge grows larger as designers must fill in the gaps with assumptions - either consciously or unconsciously. To the extent that these assumptions are made unconsciously, they fail to generate a scholarly treatment, and this can lead to a greater lack of face validity because the gap between expressed meaning and the measures intended to capture that meaning grows larger.

When we look at the definitional elements for media literacy presented in Table 1 and consider how they would fit into the pyramid in Figure 1, we can see that there is a fair amount of guidance for at least three levels - general definitions, major components, and domains within those major components. However, there are layers that are left largely unaddressed. These layers suggest conceptual issues that need to be considered when explicating the meanings of media literacy.

In the remaining sections of this article, we illuminate eight issues that need to be addressed - three issues about measuring skills, three issues about measuring knowledge, and two issues about measuring media literacy more generally. By presenting these issues, our intention is not to recommend one particular position on each issue. Instead we illuminate the various positions that can be taken on each issue and discuss their implications for measurement. Our purpose is to stimulate thinking and discussion so that these issues can be elevated from the unconscious level of unaddressed assumptions to a conscious level where scholars will start debating the various merits of different positions on each issue. Such a discussion will serve to provide more guidance to designers of various measures of media literacy. Then when future researchers design media literacy intervention studies, they will have more guidance to provide a more elaborate articulation of their meaning of media literacy and this will provide a stronger foundation for making judgments about the validity of their measures.

Measuring Skills

There are three issues that have been largely ignored or under-developed within the skills component. When researchers design their measures for media literacy skills, they must deal with all three of these issues. The more they are aware of these issues and their measurement options, the more they can design their measures with a higher degree of precision. These are the issues of specifying domains of skills, distinguishing between skills and competencies, and distinguishing between broad skills and skills that are special to media literacy.

Domains of Skills

Almost every conceptualization of media literacy suggests a skills component. Also, a high percentage of empirical tests of media literacy claim to deal with at least one skill. In our content analysis of the media literacy intervention literature, we found that 59 (89.4 %) of the 66 published studies that provided a media literacy conceptual foundation featured a skills component in their definitions.

The depth of treatment of skills can be assessed by comparing the definitions of skills presented by authors to a four-level scheme based on how much detail authors present when talking about media literacy skills. At the most general level, authors claim that skills are important to media literacy but do not mention what those skills are (cf. Alliance for a Media Literate America; Speech Communication Association, 1996, Standard 23). A second, more detailed level is reached when authors articulate what those skills are by mentioning specific skills (such as a particular kind of ability, typically accessing information, interpreting messages, or producing messages). A third level of depth is exhibited when scholars present their list of skills as a set, which leaves designers with the impression that the mentioned skills are more than suggested examples and constitute the complete list. For example, the often cited conceptualization by the National Leadership Conference on Media Literacy specifies four domains of skills: "decode, evaluate, analyze, and produce both print and electronic media" (Aufderheide, 1993: 79). And a fourth level of depth is reached when scholars take an additional step beyond presenting a set of skills and also offer definitions for each of those skills. To date there is only one example of this in the literature: Potter (2004) specifies seven skills in the domains of analysis, evaluation, classification, induction, deduction, synthesis, and abstraction (see Table 2).

When designers of media literacy intervention studies look for guidance in the literature, they are likely to see a big drop-off in the number of studies as they move from the first to the fourth level of depth. This pattern leaves designers with little guidance, such that they must fill in the gaps with their own definitions, which can result in definitional fragmentation. Even more troublesome is when designers rely on a definition of media literacy that calls for skills and names some skills but does not define those skills, leaving researchers to assume the meaning of those skills for themselves. An illustration of this is the use of the term "critical thinking." While many scholars use this term, few define it. It is difficult to infer what scholars who do not define the term mean by it, but they appear to exhibit a range of meanings. Some of these scholars seem to mean the ability to perceive more elements in media messages, which is really the skill of analysis, while other scholars seem to mean the ability to evaluate messages to uncover their faulty or exploitative nature. Some scholars seem to mean the ability to construct one's own meaning even when it is counter to the intention of the message designers, while others mean the ability to argue against intended meanings, and still others mean achieving a habit of being mindful during media exposures rather than allowing one's mind to continue on automatic pilot. Although we do not argue that one of these meanings is better than the others, we do argue that scholars who contend that skills are an essential part of media literacy need to do more than simply name a skill and assume that all readers will share the same meaning.

The current state of ambiguity over defining critical thinking has led researchers to operationalize it in a variety of ways, many of which fall outside the boundaries of what could be considered a measurement of a skill. For example, operationalizing a skill by posing a question that asks respondents how confident they are in their ability to critically analyze media messages is more a measure of belief than of skill level. Operationalizing a skill with a question that asks respondents how important it is that they critically analyze media messages is more a measure of motivation than of skill level. And asking respondents how often they critically analyze media messages is more a measure of recalled behavior than of skill level.

Perhaps the most significant problem with defining media literacy as critical analysis arises when researchers measure it as an attitude. For example, consider a study where researchers design an intervention that attempts to persuade teenage participants to avoid using alcohol by teaching them about the health risks and the criminal penalties. They design attitude items to measure the degree to which their participants are persuaded that alcohol use is not an acceptable behavior while they are teenagers. If the authors frame their research as a persuasion study, then the attitude outcome measures could be judged as valid. That is, researchers look for changes in the direction and magnitude of attitude scores between pre-intervention and post-intervention so that they can assess the degree of persuasiveness of the intervention. However, if the authors frame their study as a media literacy intervention study where their intention is to teach their participants to think for themselves and not accept claims made by authorities, then a pattern of thinking for one's self is not evidenced by participants' converging toward a shared attitude that demonstrates widespread acceptance of the authority position taught in the intervention. If "critical thinking" implies that people need to think for themselves instead of simply accepting what they are told, then accepting the attitudes that the media literacy intervention is teaching respondents represents uncritical persuasion rather than critical thinking.

Implications for measurement. When scholars specify one or more skill domains but leave those skills undefined, they leave a gap that must be filled by researchers who are forced to make their own assumptions to fill the gap. If scholars who attempt to define media literacy through a skills domain begin to provide more detail about their component's domains, then designers of skills measures will have more guidance. This additional level of detail can also help reviewers and readers understand more clearly the researchers' intentions and thereby make better evaluations about the measures' face validity.

Finally, scholars need to think of skills in terms of performance, and researchers need to measure skills by observing the actual performance of their participants. In the athletic realm, basketball coaches do not ask prospective players: How well do you shoot free throws? (very good, good, average, below average). Instead they observe the level of their performance. While determining the level of basketball players' free throw skill is relatively easy, determining the level of media literacy skills is much more challenging. However, researchers can begin working on this challenge by using a three step procedure. First, they need to clarify as much as possible what the skill is. Second, they need to think about what the various levels of performance are on the skill, then determine what observables would indicate performance at each level. And third, they need to think about the skill as requiring a sequence of sub-tasks, then design measures to track participants through the process of applying that skill in order to identify how far (and how well) each participant has moved through that process of performing the skills.
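To make the three-step procedure concrete, the following sketch encodes a hypothetical rubric for a single skill as an ordered sequence of observable sub-tasks and places a participant on the performance continuum according to how far his or her coded response progresses through that sequence. The skill, the sub-tasks, and the coded indicators are illustrative assumptions, not a validated instrument.

```python
# A minimal sketch of performance-based skill scoring: the rubric below is a
# hypothetical sequence of sub-tasks for the skill of "evaluation"; each function
# stands in for a coder's judgment about whether a participant's open-ended
# response shows evidence of that sub-task.

RUBRIC = [
    ("identifies an element in the message", lambda r: r.get("element_identified", False)),
    ("states a standard of comparison",      lambda r: r.get("standard_stated", False)),
    ("compares element to the standard",     lambda r: r.get("comparison_made", False)),
    ("justifies the resulting judgment",     lambda r: r.get("judgment_justified", False)),
]

def score_performance(response):
    """Return how far the participant progressed through the sequence of sub-tasks (0..len(RUBRIC))."""
    level = 0
    for _label, indicator_present in RUBRIC:
        if not indicator_present(response):
            break
        level += 1
    return level

# Hypothetical coded response from one participant
participant = {"element_identified": True, "standard_stated": True, "comparison_made": False}
print(score_performance(participant))  # 2 -> identified an element and a standard, then stopped
```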

Skills or Competencies

It is useful to make a distinction between skills and competencies. Competencies are relatively dichotomous, that is, either people are able to do something or they are not able (Potter, 2004). For example, people either know how to use a remote control device (RCD) to turn on their TV or they do not; they either know how to send someone a text message on a smart phone or they do not; and they either know how to recognize a word and match its meaning to a memorized denoted meaning or they do not. Competencies are relatively easy to learn, then once learned, they are applied automatically over and over again. Once a person has a competency, further practice makes almost no difference in the quality of their performance.

Skills, in contrast, are tools that people develop through practice. The ability to perform skills should not be regarded as dichotomous; instead skills exist along a wide continuum from novice to expert. People's level of performance on skills is highly variable, and there is always room for more improvement through practice.

Implications for measurement. Skills and competencies require very different measures. Competencies present a relatively easy challenge because they are binary, and researchers can trust self report measures. For example, researchers could simply ask something like "Can you show me how you use your smartphone to send text messages?" They do not need any more detail in the question. The answer to the question becomes obvious through observation.

Skills, in contrast, are much more challenging to measure. The task of assessing skill levels requires researchers to stimulate the performance of the skill so that the level of that performance can be observed in a way that participants can be placed on a continuum according to their levels of performance. It is faulty to assume that participants can rate the level of their own skills because most participants do not have an accurate idea of what skills performance means at levels above their own ability, so they do not have access to an understanding of the full continuum when rating their own skill level.

The challenge of measuring participants' skill levels lies in developing a template of indicators for each of those levels. Meeting this challenge requires researchers to be clear about what the function of a skill is and how that function can be performed at different levels of ability.
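One minimal way to keep this distinction visible in a dataset is to record competencies and skills as different kinds of observations, as in the sketch below; the labels and scale ranges are hypothetical.

```python
# A sketch of the measurement distinction drawn above: competencies are recorded as
# binary observations, while skills are recorded as positions on a performance
# continuum. The variable names and scale ranges are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CompetencyObservation:
    label: str
    demonstrated: bool      # observed yes/no, e.g., sent a text message when asked

@dataclass
class SkillObservation:
    label: str
    performance_level: int  # position on a rubric-based continuum
    scale_max: int          # top of the continuum, e.g., number of rubric levels

observations = [
    CompetencyObservation("send a text message", demonstrated=True),
    SkillObservation("evaluate a news story", performance_level=2, scale_max=4),
]

for obs in observations:
    if isinstance(obs, CompetencyObservation):
        print(f"{obs.label}: {'can do' if obs.demonstrated else 'cannot do'}")
    else:
        print(f"{obs.label}: level {obs.performance_level} of {obs.scale_max}")
```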

Broad or Specialized

This next skills issue requires us to think about whether the skills mentioned are broad ones or whether they are specific to media literacy. Broad skills are cognitive processes that people use throughout their lives in a wide range of situations. In contrast, there are skills that are only applicable to interacting with the media in literate ways. Examples of this kind of skill are things like writing computer code for a laptop, troubleshooting problems with smartphones, lighting a scene to make a photograph that conveys a particular emotion, etc.

Implications for measurement. If skills are regarded as broad, then it is likely that there is a wide range of ability on each of these skills. Some of this range may be accounted for by age and educational level, but a large part of the variation is more likely traceable to IQ, motivation, need to achieve, and reward history. No single one of these factors can serve as a good predictor of skill level, so none can serve as an adequate surrogate for it. With broad skills, researchers need to design their intervention studies so that they measure the levels of skills before and after the intervention so they can document the degree of change.

The measurement of skills believed to be special to media literacy would seem to be a somewhat easier challenge, because it is not likely that many people would possess these skills prior to a training-type intervention. Also, researchers are not likely to want to know the level of a participant's proficiency on these skills, so they can be treated more like competencies in the measurement. For example, we should not expect a high percentage of the population to have message production skills, such as designing a website, running an audio mixing board, writing a computer program for a video game, or editing raw video footage. Of course, within the video production industries, making fine distinctions across skill levels is essential for employers and professional organizations, but when conducting media literacy skills assessments on general populations it would be sufficient to distinguish between those with no skill and those with any skill. If skills are regarded as specialized to media literacy, then researchers need to think about what makes them so special to media literacy. They also need to think about whether each is really a skill or a competency.

In summary, scholars who present definitions of media literacy that include a skills component need to think through the three issues presented in this section. They need to provide more detail in the form of specifying domains of skills and be more clear about defining what those skills are. They also need to articulate their vision about whether they are dealing with skills or competencies and if skills, then are those skills broad or specialized to media literacy. Scholars who clearly lay out their positions on these three issues will be providing a great deal more guidance to designers of measures.

Measuring Knowledge

In this section, we argue that there are three issues that have been largely ignored or underdeveloped with regard to the knowledge component. These are the issues of specifying domains of knowledge, distinguishing between facts and beliefs, and distinguishing between info-bits and knowledge.

Domains of Knowledge

Almost every conceptualization of media literacy suggests a knowledge component. As we did with the skills component, we can analyze the knowledge component in layers of progressive depth. At the most general level, authors make a claim that knowledge is important to media literacy but they do not mention what it is people need to know in order to be media literate. The first layer displaying definitional detail is the specification of knowledge as a component of media literacy. At the next layer of detail is the specification of domains, such as knowledge about the media industries and knowledge of media effects. When we look at the suggested knowledge domains in Table 1, we can see that they all appear more as suggestions for what is relevant rather than as an organized set. If we search through these suggestions for organized sets, we can find a few. For example, Silverblatt (1995) suggested that people need knowledge in four areas in order to interpret media messages - process, context, structure, and production values. Potter (2004) argued that media literacy requires well developed knowledge structures in five areas - about the media industries, audiences, content, effects, and one's self. Notice that these areas are very broad.

As with skills, knowledge seems to present two layers of detail (components and domains), and then there is a gap. The current cutting edge with the knowledge component occurs when we move beyond the domains; that is, there are well articulated labels suggesting areas of information that media literate people need but very few detailed treatments of what information is critical to each domain. One example is provided by Silverblatt and Eliceiri (1997) in their Dictionary of Media Literacy; however, this information is not organized by domain and instead is an alphabetical list of more than 300 entries. An equally detailed list of information that is organized by domains is Potter's eighth edition of Media Literacy (2016), which presents 15 chapters, each with its own outline.

Facts vs. Beliefs

There are two types of information that media literacy studies typically measure in the knowledge component, and each of these types has different implications for measurement. One type of information has a factual basis. Factual-type information includes things like (a) FBI statistics say that 12 % of all crimes are violent, (b) the National Television Violence Study found that 61 % of all shows on American television contained at least one act of violence, and (c) it is illegal for teenagers to purchase and consume alcoholic beverages in the United States. The factual basis gives these statements a truth value, that is, it can be determined whether the claims in the statements are accurate or not.

In contrast, a second type of information arises through inferences derived from observing actions in the world and exists in a person's mind as beliefs. This belief-type information includes beliefs such as "the world is a mean and violent place" and "the high rate of violence presented in the media is harmful to society." Beliefs are typically based on social information, that is, people observe human behavior, infer patterns, and treat those patterns as indicators of social norms.

Implications for measurement. It is important to make a distinction between these two types of information because this distinction has an impact on how the two are measured. Factual information can be measured in a dichotomous manner, because either a person knows a fact or does not. Thus true-false response choices are appropriate. In contrast, beliefs typically vary in intensity as reflected by how important they are to people or the degree to which they believe that something exists. Therefore beliefs can be measured on Likert-type intensity scales.
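The sketch below illustrates this measurement distinction at the item level: factual items are scored against an answer key as correct or incorrect, while belief items are retained as Likert intensities. The item wordings and answer key are hypothetical examples, not recommended items.

```python
# Illustrative scoring of the two information types: factual items are keyed
# right/wrong, belief items are kept as Likert intensities. Item wording and
# the answer key are hypothetical examples.

fact_items = {
    "It is illegal for teenagers to purchase alcoholic beverages in the United States.": True,
}
belief_items = [
    "The world is a mean and violent place.",
]

def score_facts(responses, key=fact_items):
    """Count correct true/false answers; each fact is either known or not."""
    return sum(1 for item, answer in responses.items() if key.get(item) == answer)

def score_beliefs(responses):
    """Average Likert intensity (1 = strongly disagree ... 5 = strongly agree)."""
    ratings = [responses[item] for item in belief_items if item in responses]
    return sum(ratings) / len(ratings) if ratings else None

fact_responses = {"It is illegal for teenagers to purchase alcoholic beverages in the United States.": True}
belief_responses = {"The world is a mean and violent place.": 4}

print(score_facts(fact_responses))      # 1 correct fact
print(score_beliefs(belief_responses))  # 4.0 belief intensity
```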

Information vs. Knowledge

In everyday language we use the terms information and knowledge interchangeably. But in the scholarly realm of media literacy, we need to make a distinction. Information is piecemeal and transitory; while knowledge is structured, organized, and is of more enduring significance. Information resides in the messages, while knowledge resides in a person's mind.

It is also important to think about whether domains of knowledge are regarded as containers or as labels in a hierarchy. If the domains are containers, then all the information within each container can be regarded as equally important and existing on the same level of generality. Designers of measures for each container need simply to make a list of all relevant items and sample from among them.

If instead, the names of the knowledge domains are used more as labels in a hierarchy, then it is important to think about how the information is organized within those hierarchies. If knowledge domains are regarded as being structured hierarchically, then we have multiple levels of generality to consider. For example, let's say researchers want to measure participants' knowledge about the media industries. It is impossible, of course, to write one item that would measure the degree to which participants vary in their knowledge of the media industries, so designers must design multiple items. To illustrate, let's also say that authors' conceptual base suggests that there are three key ideas about the media industries as follows: (a) an understanding of the industries' motivation to maximize profits, (b) an understanding of how the media industries have developed over time, and (c) an understanding about how the industries attract niche audiences and condition them for repeated exposures. It is possible to write a simple true-false question to measure the first of these sub-domains (i.e., the media industries have a strong motivation to maximize their profits), although such an item could be argued to measure awareness of a motive, which is much more superficial than what the sub-domain calls for by way of "understanding" their motivations. But let's set that issue aside for now. The more serious challenge lies in designing a single item to measure the other two sub-domains. For the second sub-domain it is not possible to design a simple question that can make a meaningful assessment of the level of a participant's knowledge.

A way to meet this challenge is to think in terms of knowledge hierarchies. These are outlines that display superordinate ideas along with their subordinate components. Sometimes a definition suggests such an outline, as when Potter (2004) argued that media literacy requires well developed knowledge structures in five areas and then elaborated the key ideas in each area in a media literacy textbook, where the information is organized into chapters, each with its own outline (see Potter, 2016).

There are advantages and disadvantages of regarding knowledge domains as hierarchies. The major advantages are that hierarchies provide more structured guidance for designing a measurement scheme. Also, cognitive psychology has well documented that humans organize their learning into nested categories, which makes it easier for them to store and retrieve information as well as providing context for meaning (Fiske & Taylor, 1991; Lamberts & Goldstone, 2005). However, the disadvantage is that knowledge hierarchies are more complex and typically require more items to measure well, especially if the domains are not independent (see next section for more on this point).

Implications for measurement. If a study's conceptual foundation treats the knowledge component as an assortment of information, then the measurement decisions are relatively easy. Designers write items to measure how many bits of information participants have acquired. For example, designers could construct a factual information test of 20 true-false items and regard each of the 20 items as equally important as an indicator of participants' levels of knowledge. Because the items have a factual basis, researchers can compute a score for each respondent on the information-accuracy scale by summing all the items they answered correctly. Researchers then conclude that respondents with higher scores on the information-accuracy scale demonstrate a higher degree of media literacy compared to other respondents with lower scores. But in order for these scale scores to have meaning, researchers must assume that each item on the scale measures a bit of factual information that is equally important, that is, knowing one bit of information is no more important than knowing any other single bit of information.

In contrast, if researchers rely on a definition of media literacy that emphasizes knowledge rather than information, then measurement decisions are much more challenging because designers need to consider structure. To help them with this structure, researchers need guidance from media literacy theoreticians to provide them with outlines that show what the super-ordinate concepts are and how each super-ordinate concept is composed of particular clusters of subordinate ideas. Without such an outline, researchers face an impossible task of establishing validity. But even with an outline, researchers need to consider the issue of balance. Perhaps concept X can be measured adequately with one item but concept Y might require five items.
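A hedged sketch of what such structured scoring might look like appears below: each super-ordinate concept is scored from its own cluster of items, clusters may contain different numbers of items, and the overall score averages across concepts so that no concept dominates simply because it has more items. The outline and item counts are hypothetical.

```python
# A sketch of scoring knowledge against a hierarchical outline rather than a flat
# pool of items. The outline, item counts, and weighting are hypothetical; the point
# is that each super-ordinate concept is scored from its own cluster of items and
# that clusters may contain different numbers of items.

knowledge_outline = {
    "industry motives":   ["item_1"],                      # one item may suffice here
    "industry history":   ["item_2", "item_3", "item_4"],  # broader concept, more items
    "audience targeting": ["item_5", "item_6"],
}

def hierarchical_score(correct_items, outline=knowledge_outline):
    """Score each super-ordinate concept as the proportion of its items answered
    correctly, then average across concepts so that no single concept dominates
    simply because it has more items."""
    concept_scores = {}
    for concept, items in outline.items():
        concept_scores[concept] = sum(item in correct_items for item in items) / len(items)
    overall = sum(concept_scores.values()) / len(concept_scores)
    return concept_scores, overall

correct = {"item_1", "item_2", "item_5", "item_6"}
print(hierarchical_score(correct))
# ({'industry motives': 1.0, 'industry history': 0.33..., 'audience targeting': 1.0}, 0.77...)
```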

Macro Measurement Issues

Now that we have laid out some issues within two components of media literacy, we shift our attention to a more macro level and consider two issues that need more consideration from media literacy theoreticians. These are the concerns about the use of shortcuts and independence across measures.

Use of Shortcuts

When researchers design measures, they are typically confronted with a trade-off between efficiency and validity. The efficient choice requires less work from designers and less effort from respondents, but these efficient choices often result in measures with lower validity. In such a situation, researchers who select efficiency over validity are relying on shortcuts.

One of the most prevalent shortcuts in the social sciences is the use of self report measures. The threats to validity of using self report data have long been recognized by scholars across the social sciences, including sociology (Lauritsen, 1998), anthropology (Bernard, Killworth, Kronenfeld, & Sailer, 1984), political science (Sigelman, 1982), criminology (Hindelang, Hirschi, & Weis, 1981), and behavioral medicine (Stone & Shiffman, 2002), to name a few. For example, in psychology Nisbett and Wilson (1977) showed there was a troubling discrepancy between what people report about themselves and physiological measures. This criticism has persisted in psychology, and the use of self report measures has decreased over the years, while the articles that did use self report measures increasingly acknowledged their limitations (Haeffel & Howard, 2010).

The use of self reports in media literacy is of course warranted when measuring knowledge, attitudes, and beliefs. However, the use of self reports to measure behaviors and skills is typically a shortcut, because it sacrifices validity to achieve efficiency. As for behaviors, media literacy studies are often concerned with measuring mundane occurrences or habits, such as media exposures, use of advertised products, and engaging in mildly risky behaviors. We know from our research literature that these behaviors are guided by automatic routines that run unconsciously and leave no trace of specific details unless those details are highly vivid or out of the ordinary (Kahneman & Tversky, 1984). When we are asked about our mundane behaviors, we have no memory of details, so we rely on heuristics to estimate an answer. These answers have little relationship to actual behavior and instead tell us more about the heuristics people use than about their behaviors. Furthermore, because there is a variety of heuristics, including representativeness, availability, simulation, anchoring, and adjustment (Fiske & Taylor, 1991), the data generated by self report questions about mundane behaviors are likely to be an invalid conglomeration because different people use different heuristics to make their guesses.

Implications for measurement. We know from the literature that there is a significant problem with self reports, especially those of mundane behaviors. Ignoring this problem with a shortcut serves to introduce a significant source of error variance.

When our purpose is to measure actual behavior, we need to avoid self reports of mundane behaviors. Fortunately, in the new media environment we can get access to many counts of mundane behaviors, such as cell phone providers' counts of the number of texts sent, minutes connected to the internet, number of sites visited, number of movies streamed, etc. But if researchers cannot get access to databases where a particular mundane behavior has been recorded and if they cannot electronically record the mundane behavior of their own participants, then they need to triangulate the self reported data with data from other sources in an attempt to make some sort of a case for the validity of their self reported measures (Slater, 2004).
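As a minimal sketch of such triangulation, the example below compares participants' self-reported counts of a mundane behavior with hypothetical counts from an external record and flags large discrepancies; the data and the discrepancy threshold are illustrative assumptions rather than a validated rule.

```python
# A minimal sketch of triangulating self-reported counts of a mundane behavior
# against an external record (e.g., a provider's log of texts sent). The data and
# the discrepancy threshold are illustrative assumptions, not a validated rule.

self_reported_texts = {"p01": 50, "p02": 200, "p03": 10}
logged_texts        = {"p01": 48, "p02": 95,  "p03": 12}

def discrepancy_ratio(reported, logged):
    """Relative gap between a self report and the logged count."""
    return abs(reported - logged) / max(logged, 1)

for pid in self_reported_texts:
    gap = discrepancy_ratio(self_reported_texts[pid], logged_texts[pid])
    flag = "check self report" if gap > 0.25 else "roughly consistent"
    print(f"{pid}: self report {self_reported_texts[pid]}, log {logged_texts[pid]} -> {flag}")
```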

Independence

Perhaps the most challenging conceptual issue with the measurement of media literacy is independence. This is challenging because it is so complex and because it has been completely ignored in the large literature of media literacy definitions. The issue of independence forces us to think about whether the different elements, especially the components of skills and knowledge, in a conceptualization of media literacy are related to each other or whether each can stand alone. That is, can people really acquire much knowledge relevant to media literacy unless they have a certain level of skill? And can media literacy skills develop without exposure to certain kinds of information?

We also need to consider the issue of independence within each component. Within the skills component, the individual abilities are likely to influence each other so that people cannot be highly developed on skill Y unless they first become highly developed on skill X. For example, Potter (2004) alludes to this when defining his seven skills of media literacy as being partially dependent on one another; that is, combinations of these skills work together at times, while at other times they can be used independently. To illustrate, Potter defines the skill of evaluation as the making of a judgment by comparing an element in a media message to a person's standard. That element may be an obvious manifested one, in which case the skill of evaluation can be used independently. But often a person must analyze a media message to identify an element that is then used in an evaluation; in this case the skills of analysis and evaluation are used together, such that if a person is strong with one of these skills but weak with the other, the overall product of using the skills will be weak or faulty.

Within the knowledge component, scholars need to specify whether their domains of knowledge are regarded as being linked together or whether they are independent from one another. For example, let's say that a conceptualization lays out four domains of knowledge - media industries, content, audiences, and effects. If the four areas of knowledge are regarded as independent from one another, then limiting measures to one domain is not a problem. Each domain is believed to be composed of a stand-alone set of information that is not influenced by any of the other three, and people can be highly media literate if they have a lot of knowledge in any one area. However, let's say the conceptual foundation used in a study suggests that all four domains of knowledge are linked together into a system of knowledge such that effects cannot be understood without also understanding content patterns, which in turn cannot be understood without understanding industry motives and economics. With this non-independence conceptualization, designers of measures need to include measures across all linked domains of knowledge in order to arrive at a valid assessment of participants' knowledge structures.

Implications for measurement. This issue is largely unaddressed in the literature, leading researchers to assume that each component and each domain is independent and can therefore be measured in isolation. If researchers assume that skills can be treated in an independent manner, then the measurement task becomes considerably simpler. Researchers need only to develop a performance task for each skill, then observe how well participants perform on each. Later in the analysis, they can test to see which skills are related. However, such post hoc tests are focused more on seeing how levels of skills are related, which is not the same as believing a priori that certain skills are antecedents of, or otherwise intertwined with, other skills.


When designers of measures of skills reject the assumption that skills are independent, they create a much more challenging measurement task because they must articulate the pattern of inter-dependency, and this raises a series of questions. Are there threshold levels? That is, until a person reaches a certain level of expertise on a skill, it cannot be observed. Which skills are antecedents for other skills? That is, skill #2 cannot be performed well until a person performs skill #1. Are skills substitutable? That is, in order for people to use skill #3 well, they need to have a high level of either skill #1 or skill #2, but not necessarily both.
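The sketch below illustrates one way such an a priori pattern of inter-dependency could be made explicit before measurement, encoding hypothetical examples of the three questions raised above (thresholds, antecedents, and substitutability) and checking a participant's skill profile against them.

```python
# A sketch of how an a priori pattern of inter-dependency among skills might be
# made explicit before measurement. The skills, thresholds, and relations below
# are hypothetical examples of the three questions raised above.

skill_levels = {"analysis": 3, "evaluation": 1, "synthesis": 4}  # observed performance levels

THRESHOLD = {"evaluation": 2}            # evaluation is not observable below level 2
ANTECEDENT = {"evaluation": "analysis"}  # evaluation presupposes some analysis
SUBSTITUTABLE = {"synthesis": ("analysis", "evaluation")}  # either one supports synthesis

def check_dependencies(levels):
    notes = []
    for skill, minimum in THRESHOLD.items():
        if levels.get(skill, 0) < minimum:
            notes.append(f"{skill}: below observable threshold ({minimum})")
    for skill, prerequisite in ANTECEDENT.items():
        if levels.get(prerequisite, 0) == 0 and levels.get(skill, 0) > 0:
            notes.append(f"{skill}: scored without its antecedent {prerequisite}")
    for skill, (alt1, alt2) in SUBSTITUTABLE.items():
        if max(levels.get(alt1, 0), levels.get(alt2, 0)) == 0 and levels.get(skill, 0) > 0:
            notes.append(f"{skill}: scored without either {alt1} or {alt2}")
    return notes or ["profile consistent with the assumed dependency structure"]

print(check_dependencies(skill_levels))
# ['evaluation: below observable threshold (2)']
```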

5. Conclusion

The analysis of existing conceptualizations reported in this article revealed considerable gaps in the media literacy literature. While there are many definitions of media literacy, the existing definitions typically cluster around highlighting several components, especially skills and knowledge but also behaviors and affects. To a lesser extent there is a clustering around certain domains of skills and particular domains of knowledge. But at this point the conceptualizations stop providing detail, and this inadequate degree of specificity in the explication of media literacy requires researchers to fill in conceptual gaps in order to design their measures. The gaps have resulted in the design of a great many measures of questionable validity, which sets up a vicious cycle. Researchers who want to design a test of media literacy go to the literature for guidance; however, that literature shows them an overwhelming choice of definitions, with no single definition being regarded as the most useful one. Even more problematic is that none of the many definitions provides enough detail to guide researchers very far through the process of designing measures of media literacy. Until more fully explicated definitions of media literacy are offered to scholars, researchers will be left with little guidance, which will result in the continuation of inadequate conceptual foundations for their empirical studies and therefore a fuzzy and incomplete foundation to use as a standard for judging the validity of their measures.

References

Aufderheide, 1993 - Aufderheide, P. (Ed.) (1993). Media literacy: A report of the national leadership conference on media literacy. Aspen, CO: Aspen Institute.

Aufderheide, 1997 - Aufderheide, P. (1997). Media literacy: From a report of the National Leadership Conference on Media Literacy. In R. Kubey (Ed.). Media literacy in the information age: Current perspectives, Information and behavior (Vol. 6) (pp. 79-86). New Brunswick, NJ: Transaction Publishers.

Babbie, 1992 - Babbie, E. (1992). The practice of social research. Belmont, CA: Wadsworth.

Bausell, 1986 - Bausell, R. B. (1986). A practical guide to conducting empirical research. New York: Harper & Row.

Bernard et al., 1984 - Bernard, H. R., Killworth, P., Kronenfeld, D., & Sailer, L. (1984). The problem of informant accuracy: The validity of retrospective data. Annual Review of Anthropology, 13, 495-517.

Brinberg & McGrath, 1985 - Brinberg, D., & McGrath, J. E. (1985). Validity in the research process. Newbury Park, CA: Sage.

Buckingham, 2003 - Buckingham, D. (2003). Media education: Literacy, learning and contemporary culture. Cambridge, UK: Polity Press.

Chaffee, 1991 - Chaffee, S. H. (1991). Explication. Newbury Park, CA: Sage.

Fiske & Taylor, 1991 - Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.

Haeffel & Howard, 2010 - Haeffel, G. J., & Howard, G. S. (2010). Self-report: Psychology's four-letter word. The American Journal of Psychology, 123(2), 181-188.

Hindelang et al., 1981 - Hindelang, M. J., Hirschi, T., & Weis, J. G. (1981). Measuring delinquency. Thousand Oaks, CA: Sage.

Hobbs, 1998 - Hobbs, R. (1998). The seven great debates in the media literacy movement. Journal of Communication, 48(1), 16-32.

Kahneman & Tversky, 1984 - Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341-350.

Lamberts & Goldstone, 2005 - Lamberts, K., & Goldstone, R. L. (Eds.) (2005). The handbook of cognition. Thousand Oaks, CA: Sage.

Lauritsen, 1998 - Lauritsen, J. L. (1998). The age-crime debate: Assessing the limits of longitudinal self-report data. Social Forces, 77(1), 127-154.

Nisbett & Wilson, 1977 - Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.

Nunnally, 1967 - Nunnally, J. C. (1967). Psychometric theory. New York: McGraw-Hill Book Company.

Potter, 2004 - Potter, W. J. (2004). Theory of media literacy: A cognitive approach. Thousand Oaks, CA: Sage.

Potter, 2009 - Potter, W. J. (2009). Media literacy. In W. Eadie (Ed.), 21st century communication: A reference handbook (pp. 558-567). Thousand Oaks, CA: Sage.

Potter, 2010 - Potter, W. J. (2010). The state of media literacy. Journal of Broadcasting & Electronic Media, 54, 675-696.

Potter, 2016 - Potter, W. J. (2016). Media literacy (8th ed.). Thousand Oaks, CA: Sage.

Potter & Thai, 2016 - Potter, W. J., & Thai, C. L. (2016). Examining validity in media literacy intervention studies. Paper presented at the International Communication Association Annual Conference. Fukuoka, Japan.

Rust & Golombok, 1989 - Rust, J., & Golombok, S. (1989). Modern psychometrics (2nd ed.). London: Routledge.

Sigelman, 1982 - Sigelman, L. (1982). The nonvoting voter in voting research. American Journal of Political Science, 26(1), 47-56.

Silverblatt, 1995 - Silverblatt, A. (1995). Media literacy: Keys to interpreting media messages. Westport, CT: Praeger.

Silverblatt & Eliceiri, 1997 - Silverblatt, A., & Eliceiri, E. M. E. (1997). Dictionary of media literacy. Westport, CT: Greenwood Press.

Silverblatt et al., 2015 - Silverblatt, A., Ferry, J., & Finan, B. (2015). Approaches to media literacy: A handbook (2nd ed.). New York, NY: Routledge.

Slater, 2004 - Slater, M. D. (2004). Operationalizing and analyzing exposure: The foundation of media effects research. Journalism & Mass Communication Quarterly, 81, 168-183.

Stone & Shiffman, 2002 - Stone, A. A., & Shiffman, S. (2002). Capturing momentary, self-report data: A proposal for reporting guidelines. Annals of Behavioral Medicine, 24(3), 236-243. DOI: 10.1207/S15324796ABM2403_09

Tyner, 2009 - Tyner, K. (2009). Media literacy: New agendas in communication. New York, NY: Routledge.

Vogt, 2005 - Vogt, W. P. (2005). Dictionary of statistics & methodology (3rd ed.). Thousand Oaks, CA: Sage.

Wiley, 1991 - Wiley, D. E. (1991). Test validity and invalidity reconsidered. In R. E. Snow & D. E. Wiley (Eds.), Improving inquiry in social science: A volume in honor of Lee J. Cronbach (pp. 75-107). Hillsdale, NJ: Erlbaum.

Williams, 1986 - Williams, F. (1986). Reasoning with statistics: How to read quantitative research (3rd ed.). New York: Holt, Rinehart and Winston.

Annexes

Table 1. Definitions for Media Literacy: Components and Domains

Skills Focused Components

Generic Skills

Skill building (Alliance for a Media Literate America)

"Skills necessary for competent participation in communication across various types of electronic audio and visual media" (Speech Communication Association, 1996, Standard 23)

Accessing Skills

Ability to access media messages (Moody, 1996)

Ability to access meaning from media messages (Adams & Hamm, 2001; Anderson, 1981; Media Awareness Network; Sholle & Denski, 1995; Silverblatt & Eliceiri, 1997; The National Telemedia Council)

Ability to recognize the questions posed by language, regardless of the medium that transmits that language (Pattison, 1982)

Interpretation Skills

Ability to make one's own interpretations from media messages (Anderson, 1981; Adams & Hamm, 2001; Rafferty, 1999; Silverblatt & Eliceiri, 1997; The National Telemedia Council)

Ability to use aesthetic building blocks to create and shape cognitive and affective mental maps (Zettl, 1998)

Ability to analyze media messages (Anderson, 1981; Adams & Hamm, 2001; Brown, 1998)

* Particularly ideological analysis, autobiographical analysis, nonverbal communication analysis, mythic analysis, and analysis of production techniques (Silverblatt, Ferry, & Finan, 1999)

* Ability to critically assess media messages in order to understand their impact on us, our communities, our society and our planet. It is also a movement to raise awareness of media and their influence. (Northwest Media Literacy Project)

* Critical viewing skills (Children Now)

* Critical thinking about media messages (Adams & Hamm, 2001; Citizens for Media Literacy; Media Awareness Network; Rafferty, 1999)

* Critical inquiry (Alliance for a Media Literate America)

* "a critical-thinking skill that enables audiences to decipher the information that they receive through the channels of mass communications and empowers them to develop independent judgments about media content" (Silverblatt & Eliceiri, 1997: 48)

Message Production Skills

Ability to communicate effectively by writing well (Brown, 1998)

Ability to produce media messages (Adams & Hamm, 2001; Aufderheide, 1993; Hobbs, 2001)

Ability to create counter-representations of media messages (Moody, 1996; Sholle & Denski, 1995; The National Telemedia Council)

Knowledge Components

Knowledge of Media Industry

"knowledge about how the mass media function in society... Ideally, this knowledge should encompass all aspects of the workings of the media: their economic foundations, organizational structures, psychological effects, social consequences, and, above all, their 'language,' that is the representational conventions and rhetorical strategies of ads, TV programs, movies, and other forms of mass media content" (p. 70); "an understanding of the representational conventions through which the users of media create and share meanings," especially visual representations (Messaris, 1998: 70)

Understanding the process, context, structure, and production values of the media (Silverblatt, 1995)

Knowledge of Media Content

Understanding of media content (understanding of the conduits that hold and send messages), of media grammar (understanding of the language or aesthetics of each medium), and of the medium (understanding of the type of setting or environment) (Meyrowitz, 1998)

Knowledge of Media Effects

"Understand the effects of the various types of electronic audio and visual media, including television, radio, the telephone, the Internet, computers, electronic conferencing, and film, on media consumers." (Speech Communication Association, 1996, Standard 22)

Understanding of how the media distort aspects of reality as they manufacture their messages and how symbol systems mediate our knowledge of the world (Masterman, 1985)

Learning about "text processing within the broad and complex context of a social, cultural, educational, and commercial textual ecosphere" (Mackey, 2002: 8)

Understanding how media messages shape people's construction of knowledge of the world and the various social, economic, and political positions they occupy within it (Alvermann, Moon, & Hagood, 1999: 1-2)

Knowledge about One's Self

Understanding of one's place in the world (Blanchard & Christ, 1993; McLaren, Hammer, Sholle, & Reilly, 1995; Sholle & Denski, 1994)

Behavioral Components

Generic

"a political, social and cultural practice" (Sholle & Denski, 1995: 17)

Empowerment

Becoming empowered citizens and consumers (Blanchard & Christ, 1993; McLaren, Hammer, Sholle, & Reilly, 1995; Sholle & Denski, 1994)

Moving people from dependency to self-direction by being more reflective (Grow, 1990)

Rather than allow the media to promote unchallenged the quick fix of violent solutions, conflict resolution skills involving patience and negotiation should be taught (American Psychiatric Association)

"Policing one's own viewing behaviour - if not by reducing the amount of television they watch, then at least by watching it in ways which are assumed to minimize its influence" (Buckingham, 1993: 21)

Becoming sophisticated citizens rather than sophisticated consumers (Lewis & Jhally, 1998)

"Empowering and liberating people to prepare them for democratic citizenship and political awareness" (Masterman, 2001: 15, writing about the Council of Europe Resolution on Education in Media and New Technologies, which was adopted by European Ministers of Education)

Activism

Becoming stimulated by social issues that are influenced by the media; these issues are things like violence, materialism, nutrition, body image, distortion in news reporting, and stereotyping by race, class, gender, and sexual orientation (Anderson, 1983)

Becoming "active, free participants in the process rather than static, passive, and subservient to the images and values communicated in a one-way flow from media sources" (Brown, 1998: 47)

Challenging abusive stereotypes and other biased images commonly found in the media (Media Watch)

Social Networking

Creating communities of people who interact in complex social and cultural contexts and use this awareness to decide what textual positions to accept (Buckingham, 1998)

"primarily something people do; it is an activity, located in the space between thought and text. Literacy does not just reside in people's heads as a set of skills to be learned, and it does not just reside on paper, captured as texts to be analysed. Like all human activity, literacy is essentially social, and it is located in the interaction between people" (Barton & Hamilton, 1998: 3; cited in Mackey, 2002: 5-6)

Developing a "critical spirit" while encouraging "collaboration with professional people and agencies in both fields" (Commission 1, 1992: 222-223)

Affective Components

Pay more attention to one's own affective investment as one consumes the media (Sholle & Denski, 1995)

Ability to appreciate media messages (Adams & Hamm, 2001), especially respected works of literature (Brown, 1998)

Table 2. The Seven Skills of Media Literacy

1. Analysis - breaking down a message into meaningful elements

2. Evaluation - judging the value of an element; the judgment is made by comparing a message element to some standard

3. Grouping - determining which elements are alike in some way; determining how a group of elements are different from other groups of elements

4. Induction - inferring a pattern across a small set of elements, then generalizing the pattern to all elements in the set

5. Deduction - using general principles to explain particulars

6. Synthesis - assembling elements into a new structure

7. Abstracting - creating a brief, clear, and accurate description capturing the essence of a message in a smaller number of words than the message itself

Figure 1. Pyramid Structure

A general definition at the top branches into components, each component branches into domains, and a gap separates these conceptual levels from the measures at the base.
