
UDC 81-13

DOI: 10.17223/24109266/4/4

SENSORY INTERACTION DURING PERCEPTION OF VERBAL SIGNS (EXPERIMENTAL STUDY)

E.D. Nekrasova

National Research Tomsk State University (Tomsk, Russia).

E-mail: nekrasovaed@gmail.com

Abstract. This paper deals with the problem of multimodal perception of verbal signs. Human perception is multimodal, and the study of multimodal texts and of the multimodal perception of words is a step towards the study of multimodal perception in general. The author presents a series of experiments aimed at identifying the influence of background audio information on the perception of visual information. The article also touches upon the question of a leading modality in the multimodal perception of verbal stimuli.
Keywords: multimodal perception; multimodal text; multimodality.

Introduction

Human perception is multimodal. Many researchers have pointed to the multimodal nature of human perception (L.M. Vekker, B.G. Ananyev, S.V. Kravkov, Ye.Yu. Artemyeva, etc.). V.A. Labunskaya, O.V. Abdullina, N.A. Bayeva, M.V. Anisimova, and others have discussed the problems of studying multimodal structures, while A. Kibrik, G. Kreidlin, and others have written about the role of various components in perception.

Images of reality objects are believed to be formed through a complex fusion of multimodal information coming through multiple sensory channels. Such complex images may exist in the following forms:

- separate verbal units implicitly combining information of different perceptual natures (e.g. dry type [type], dry [touch]);

- texts of heterogeneous semiotic nature, so-called multimodal texts, involving different perceptual channels for their representation (e.g. the cinetext).

The problem of multimodality became topical in linguistics only by the end of the 20th century, when the transition to the anthropocentric paradigm took place and interest in the cognitive component of language arose. This transition caused a conceptual turn in text linguistics concerning its research object (E.E. Anisimova, G. Babenko, N.S. Valgina, V.B. Kashkin, etc.). Since then the text has been studied not only as an integral segment of a verbal code, but as a unity with non-verbal means of communication.

Researchers have focused their attention on texts of visual-verbal nature (verbal-iconic texts), the study of which has been pursued in the European tradition for quite a long period (N.W. Levie, J.R. Levin, R. Lentz, J. Reinwein, etc.). However, research on multimodal texts that involve different perceptual channels (auditory and visual) for their representation has been scarce. To name a few, one can mention the study of the relationship between the video and audio tracks of a movie (Hayes, Birnbaum, 1980), a comparison of the amount of movie-text information acquired through the verbal, visual and audio "tracks" (Kibrik, 2010), and the study of the cognitive peculiarities of the perception of clips (Sharifulin, 2013).

Other researchers have also studied the coexistence of signs of visual and audial nature (L.S. Bolshakova, T.M. Rogozhnikov, E.G. Nikitin, T.A. Vinnikov and others).

Experimental study of regularities of intermodal interaction is an important step in the research of texts of multimodal nature.

1. Research Design

1.1. The problem

We conducted a series of experiments aimed at studying the competition between the audial and visual channels when homogeneous verbal information is presented under conditions of modality conflict.

We pursued the following research objectives:

- to study the characteristics of the bimodal perception of verbal stimuli in conflicting modalities;

- to identify the impact of conflicting "background" information on the perception of information in the field of voluntary attention;

- to identify a leading modality in bimodal (audial-visual) perception.

1.1.1. The hypothesis

Based on the stated objectives, we assumed that when voluntary attention is focused on verbal information in one modality (visual), information acquired automatically via the second modality (audial) will affect the perception of information in the original modality, as well as the solution of tasks related to that modality.

1.2. Experiment one

The main experiment presupposed the parallel bimodal presentation of words containing identical or conflicting information in the animacy category, and consisted of a preparatory (pretest) stage and a primary stage.

1.2.1. Pretest

In the preparatory phase we selected 120 nouns of different frequency of use, controlled against the "New Frequency Dictionary" (O.N. Lyashevskaya, S.A. Sharov) created on the basis of the Russian National Corpus. The ipm index (number of usages per million) was recorded for each word.

During the pretest we asked respondents (N = 63, students aged 18 to 23) to rank all the selected stimuli by the animacy category, using a seven-point Likert scale to measure subjective animacy in the minds of native speakers (1 - very inanimate, 2 - inanimate, 3 - rather inanimate, 4 - equally inanimate and animate, 5 - rather animate, 6 - animate, 7 - very animate).
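The selection step of the pretest can be sketched as follows; the words and ratings below are illustrative placeholders, not the study's actual stimuli or data:

```python
from statistics import mean

# Hypothetical Likert ratings (1-7 animacy scale) collected per word.
ratings = {
    "stone":  [1, 1, 2, 1, 1],
    "table":  [2, 1, 2, 2, 1],
    "doctor": [7, 6, 7, 7, 6],
    "cat":    [7, 7, 7, 6, 7],
}

def rank_by_animacy(ratings):
    """Return words sorted by mean subjective animacy (ascending)."""
    return sorted(ratings, key=lambda w: mean(ratings[w]))

ranked = rank_by_animacy(ratings)
least, most = ranked[0], ranked[-1]  # candidates for the two stimulus groups
```

The words at the two extremes of this ranking would then feed the "most animate" and "least animate" stimulus groups described below.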

1.2.2. Stimuli

The most and least animate words were used to form 80 stimuli (40 pairs) for the main experiment. The two-tailed significance (t-test for independent samples) of the animacy factor between the groups of maximum- and minimum-animacy words was p < 0.0001.

We also eliminated the influence of frequency on the occurrence of functional asymmetry of modalities: the two-tailed significance (t-test for independent samples) between the groups of stimuli in each modality (audial and visual) amounted to p = 0.755.
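A group comparison of this kind can be sketched with Welch's t statistic (one common form of the independent-samples t-test; the exact variant used in the study is not specified, and the data below are invented for illustration). The p-value would then be read from a t distribution with Welch-Satterthwaite degrees of freedom:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Illustrative mean animacy ratings for the maximum- and minimum-animacy groups.
hi_group = [6.8, 6.5, 7.0, 6.9, 6.6]
lo_group = [1.2, 1.4, 1.1, 1.3, 1.6]
t = welch_t(hi_group, lo_group)  # large |t| -> groups clearly separated
```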

The stimuli were then paired (one for the audio and one for the visual modality), taking both factors into account. Pairing was based on the animacy category (O - animate, N - inanimate): a total of four cases, two matches (O-O, N-N) and two mismatches (O-N, N-O).

The types of stimuli in experiment one

Type of pair   Visual modality (screen)   Audial modality (headphones)
match          animate                    animate
mismatch       animate                    inanimate
mismatch       inanimate                  animate
match          inanimate                  inanimate

We also controlled the length (number of syllables) in each pair of words: the number of syllables of the audial stimulus coincided with the number of syllables of the visual stimulus.
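The pairing constraints above (match/mismatch by animacy, equal syllable counts) can be sketched as follows; the words, tags and syllable counts are illustrative, not the study's stimulus list:

```python
from itertools import product

# Hypothetical stimuli: (word, animacy tag, syllable count),
# with O = animate and N = inanimate as in the study's notation.
visual = [("doctor", "O", 2), ("table", "N", 2)]
audial = [("sister", "O", 2), ("window", "N", 2)]

def make_pairs(visual, audial):
    """Pair each visual stimulus with each audial stimulus of the same
    syllable count, labelling the pair match/mismatch by animacy."""
    pairs = []
    for (vw, v_anim, v_syl), (aw, a_anim, a_syl) in product(visual, audial):
        if v_syl != a_syl:
            continue  # keep syllable counts matched within a pair
        kind = "match" if v_anim == a_anim else "mismatch"
        pairs.append((vw, aw, kind))
    return pairs

pairs = make_pairs(visual, audial)  # four cases: O-O, O-N, N-O, N-N
```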

Thus, the design of the experiment was 2 × 2, with the ratio of modalities and the animacy category as independent variables. Reaction time (RT) and the accuracy of the categorization task (ACC) served as dependent variables.

1.2.3. Sample

The sample consisted of students from different faculties who had not participated in the pretest, aged 19 to 23, with normal or corrected-to-normal vision and normal hearing (N = 26, of which 10 were men).

1.2.4. Procedure

The experiment consisted of the parallel bimodal presentation of words containing identical (match) or conflicting (mismatch) information with regard to the animacy category (one of the most powerful semantic categories for lexical decision).

After a fixation cross (500 ms), pairs of stimuli were presented in random order (responses were counterbalanced). One stimulus of each pair appeared on the screen for a maximum of 3,000 ms, until one of the following keys was pressed: 1 (animate) or 2 (inanimate). The other verbal stimulus was played in over-ear headphones simultaneously with the appearance of the word on the screen. Each new trial was preceded by a blank screen (500 ms). The experiment included a training session (10 pairs of stimuli), the data from which were excluded from the analysis.

During the experiment participants had to determine whether the word appearing on the screen was animate or not. In this way, voluntary attention was focused on the visual modality.

1.2.5. Analysis of results

All data were analyzed, except for technical errors of respondents (pressing "space" instead of the correct keys, etc.). Prior to the analysis, response times lying more than ±2 standard deviations from the mean per condition (4.7%) were excluded.
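The ±2 SD outlier exclusion described above can be sketched as a per-condition trimming step (the RT values below are invented for illustration):

```python
from statistics import mean, stdev

def trim_rts(rts, k=2.0):
    """Drop reaction times lying more than k standard deviations
    from the condition mean (k = 2, as in the study)."""
    m, s = mean(rts), stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= k * s]

# Illustrative RTs (ms) for one condition, with one obvious outlier.
rts = [850, 910, 880, 905, 2900, 870, 895]
clean = trim_rts(rts)  # the 2900 ms trial is excluded
```

In the study this trimming would be applied separately within each experimental condition, since the exclusion criterion is defined relative to the per-condition mean.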

In the first experiment we tested the hypothesis about the ratio of modalities in match and mismatch pairs of stimuli, with voluntary attention focused on the visual modality.

A repeated-measures ANOVA of the RT showed a significant effect of the modality-ratio factor (F(1, 24) = 7.5, p = 0.01): RT for mismatch pairs increased significantly (Fig. 1a).

A factorial analysis (ANOVA) of the ACC showed a significant effect of the modality-ratio factor (F(1, 36) = 7, p = 0.01): the probability of choosing the wrong option for mismatch pairs increased significantly in comparison with match pairs (Fig. 1b).
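The study's RT analysis used a repeated-measures ANOVA, which accounts for within-subject correlation. As a simplified illustration of how an ANOVA F statistic decomposes variance, a plain between-groups one-way F (not the repeated-measures model used here) can be sketched as:

```python
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    all_x = [x for g in groups for x in g]
    grand = mean(all_x)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative mean RTs (ms) per participant in two conditions.
f_stat = one_way_f([[880, 905, 870], [940, 960, 935]])
```

A large F indicates that the variance between conditions is large relative to the variance within them; the p-value is then read from an F distribution with the corresponding degrees of freedom.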

Fig. 1. The effect of the modality ratio of stimulus pairs on RT (a) and on ACC (b: match ≈ 0.97, mismatch ≈ 0.90)

1.3. Experiment two

We also conducted an additional experiment aimed at identifying a dominant modality in multimodal perception of verbal stimuli.


1.3.1. Stimuli

The stimuli were the original stimuli from the first experiment, reclassified in accordance with the new task.

1.3.2. Procedure

The respondents were required to determine in which of the modalities an animate stimulus was presented, pressing 1 for the visual and 2 for the audial modality. Thus, voluntary attention was not set on the visual modality but distributed across both sensory channels, visual and audial. The design was a one-way ANOVA with the factor of modality; reaction time (RT) and the accuracy of the categorization task (ACC) served as dependent variables.

1.3.3. Analysis of results

To analyze the responses, all data were used except for technical errors of respondents (pressing "space" instead of the correct keys, etc.). Prior to the analysis, response times lying more than ±2 standard deviations from the mean per condition (4%) were excluded.


Fig. 2. The RT of perception of visual and auditory information

The data analysis of this experiment (one-way ANOVA) showed that the visual modality is the leading modality for perception (F(1, 38) = 8.12, p = 0.007): respondents were more sensitive to stimuli in the visual modality (Fig. 2).


2. Conclusions

We investigated multimodal perception of verbal stimuli in conflict modalities and came to the following conclusions.

The visual modality proves to be the leading modality for the perception of verbal information. The results obtained confirm our previous psycholinguistic experiments [1-5]. In other words, in the opposition between modality and type of information, modality plays the leading role in multimodal perception.

The experimental data show that the existing functional asymmetry of intermodal verbal perception (with the visual modality playing the leading role) does not depend on the type of category (there was no interaction between the factors). Therefore, this result should be replicable with other semantic factors.

However, the difference in speed during the perception of audio-visual pairs with conflicting and identical information (mismatch pairs are perceived significantly more slowly) shows a significant effect of the type of information on perception: the background information (sounding outside the voluntary attention of respondents) accelerates or slows down the perception of information in the field of voluntary attention.

Thus, it is possible to speak of certain cognitive processes accompanying the perception of information that are not determined by a leading modality.

The question of multimodal perception and of the perception of multimodal texts is rather broad, and our study is only one step towards its further investigation.

References

1. LYASHEVSKAYA, O.N. and SHAROV, S.A., 2009. A Frequency Dictionary of the Modern Russian Language (on the material of the Russian National Corpus). Moscow: Azbukovnik.

2. NEKRASOVA, E.D., 2014. On multimodal perception of the text. Tomsk State University Journal, 378, pp. 45-48.

3. DANZ, A.D., 2011. Visual and audial impact on the determination of changes in temporal frequency. Experimental Psychology, 4 (2), pp. 48-61.

4. SONIN, A.G., 2006. Modelling the Mechanisms of Understanding Polycode Texts: Dr. of Science dissertation. Moscow. 310 p.

5. LAKOFF, G. and JOHNSON, M., 1987. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.
