
DEVELOPMENT OF ARTIFICIAL INTELLIGENCE: CHARACTERISTICS OF RESEARCH DIRECTIONS

Scientific article in the field of Computer and Information Sciences (CC BY)





Kobilov A. U.

Tashkent State University of Economics

DEVELOPMENT OF ARTIFICIAL INTELLIGENCE: CHARACTERISTICS OF RESEARCH DIRECTIONS

Abstract. The article presents a periodization of the basic areas of research in the field of artificial intelligence. The first examples of systems created to perform intellectual tasks are considered in detail. The significance of this experimental work for the subsequent creation of intelligent machines is described.

Key words: intelligent machine, artificial intelligence, labyrinth search, machine translation, visual pattern recognition.

Research in the field of artificial intelligence (AI) began to develop actively with the advent of the first generation of computers, whose hardware was built on vacuum tubes. Initially, the creators of the first computers did not think about implementing intellectual functions: the earliest machines were intended for complex computational tasks. Later, however, as performance grew, it became clear that computers hide enormous potential: they could not only reduce labor costs and increase the efficiency of human activity, but also serve to build a system similar to the human mind and, consequently, to help understand how mental activity is carried out.

The theory that the capabilities of electronic computers would at some point equal those of the human brain was first proposed by the English mathematician, logician, and cryptographer Alan Turing. In his 1948 report "Intelligent Machinery," Turing considered whether a machine could exhibit intelligent behavior. In 1950 he published "Computing Machinery and Intelligence," in which he presented a method for detecting "intelligent behavior" in a machine, later called the "imitation game" or the "Turing test." In its standard interpretation, a person interacts with one computer and one human and, based on their answers to questions within five minutes, must determine which interlocutor is the human and which is the computer program; the program's task is to mislead the interrogator into making the wrong choice. Turing predicted that by the year 2000 computer systems would be able to pass his test, but this did not happen [6].

The American scientist Marvin Lee Minsky is, along with Alan Turing, considered one of the founders of AI. At the heart of his theory is the idea that "the brain is nothing but a complex machine whose properties can be copied by computers." In 1951, M. Minsky and D. Edmonds built the first computing device based on a neural network, modeled on the organization and functioning of the nerve cells of a living organism. The machine was named SNARC (Stochastic Neural Analog Reinforcement Calculator) and was the first self-learning computer system, simulating a network of 40 neurons [7].

Beginning in 1952, Arthur Samuel, a pioneer of computer games and machine learning, created a series of checkers-playing programs. The most important result of his work was a checkers-playing program that was among the first to implement self-learning and clearly demonstrated basic principles of AI. In the course of his research, Samuel refuted the claim that computers can do only what they are taught: one of his programs "learned" to play checkers better than its creator. Samuel's developments are considered fundamental in this direction [5].
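Samuel's program improved its play by adjusting a numerical evaluation of board positions in the light of experience. The sketch below is only an illustration of that general idea, not Samuel's actual algorithm: the feature names, learning rule, and constants are assumptions chosen for demonstration. It nudges the weights of a linear evaluation function so that a position's score moves toward the outcome eventually observed in self-play.

```python
# Illustrative sketch of self-learning an evaluation function
# (not Samuel's actual method; features and learning rule are assumed).

def evaluate(weights, features):
    """Linear evaluation: weighted sum of hand-crafted board features."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, target, lr=0.01):
    """Nudge weights so this position's score moves toward the target."""
    error = target - evaluate(weights, features)
    return [w + lr * error * f for w, f in zip(weights, features)]

# Hypothetical features: (piece advantage, king advantage, mobility).
weights = [0.0, 0.0, 0.0]
position = [2.0, 1.0, 3.0]
# Suppose games through this position were eventually won (+1):
# repeated updates pull its evaluation toward 1.
for _ in range(100):
    weights = update(weights, position, target=1.0)
print(round(evaluate(weights, position), 3))  # → 1.0
```

The key point, as in Samuel's work, is that the program's judgment of a position is not fixed by its author but shifts with the outcomes of the games it plays.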

At about the same time, the American researcher Allen Newell began to develop a chess-playing program. The team included analysts from the RAND Corporation (a company developing new methods for solving strategic problems) as well as a group of Dutch psychologists led by De Groot, who studied the playing styles of outstanding chess players. The result of two years of work was IPL, the first symbolic list-processing programming language. Soon afterwards, the intelligent program "Logic Theorist," designed to prove theorems of the propositional calculus automatically, was written in this language. With its help, 38 of the 52 theorems of the propositional calculus, a branch of mathematical logic, were re-proved; subsequently, all 52 theorems were derived on a high-speed computer [2].
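Logic Theorist searched for proofs heuristically, working from the axioms and inference rules of Principia Mathematica. A far cruder way to verify a propositional theorem, shown here purely for illustration, is exhaustive truth-table evaluation over all assignments to the variables:

```python
# Truth-table check of propositional theorems (an illustration only;
# Logic Theorist used heuristic proof search, not brute-force tables).
from itertools import product

def is_tautology(formula, variables):
    """Return True if `formula` (a function from a truth assignment
    dict to bool) holds under every assignment to `variables`."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

implies = lambda a, b: (not a) or b

# Principia Mathematica *2.01: (p -> ~p) -> ~p
theorem = lambda v: implies(implies(v['p'], not v['p']), not v['p'])
print(is_tautology(theorem, ['p']))  # → True
```

Truth tables grow exponentially in the number of variables, which is precisely why Newell and his colleagues were interested in heuristics that mimic how a human logician narrows the search.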

The year 1952 was also marked by a development of the American mathematician, cyberneticist, and cryptologist Claude Shannon. He created an "electronic mouse," a learning machine controlled by a complex relay circuit that independently explored a labyrinth and found its way out. In essence, it was an implementation of the labyrinth-search model [4].
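Shannon's mouse stored its discoveries in relay memory as it explored by trial and error. In modern software terms, the labyrinth-search model it embodied is commonly rendered as a graph search; the following sketch uses breadth-first search, which finds a shortest route through a grid maze:

```python
# Breadth-first labyrinth search (a modern illustration of the model,
# not a reconstruction of Shannon's relay machine).
from collections import deque

def solve_maze(grid, start, goal):
    """Return the shortest path of (row, col) cells from start to goal,
    or None if the goal is unreachable. '#' marks walls."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

maze = ["S.#",
        ".##",
        "..G"]
path = solve_maze(maze, (0, 0), (2, 2))
print(len(path))  # cells on the shortest route → 5
```

Like the mouse, the search never revisits a cell it has already seen, which is what guarantees it terminates in any finite labyrinth.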

In 1954, a device was demonstrated representing a separate area of AI research: machine translation from one natural language to another, with preservation of semantic relations, by means of a special computer program. The idea had been proposed back in 1947 by the American mathematician Warren Weaver, and on January 7, 1954, IBM, together with Georgetown University, demonstrated a system on the IBM 701 computer that performed fully automatic translation of more than 60 sentences from Russian into English. The event came to be known as the "Georgetown experiment." It gave a powerful start to this area, causing a wide resonance in the scientific community and positively influencing the development of such systems in the future [1].
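The Georgetown system worked from a vocabulary of about 250 words and six grammar rules. The toy sketch below conveys only the dictionary-lookup part of such an approach; the glossary entries are hypothetical romanized examples, and real systems of the era also applied reordering and word-choice rules:

```python
# A toy word-for-word glossary translator, far simpler than the 1954
# system; the mini-glossary below is a hypothetical illustration.
glossary = {
    "mi": "we",
    "peredayom": "transmit",
    "misli": "thoughts",
    "posredstvom": "by means of",
    "rechi": "speech",
}

def translate(sentence):
    """Replace each word by its glossary entry; unknown words pass
    through unchanged (early systems flagged them instead)."""
    return " ".join(glossary.get(word, word) for word in sentence.split())

print(translate("mi peredayom misli posredstvom rechi"))
# → we transmit thoughts by means of speech
```

The gap between this word-for-word substitution and faithful translation is exactly where the semantic difficulties that dominated later machine-translation research arise.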

In 1956, research in the field of AI took shape as an independent scientific discipline. In the American town of Hanover, a conference on AI was held at Dartmouth College, attended by all the prominent American researchers working in this area. It was at this conference that the computer scientist John McCarthy coined the term "artificial intelligence." According to McCarthy, intelligence is "the computational part of the ability to achieve goals," and researchers are free to use methods not observed in humans, where necessary to solve specific problems, both in the design of a machine and in the operation of its algorithms [8].

In 1958, John McCarthy also made a significant contribution by developing a new high-level programming language, Lisp. This language remains one of the main tools for writing the software of intelligent systems. McCarthy's next innovation, prompted by the lack of funds for more powerful computing resources, was time-sharing: simultaneous access of several users to one computer. This mode made it possible to unlock the potential of computers and significantly increase productivity.

In the same year, McCarthy described the hypothetical Advice Taker program, whose main purpose was to use knowledge, including general ideas about the world, in solving problems. Basing its actions on a set of axioms, the program could, depending on conditions, expand its set of fundamental premises, thereby embodying the principles of knowledge representation and reasoning [5].

In 1959, M. Minsky founded the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology and designed a robot capable of perceiving and manipulating surrounding objects. The device was equipped with optical scanners and tactile sensors and was controlled by a computer [3].

Since 1960, AI as an independent field has spread throughout the world; the USSR, Japan, and European countries joined the work. The period from 1945 to 1960 was the first stage in the development of the new science. During this time the main directions of AI research emerged, whose results gave rise to a huge number of diverse ideas, generating both interdisciplinary connections and fundamentally new concepts. Experimental intelligent systems became the starting point for such research areas as neural networks, game-playing programs, labyrinth search, machine translation, automatic theorem proving, and recognition of visual patterns and external influences. It should be noted that robotics is not considered in this article: by 1945 there were already a large number of automatic machines, and the development of computer technology and the emergence of AI theory marked a new stage of research in that direction.

Subsequently, as hardware components grew cheaper and more powerful, these areas of AI research were transformed and intertwined with research in philosophy, psychology, and cultural studies. Theories of "weak" and "strong" AI appeared. Writers and filmmakers, in their works of fiction, tried to show possible interactions between man and machine, touching on problems of ethics, morality, and religion. AI research, as well as the very understanding and use of the theory of intelligent machines, has been commercialized. AI has entered the world market not only as high-performance computer information systems, but also as a global cultural idea that has given rise to both new hopes and new phobias in people's minds.

Today, every specialist connected in one way or another with computer technology has heard of AI. For the last 70 years the world has been trying to use computer technology to solve problems that have been relevant throughout human history: understanding and recreating life, reason, and the ability to perceive, understand, and explain the surrounding world.

Perhaps this problem will be solved in the process of creating AI, perhaps AI will solve this problem after its appearance. However, it can be confidently asserted that AI, like any other scientific direction, will receive productive development only as a result of the joint active efforts of the world scientific community.

The ideas of AI highlighted in this paper are important because they are fundamental. They were the first attempts to embody the capabilities of the mind, recreated by separate technical systems, so as subsequently to unite them into a full-fledged intellectual environment. These areas of research were chosen for consideration, first of all, because they represent the most general ways of knowing and understanding the surrounding world, from the perception of reality (pattern recognition) to attempts to describe the environment and adapt to it (translation, labyrinth search, games).

Thus, the concepts of learning, playing, searching for something new, and adapting it to improve human existence, formed in the course of the cultural and historical development of society, are reflected in the theory and practice of AI. The components of the complex idea of artificial intelligence implemented with computer tools and described in this article marked the beginning of a new era of computer culture, which has covered most of our planet and outlined the features of today's historical era.

References:

1. Voronovich V. V. Machine Translation. - Minsk: BSU Publishing Center, 2013. - 39 p.

2. Znatnov S. Yu. On the software of computer proofs // Logical Investigations. - 2004. - Vol. 11. - P. 139-149.

3. Kruglinski S. Interview with Marvin Minsky [Electronic resource] // Discover. - 2007. - No. 1. - URL: http://www.myrobot.ru/articles/rev_marvin_minsky.php.

4. Muromtsev D. I. Introduction to Expert Systems Technology. - St. Petersburg: SPbGU ITMO, 2005. - 93 p.

5. Samuel A. Some studies in machine learning using the game of checkers // IBM Journal of Research and Development. - 1959. - No. 3. - P. 210-229.

6. Turing A. Can Machines Think? - Moscow: Fizmatlit, 1960. - 67 p.

7. Horgan J. The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Age. - St. Petersburg: Amphora, 2001. - 479 p.

8. McCarthy J. What is Artificial Intelligence? [Electronic resource] // Stanford University. - 2007. - URL: http://www-formal.stanford.edu/jmc/whatisai.
