
Mathematical Structures and Modeling 2016. N. 3(39). PP. 142-151

UDC 004.7

FUTURE INTERNET ARCHITECTURE: CLEAN-SLATE VS EVOLUTIONARY DESIGN

A. Taran
Postgraduate Student, e-mail: alina.taran.v@gmail.com

D.N. Lavrov
Associate Professor, Ph.D. (Eng.), e-mail: dmitry.lavrov72@gmail.com

Dostoevsky Omsk State University

Abstract. The Internet, developed long ago, has accumulated a range of problems, such as IP's narrow waist, security needs, availability, routing scalability, and support for mobility and multihoming. Many appeal to the well-known "clean slate" approach to resolve the challenges facing the Internet today. In this paper we consider two concepts of future Internet architecture: clean-slate and evolutionary research. In the part on the clean-slate approach we describe its main ideas, design goals, and several examples in this area. Then we describe the Internet design process through a biological metaphor and consider the pros and cons of the two research paths.

Keywords: Internet, computer network, evolutionary design.

Introduction

As is known, the Internet was developed as a reliable data transmission system for the United States Department of Defense; the network was then used for academic purposes in different scientific areas and after that came to the broader market, becoming a social phenomenon. It was unpredictable in the early '70s that the Internet would take up such a great part of our world. The Internet has entered areas of people's lives such as business, the economy, the military, and communication. These changes can be traced by the fact that the number of connected hosts has grown from fewer than 200 in the 1980s to millions in 2014. The nature of Internet traffic has also changed according to the new requirements of society, which has led to the appearance of the Web, file sharing, video streaming, and other new applications. Furthermore, not only do software companies such as Microsoft, Oracle, SAP and others take advantage of the Internet, but so do other major sectors such as the automotive industry, the entertainment industry, banks, and insurance companies.

Such broad possibilities for application account for the huge success of the Internet, but they are also the cause of many difficulties and of the diversity of a system which grew from sending e-mails in university circles to interconnecting the world. Moreover, developers of services and applications should respond as quickly as possible to the growing needs of users throughout various industries. Due to these user requirements, the Internet faces the challenges outlined below.

First, the lack of security in the Internet is the biggest problem. It is not enough to add security to each layer or individual protocol used in the Internet, because securing each component separately does not make the system as a whole secure. Phishing, spam, spyware, worms, and viruses exploit gaps in security and are problems faced by ordinary people. It might be said that these problems are not the responsibility of the network. But if we speak about a future Internet as a trusted and cooperative system, the goal of "true" security should be stated. That is why a defensible position on the network's role in supporting end-host security should be claimed, and a consistent division of responsibility between the network and the end-host system should be proposed.

There is one more issue constantly faced by Internet service providers (ISPs). ISPs should provide a service which complies with the user requirements stemming from the Internet's crucial role in both business and private life, in terms of reliability, resilience, and availability. If users or applications in two systems have an intention to communicate, and these systems are allowed to communicate by the policies of the interconnecting networks, then they should be able to do so. This aim seems quite simple, but it is not: it is not easy to remedy every barrier to availability, ranging from transient routing instability to denial-of-service attacks.

The problem of flexibility, extensibility, and the capability of supporting innovations in the Internet architecture can also be mentioned. This problem is well known as the "narrow waist" of IP: today the Internet is primarily used for content distribution, not just for host-to-host communication. Such an architecture, with the network layer (IP) in the middle and various designs above and below it, implements all the functionality necessary for global interconnectivity, but at the same time it is rigid and makes it hard to change the middle layer to adapt to future unanticipated and large-scale changes in network behaviour.

Another unresolved problem is network management. There is a lack of tools for recognizing which network items are being used by which users and which applications are currently using which specific network components. We still do not understand how to set up and control a network so that it operates reliably, is easy to manage and debug, and still scales well.

The Internet is experiencing a new stage of development: a shift from PC computing to mobile computing. Mobility has become an important component of the future Internet. Yet it is very difficult to add mobility to the current Internet architecture, as the current Internet naming system is based on the host address, typically the IP address. To achieve scalability of routing, an address hierarchy is used which imposes a structure on host addresses that relates to their location within the Internet. Thus making mobility the norm is one more big task of the present.
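To make this location dependence concrete, here is a minimal sketch of longest-prefix forwarding; the table contents and interface names are invented for the example and do not come from the paper. Because the forwarding table matches on prefixes that encode topological location, a host that moves must change its address, and its "name" changes with it:

```python
import ipaddress

# Illustrative forwarding table (contents invented for the example):
# prefixes encode *where* a host is attached, not *who* it is.
FORWARDING_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): "if-campus-a",
    ipaddress.ip_network("10.2.0.0/16"): "if-campus-b",
    ipaddress.ip_network("0.0.0.0/0"):   "if-upstream",   # default route
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific prefix containing dst wins."""
    addr = ipaddress.ip_address(dst)
    matching = [net for net in FORWARDING_TABLE if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

# A host addressed 10.1.5.7 is reachable only while attached under 10.1/16;
# moving it to campus B forces a new 10.2/16 address, so its identity
# changes with its location -- exactly what makes mobility hard to retrofit.
print(next_hop("10.1.5.7"))   # if-campus-a
print(next_hop("10.2.33.4"))  # if-campus-b
```

The hierarchy is what keeps routing tables small, so the coupling between identity and location cannot simply be dropped without losing scalability.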

There are two essential approaches to resolving these challenges. Incremental, or evolutionary, research means that the system under investigation is changed incrementally, moving from one state to another while preserving old capabilities and adding new ones. The second, the clean-slate approach, means that every new state of the system is developed from scratch, providing the old capabilities on new principles.

Many researchers believe that it is impossible to solve the problems mentioned above within the current Internet architecture. So a lot of papers have appeared in the direction of rethinking the fundamental assumptions and design decisions and starting from scratch. The research community nowadays intends to focus not on the incremental method but on clean-slate attempts to find appropriate solutions for all these issues. We will consider and evaluate both approaches.

1. Clean-slate approach

As was said above, the aim of the clean-slate approach is not to be limited by the existing architecture. Proponents of this approach intend to use the benefits of current technologies, opportunities, and demands to suggest a brand-new architecture which will be easily manipulated, easily integrated, and open to changes driven by new requirements. As the Internet has been in use for more than 30 years now, it has probably reached the point where people are unwilling or unable to experiment on the current architecture. This approach embodies a good state of mind: out-of-the-box thinking, the design of alternative network architectures, and exploration of an architecture so that it is more likely to be integrated into the network environment. Breakthroughs in technologies such as fast packet optical components, wireless networks, fast packet-forwarding hardware, and significant computational resources contribute to the development of this approach [5].

It is noteworthy that several attractive and valuable enhancements, like multicast, Mobile IP, IPsec, QoS mechanisms, and IPv6, have not been widely integrated in spite of their theoretical excellence [5]. Probable reasons are the risk of replacing a complex system that already works, the absence of short-term benefits for adopters, and the difficulty of updating the equipment base. Therefore, part of the community believes that this type of research is complementary to the indispensable evolutionary approach. Further, we review some clean-slate projects which are forward-looking or have already yielded fruit: GENI, named data networking, and network virtualization.

1.1. Global Environment for Network Innovations (GENI)

The National Science Foundation (NSF) in the United States has initiated a research program with a strong focus on defining an architecture for the future Internet, and it plans to encourage research teams to reach consensus on broad architectural themes. To provide an open, large-scale, realistic experimental facility for evaluating new network architectures, the Global Environment for Network Innovations (GENI) was initiated by NSF. The map of GENI is shown in Figure 1.

GENI not only supports existing projects on a dedicated backbone network infrastructure, but also invites other infrastructure platforms to participate in the federation: its device control framework provides the participating networks with users and operating environments and allows observing, measuring, and recording the resulting experimental outcomes [7]. That is why it differs from other testbeds: there are no limits on the network architectures, services, and applications to be evaluated. Thus it allows clean-slate designs to be experimented with by real users under real conditions.

Figure 1. The GENI map

The main goal of GENI is to construct multiple virtualized slices out of the substrate for resource sharing and experiments. It rests on two key points:

- physical network substrates that serve as expandable building-block components;
- a global control and management framework that assembles the building blocks into a connected whole.

Thus, GENI testbed work has two main branches: the first is deploying a prototype testbed that federates different small and medium testbeds together; the second is running observable, controllable, and recordable experiments on it. Along these branches there are several research fields, such as the GENI experiment workflow and services working group; the control framework; the campus/operations, integration, and security area; and instrumentation and management. All in all, by delivering instrumentation and measurement support, implemented as an initial entity of GENI, an open and extensible testbed for experimentation with new Internet architectures is provided.
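The slicing idea itself can be pictured with a toy bookkeeping model. This is our own illustration, not the actual GENI control framework; all names and units are invented. The control framework's role is to carve isolated shares out of shared substrate resources and track who holds what:

```python
# Toy bookkeeping for slicing a substrate: each experiment receives an
# isolated share of a physical resource. Names and units are invented.
substrate = {"node-1": 8, "node-2": 8}        # e.g., CPU cores per node

slices = {}                                    # slice name -> reservations

def allocate(slice_name, node, amount):
    if substrate[node] < amount:
        raise RuntimeError(f"{node}: cannot isolate {amount} more units")
    substrate[node] -= amount
    slices.setdefault(slice_name, {})[node] = amount

allocate("ndn-experiment", "node-1", 4)
allocate("openflow-test", "node-1", 2)
print(slices)     # two isolated experiments share the same physical node
print(substrate)  # remaining capacity: {'node-1': 2, 'node-2': 8}
```

Isolation is what lets radically different architectures run side by side on the same hardware without interfering with one another.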

1.2. Named Data Networking (NDN)

The Named Data Networking project was funded by NSF in September 2010 as one of four NSF-funded future Internet architecture projects, with participation from about 13 universities and research institutes from the United States, France, and China. The main goal of this project is to reconstruct the current layered network architecture, whose overstretched initial assumptions often cause challenges and conflicts with users' needs.

This new architecture is motivated by the fact that the usage of the Internet has shifted from end-to-end packet delivery to a content-centric model. The old model is increasingly limiting and makes it difficult to comply with IP's requirement of communicating by discovering and specifying a location. Figure 2 illustrates the Named Data Networking model, which builds another "narrow waist" around content chunks instead of IP.

Figure 2. The new "narrow waist" of NDN (right) compared to the current Internet (left) [7]

The NDN architecture names data instead of its locations, and in this way data becomes a first-class entity of the structure. This allows developing a new Internet architecture that can build on the strengths — and address the weaknesses — of the Internet's current host-based, point-to-point communication architecture in order to naturally serve emerging communication needs. Moving from a "where" model to a "what" model can resolve the technical challenges that must be addressed in a future Internet architecture: routing scalability, fast forwarding, trust models, network security, content protection and privacy, and fundamental communication theory [7].

Furthermore, the concept of encryption also changes. Instead of trying to secure the transmission channel or data path through encryption, Named Data Networking decouples trust in data from trust in hosts and tries to secure the content itself by coupling names to data through signatures. What is more, such an architecture enables traffic optimization on the network's part: multiple copies of the same data need not be sent between endpoints over the network again.
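A minimal sketch of the name-based idea follows. This is our simplified model, not the actual NDN protocol (a real forwarder also keeps pending-interest and forwarding tables), and the hash below is only a stand-in for a producer's public-key signature. The point is that a consumer requests a name, any node holding a copy can answer, and the consumer verifies the data itself rather than the channel it arrived over:

```python
import hashlib

def sign(name: str, payload: bytes) -> str:
    # Stand-in for the producer's public-key signature over name + content.
    return hashlib.sha256(name.encode() + payload).hexdigest()

# Any node's content store can satisfy a request: data, not hosts, is named.
content_store = {}

def publish(name: str, payload: bytes) -> None:
    content_store[name] = (payload, sign(name, payload))

def fetch(name: str) -> bytes:
    payload, signature = content_store[name]     # served from any cache
    if signature != sign(name, payload):         # consumer verifies the data,
        raise ValueError("tampered content")     # not the channel it came over
    return payload

publish("/omsu/papers/fia/segment/0", b"...chunk bytes...")
print(fetch("/omsu/papers/fia/segment/0"))
```

Because the signature travels with the named data, cached copies anywhere in the network remain as trustworthy as the original.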

1.3. Network virtualization

Let us consider a research direction which represents a success of clean-slate thinking. Network virtualization has long been a challenge for the network research community. The aim of network virtualization is that multiple isolated logical networks, each with potentially different addressing and forwarding mechanisms, share the same physical infrastructure. In order to better understand the principle, let us turn to the analogy with computer virtualization. The reason for the success of computer virtualization lies in an abstraction of the underlying hardware: the virtualization layer provides a hardware abstraction that allows slicing and sharing resources among the guest operating systems, so each operating system thinks that it has its own private hardware. This hardware abstraction allows the system to respond quickly to changes both above and below the virtualization layer and makes it more competitive due to its flexibility.

Network virtualization also improves resource allocation, permits operators to checkpoint their network before changes, and allows competing customers to share the same equipment in a controlled and isolated scope [10].

Moreover, virtual networks also promise a safe and realistic environment in which to deploy and evaluate experimental clean-slate protocols in production networks.

Let us give an example of a project in the network virtualization area. The OpenFlow initiative, started by the Stanford Clean Slate program, is an innovative concept providing opportunities to test new network architectures. The OpenFlow protocol is based on an Ethernet switch with an internal flow table and a standardized interface for adding and removing flow entries. The structure of these tables differs between vendors, but there is a common set of functions. OpenFlow provides experimenters with tools to act on the flow tables and change the way the switch forwards packets of certain flows [9]. So it allows researchers to run experiments on heterogeneous switches and can be used to create virtual networks.
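The flow-table idea can be sketched as follows. This is a toy match-action table of our own devising, not the OpenFlow wire protocol or any vendor API: an experimenter adds and removes entries, and the switch forwards each packet according to the highest-priority matching entry:

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict      # header fields to match, e.g. {"dst": "10.0.0.2"}
    action: str      # e.g. "out:2" (forward to port 2) or "drop"
    priority: int = 0

flow_table = []

def add_flow(entry: FlowEntry) -> None:
    # The "standardized interface" role: experimenters add/remove entries.
    flow_table.append(entry)
    flow_table.sort(key=lambda e: -e.priority)   # highest priority first

def forward(packet: dict) -> str:
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry.match.items()):
            return entry.action
    return "send-to-controller"                  # table miss

add_flow(FlowEntry(match={"dst": "10.0.0.2"}, action="out:2", priority=10))
add_flow(FlowEntry(match={}, action="drop"))     # low-priority catch-all

print(forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # out:2
print(forward({"dst": "10.0.0.9"}))                     # drop
```

Separating the match-action table from the forwarding hardware is what lets one physical switch carry several experimental networks at once.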

2. Evolutionary approach

The other approach is evolutionary research. The goals of evolutionary Internet research are to understand the requirements and behaviour of the current Internet, to clarify existing problems, and to resolve them under two essential constraints: backward compatibility and incremental deployability.

We can understand the processes in computer networks through a biological metaphor. Let us agree on what we mean by "species" and "environment" in terms of computer networks. By species we mean a system which is based on a conceptual organization or architecture (e.g., datagram internetworking), uses certain protocols (e.g., IP, TCP, 802.11), is implemented on various technologies (e.g., electronic routers, optical transmission), and supports several applications (e.g., Web, P2P) [8]. This system is surrounded by an "environment": a large number of interacting users, service needs, outside dangers, economic circumstances, and other external conditions. As in biology, the network environment changes constantly and, more importantly, unpredictably over time. One good example of this unpredictability is network security: historically this component was not a big issue, because when the Internet was designed people did not think about this type of problem — as was said, the Internet was intended for other purposes.

Nature has an opportunistic mechanism that keeps species from dying out in a permanently changing environment — evolution. Biological evolution is based on three factors: genetic heritage, mutation, and selection. These three basic mechanisms can be similarly illustrated in Internet evolution. To begin with, genetic heritage corresponds to the property of backward compatibility: future generations of network architectures, protocols, and backend technologies persist mostly because of their predecessors.

Second, mutation corresponds to the transformation of network-species components such as architectures, protocols, technologies, or applications. This process does not create a new species: when the external conditions have changed, nature calls for the evolutionary process, and the predecessor, pushed by evolution, varies itself. Of course, there is a difference between network variation and biological variation in that the former is not random. The scientific community, or even one individual, can deliberately create plenty of highly probable survivable variations. Still, we cannot predict the best of these mutations due to the non-static environment; we can only create the conditions and let the environment select the most competitive mutation. So it is our responsibility to provide the most diverse variations of the predecessor.

Finally, the environment chooses one mutation via natural selection. Each mutation is associated with a certain degree of fitness, i.e., the ability of the resulting network to satisfy the needs of the environment in that period of time. Certainly, as in the natural world, in Internet research the mutation associated with the highest fitness wins; in other words, the mutation which has the lowest costs of deployment and expansion.
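The generate-variations-and-let-the-environment-select loop described above can be summarized in a few lines. This is purely illustrative: the one-dimensional "design space" and the fitness function are our stand-ins for real deployment costs and user needs:

```python
import random

def fitness(variant, environment):
    # Stand-in for "ability to satisfy the environment at this moment":
    # the closer a variant sits to the current needs, the fitter it is.
    return -abs(variant - environment)

incumbent = 0.0                 # the currently deployed design
for generation in range(20):
    needs = 0.1 * generation + random.gauss(0.0, 0.5)   # drifting environment
    # Researchers produce many backward-compatible variations of the incumbent.
    variants = [incumbent + random.gauss(0.0, 0.3) for _ in range(10)]
    # The environment, not the designers, selects the most competitive one.
    incumbent = max(variants + [incumbent], key=lambda v: fitness(v, needs))

print(f"surviving design after 20 generations: {incumbent:.2f}")
```

The designers' role in this loop is to supply diverse candidates; selection itself remains outside their control.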

3. Evaluation

In recent years Internet architecture design has been clean-slate oriented. The goal of clean-slate design is to make a new Internet architecture that is significantly better by designing it from scratch [1]. The author is convinced that this is too ambitious an aim, one that can hardly be successful in reality. There are many reasons to believe so.

Firstly, economic reasons play a major role. Given two options — to use something new, or to use something that works badly but is still working — we will likely choose the second. The costs of integrating a new product far exceed the benefits in the short run. Furthermore, it is hard to estimate where a new technology's weak points lie and which problems we may encounter when deploying it in the real world. So it is very hard for a new technology to replace a working one; there should be a "really big urge" for deploying a new architecture. The problem is not the lack of experimentation, but the fact that the new technology is not as competitive as the previous ones.

Another argument of proponents of the clean-slate approach is the "ossification" of the current Internet architecture [6]. As was said above, the middle layer of the protocol stack is the narrow place of the network architecture: everything goes through it. We can look at this from the other side. The Internet architecture should relate a variety of constantly appearing link-layer technologies to a diversity of services and applications. The central protocols of the architecture are a background which must be stable and change very slowly in order to support innovations at the other layers. In terms of biology, such a stable background is the gene regulatory networks (GRNs), which were established about 510 million years ago and have not substantially changed since then. Despite the different morphology of human body parts, these GRNs are a component part of all of them [4]. So the evolutionary core delivers a stable basis on which the higher levels may be developed.

What we can do now is stop thinking of the current Internet architecture as an artefact and start thinking of it as an expanding ecosystem affected by many areas of science. The author believes that a new architectural model cannot come out of the blue; it is always hard work informed by the mistakes of the past. We cannot abandon the Internet's past — that is the natural choice of a mutation occurring in nature. Research can be focused on measuring and understanding current problems. Our goal is to help evolution by creating a great variety of different ideas which, firstly, can resolve these challenges and, secondly, can be adopted by the current Internet in a backward-compatible and incrementally deployable manner. As participants in this evolution, we should continue to direct and refine the ideas of researchers who collaborate and communicate.

There is a successful example of such collaboration from the mathematical field: the list of twenty-three problems in mathematics, well known as Hilbert's problems, stated by the German mathematician David Hilbert in 1900. He presented ten of the problems at the Paris conference of the International Congress of Mathematicians and published the complete list of 23 problems a couple of years later. In his report he formulated the problems which were the most significant for mathematics at the beginning of the XX century. No one before him had set out such a titanic task: there were many different directions in mathematics, and it was very difficult for one person to cover all of its sections. But Hilbert was broad-minded: he worked in almost all existing fields of mathematics and obtained significant results in many of them. Building on the success of collaborating mathematicians (19 problems have been fully or partly resolved), the Clay Mathematics Institute stated seven problems in mathematics (the Millennium Prize Problems) in 2000; a correct solution to any of them results in a US $1,000,000 prize awarded by the institute. Our aim in the investigation of future Internet technologies is to continue pointing out the issues of the current situation and expanding the areas of investigation through the cooperation of researchers. What can also be done is to create the conditions for good science — science which requires relevance to the real world — by formulating the problems of the current Internet and building a good environment for people to cooperate.

Conclusion

The author believes that in reality there is only one opportune way: evolutionary Internet research. Unfortunately, the evolutionary approach is frequently regarded as lightweight and small-minded, as a collection of "hacks". The biggest difference between clean-slate and evolutionary research is whether the existing Internet is taken into consideration. The main goal of evolutionary research is to design protocols in the context of the existing environment and ecosystem, not in a vacuum. As the well-known computer scientist Walter Savitch said, "In theory there is no difference between theory and practice; in practice there is." A brand-new protocol or service cannot be as competitive and robust as already working analogues. It neither considers all the links between the variety of players in the Internet environment, nor reflects the spectrum and synergy of all technologies and applications in the real Internet.

Besides, we know very well one concept which already works successfully in practice. Nature has a good mechanism for adapting to new conditions of reality — the evolutionary process. The evolution of species can be characterized by three natural mechanisms — genetic heritage, mutation, and selection — which help species survive in radically changing environments without needing a clean-slate restart [3]. Returning to the concept of the evolution of the Internet, evolutionary research can be viewed as the creation of survivable mutations. We are able to produce a multitude of diverse solutions which can then compete in the real Internet. The author believes that we cannot state what will be important and necessary in 10 years. We cannot predict the future; we do not even truly know the present. What we can do now is to invest in and support a variety of ideas and innovations grounded in the real world, in order to find the optimal solution which meets the requirements of the present.

References

1. Bellovin S.M., Clark D.D., Perrig A., Song D. A clean-slate design for the next-generation secure internet. 2006.

2. Papadopoulos J.D.C. NDN Testbed. Accessed January 15, 2015.

3. Dovrolis C. What would Darwin think about clean-slate architectures? // ACM SIGCOMM Computer Communication Review, 38(1):29-34, 2008.

4. Dovrolis C., Streelman J.T. Evolvable network architectures: What can we learn from biology? // ACM SIGCOMM Computer Communication Review, 40(2):72-77, 2010.

5. Feldmann A. Internet clean-slate design: what and why? // ACM SIGCOMM Computer Communication Review, 37(3):59-64, 2007.

6. Pan J., Jain R., Paul S., So-In C. MILSA: A new evolutionary architecture for scalability, mobility, and multihoming in the future internet // Selected Areas in Communications, IEEE Journal on, 28(8):1344-1362, 2010.

7. Pan J., Paul S., Jain R. A survey of the research on future internet architectures // Communications Magazine, IEEE, 49(7):26-36, 2011.

8. Rexford J., Dovrolis C. Future Internet architecture: clean-slate versus evolutionary research // Communications of the ACM, 53(9):36-40, 2010.

9. Roberts J. The clean-slate approach to future internet design: a survey of research initiatives // Annals of telecommunications, 64(5):271-276, 2009.

10. Sherwood R., Gibb G., Yap K.-K., Appenzeller G., Casado M., McKeown N., Parulkar G. Flowvisor: A network virtualization layer. 2009.

