
Algorithm for balancing the traffic of Ethernet ring networks

Keywords: MSTP, RSTP, BPDU, root, switch, root and backup port, ring topology.

Today the majority of data networks are built on Ethernet technology and use a ring topology, which provides good network reliability. Most of the equipment used in such networks consists of access switches with 8 to 48 subscriber ports. Such switches usually work only with frames, i.e. at level 2 of the OSI model. This construction, in addition to its simplicity, reliability and low cost, has drawbacks, such as the possibility of Ethernet storms that can bring down the entire network. The cause of storms lies in the underlying Ethernet technology, namely in the absence of a mechanism for removing broadcast frames from the network. Consequently, this technology does not allow more than one switching channel to exist between two points exchanging information. To protect against storms, special protocols are used that block redundant switching channels between the communications equipment. Many such protocols are in use today, for example STP, PVSTP and RSTP, but the most common is MSTP. They all perform the same function: finding redundant communication channels between two switches. When such channels are found, they block all but one, selected by comparing parameters based on coefficients that depend on the channel bandwidth, thereby prohibiting the transmission of traffic on all channels except the selected one. In practice, the basic principle of these protocols restricts the use of all possible communication channels, which results in an inefficient allocation of the available bandwidth in the network. To solve this problem, we developed an algorithm that makes it possible to use all available network bandwidth when it is needed, without affecting the quality of service for existing subscribers. Our algorithm uses the built-in mechanisms of existing protocols. This allows, with minimal modification, to almost double the efficiency of the network.

Lihtsinder Boris Yakovlevich,

DPhil, professor of MSIB PSATI, Russia, [email protected]

Ryzhikh Sergey Vyacheslavovich,

graduate student of the Volga State University of Telecommunications and Informatics (PSUTI), [email protected]

Clitheroe Sean,

telecommunications engineer,

"SilkSmith" company, United Kingdom, [email protected]

Introduction

There are many ways to control the load on data networks, from the cable assemblies that provide the physical connection of user workstations to the core of the system. The method of adjustment depends on the switch class.

Data networks are usually considered to be arranged in three hierarchical layers: the access layer, the aggregation layer and the core layer.

Switches can also be classified according to the levels of the OSI model at which they forward, filter and switch frames [1].

A distinction is made between Layer 2 (L2) switches and Layer 3 (L3) switches [2]. L3 switches are used mainly for tasks at the aggregation and core layers. L2 switches analyze incoming frames, make a decision about their forwarding and transmit them to their destination based on the MAC address. This address belongs to the link layer of the OSI model. These switches are used mainly for access-layer tasks. This layer provides end-user access and is the largest of all the layers listed above.

Each layer is defined by its own suite of protocols, algorithms and rules for the transfer of information. When working with packets, L3 is able to use two or more ways for a packet to reach its destination, which allows traffic balancing. It can also use mechanisms to measure delay, loss and jitter and choose the best path. But this is only possible when working with IP packets. When working with frames at L2, this approach is not possible: the presence of two or more paths can cause a traffic loop and consequently a "storm".

Spanning tree method

Algorithms are used in Ethernet networks to prevent traffic loops. The logic of their operation can be described as building a "spanning tree". This method is based on the sequential elimination of certain parts of the circuit, so that forwarded traffic can use only one way to reach the destination host.

Consequently, the method selected for forwarding traffic must take into account the requirements placed on the communication channel. One of these parameters of the communication channel is overload.

Ethernet switches have internal buffer memory for the temporary storage of frames. If the link is free, the frame is immediately forwarded to its destination. If this is not possible, the switch places the frame into temporary storage in the buffer memory, forming a queue. In the case of queue overflow, frames are dropped in accordance with the queuing algorithm used.
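
As an illustration of this buffering behaviour, the following minimal Python sketch models an output port with a bounded frame buffer and a simple tail-drop policy on overflow; the class, the capacity value and the drop policy are assumptions of the illustration, not part of the proposed algorithm.

```python
from collections import deque

class OutputPort:
    """Toy model of a switch output port with a bounded frame buffer."""

    def __init__(self, capacity=64):
        self.capacity = capacity          # maximum frames held in buffer memory
        self.queue = deque()              # frames waiting for the link
        self.dropped = 0                  # frames lost to queue overflow

    def enqueue(self, frame, link_free):
        if link_free and not self.queue:
            return "forwarded immediately"    # link idle: no queuing needed
        if len(self.queue) < self.capacity:
            self.queue.append(frame)          # store the frame, forming a queue
            return "queued"
        self.dropped += 1                     # queue overflow: frame is dropped
        return "dropped"

port = OutputPort(capacity=2)
print([port.enqueue(f"frame-{i}", link_free=False) for i in range(4)])
# ['queued', 'queued', 'dropped', 'dropped']
```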

Traffic is distributed using spanning tree protocols such as STP, RSTP and MSTP [3].

The most common are ring topology networks based on the Multiple Spanning Tree Protocol (MSTP) [4].

In the standard operation of the STP, RSTP and MSTP protocols, the spanning trees at L2 are created in the switch ports at the beginning of traffic forwarding and change only in the event of a topology change or a failure of one of the elements. Spanning trees are built by determining the main (root) switch and calculating the path cost to that switch. These parameters are exchanged between switches using service frames at specific time intervals. Root switches are selected based on their priorities and may be different in different spanning trees. Path costs are calculated from the bandwidth of the physical switch ports through which these paths pass. The route through the switch port with the lowest cost is selected as the primary one, and the other routes become backups.
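
The root-port selection described above can be sketched as a cost comparison; the port costs, link speeds and the function below are illustrative assumptions and do not model the full MSTP state machine.

```python
# Nominal port costs as used by spanning tree protocols (illustrative values
# for 100 Mbit/s and 1 Gbit/s links).
PORT_COST = {100: 200000, 1000: 20000}

def select_root_port(candidates):
    """Pick the port with the lowest accumulated path cost to the root switch.

    `candidates` maps port name -> (advertised root path cost, local link speed).
    The remaining candidate ports become backups (blocked for user traffic).
    """
    def total_cost(item):
        _, (advertised_cost, speed_mbps) = item
        return advertised_cost + PORT_COST[speed_mbps]

    best_port, _ = min(candidates.items(), key=total_cost)
    backups = [p for p in candidates if p != best_port]
    return best_port, backups

root_port, backup_ports = select_root_port({
    "ge0/1": (20000, 1000),   # reaches the root over one more 1 Gbit/s hop
    "ge0/2": (40000, 1000),   # longer path around the ring
})
print(root_port, backup_ports)   # ge0/1 ['ge0/2']
```

Note that only the nominal port bandwidth enters this comparison; this is exactly the limitation discussed next.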

The disadvantage of this method is that it takes into account only the nominal bandwidth of the ports and ignores their real traffic load. In reality, one of the ports could be overloaded by local traffic, and its real bandwidth available for transit traffic passing through the spanning tree can differ significantly from the nominal value. As a result, the switches may accumulate large queues and traffic delays.

An overload condition of a switch port is understood as the presence of at least one frame in the queue for this port. In a non-congested condition, there are no queues at the port. The queues are checked using a standard mechanism built into the switch; this functionality is available in all modern switches.
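
Under this definition, the overload check reduces to inspecting the occupancy of the output queue; a minimal sketch (the queue-length accessor is hypothetical):

```python
def port_overloaded(queue_length: int) -> bool:
    """A port is overloaded if at least one frame is waiting in its queue."""
    return queue_length >= 1

print(port_overloaded(0), port_overloaded(3))   # False True
```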

To eliminate possible overloads, the real load must be monitored at each of the switch ports, and in case of an overload at L2, all (or part) of the traffic passing through the congested switch port can be adaptively switched to the backup path.

Traffic balancing algorithm

The implementation of the algorithm is as follows. Figure 1 shows a typical ring diagram of switch connections.

Figure 1. Ring topology of Ethernet network

Switch 1 is chosen as the root switch. The STP protocol creates a logical ring break at point 2. Switch 3 creates this logical break and records it in its memory, but its neighbour, switch 4, located on the other side of the logical ring break, has no information about this. The logical ring break divides the ring into two branches, in which information flows to root switch 1 first clockwise and then counter-clockwise (Figure 1, arrows above). Switch ports directed towards the root switch are considered active (blackened in the figure); the other ring ports are used for backup. Normally, information about the path costs is transferred in the "path cost" field (field "a") of the service frames (BPDUs) (5). These BPDUs move from the root switch in opposite directions around the ring (arrows below). The logical break does not obstruct the transfer of BPDUs around the ring. Each BPDU makes a complete cycle and is then removed by root switch 1.

The algorithm considers the load on the output ports of the switches and provides the ability to reduce an overload by successively moving the logical ring break (point 2) in the direction of the overloaded switch. For example, when an overload occurs at point 6 on the output port of switch 7, the logical break moves to position 8. In this case, the traffic of switch 3 is transmitted toward the root switch along the backup path (counter-clockwise), which relieves the pressure on switch 7. If the overload is not eliminated and the left branch does not have any additional overload, then on the next BPDU cycle the logical break point moves one more step, and so on. As the overload diminishes, the break point moves back in the opposite direction.
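
The step-by-step displacement of the break point can be illustrated with a small simulation; the switch numbering along the branch and the congestion input are assumptions made only for this example.

```python
def next_break_position(break_pos, original_pos, overloaded_pos, overloaded):
    """One BPDU cycle: move the logical break one step toward the overloaded
    switch while the overload persists, or one step back toward its original
    position once the overload clears."""
    if overloaded and break_pos != overloaded_pos:
        return break_pos + (1 if overloaded_pos > break_pos else -1)
    if not overloaded and break_pos != original_pos:
        return break_pos + (1 if original_pos > break_pos else -1)
    return break_pos

# Positions along the ring branch: the break starts at switch 3, the overload
# is detected further along the branch at switch 7 (numbering follows
# Figure 1 only loosely).
pos = 3
for congested in [True, True, True, False, False, False]:
    pos = next_break_position(pos, original_pos=3, overloaded_pos=7,
                              overloaded=congested)
    print(pos, end=" ")       # 4 5 6 5 4 3
```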

The process of moving the logical ring break of an overloaded network can be performed in various ways. We implement it using the free bits of some fields of the BPDU. To inform the other switches about an overload we may, for example, use the "Port Identifier" field [5]. It is 1 byte in size, and currently only 4 of its bits are used; the value of the remaining 4 bits is always set to 0 [5]. To employ this method, the most significant bits of this field are used. Significant bit "c" is used to signal the presence of an overload on the path to the root switch. To determine the switch nearest to the logical break point (4), significant bit "b" (from the unused 4 bits of the "Port Identifier" field) is used.
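
A possible encoding of the two flags in the unused bits of the "Port Identifier" byte is sketched below; the exact bit positions chosen here are an assumption of the illustration and are not fixed by the article or the standard.

```python
# Hypothetical layout: the four spare (always-zero) bits of the "Port Identifier"
# byte carry the algorithm's flags; here bit 7 is "c" (overload on the path to
# the root switch) and bit 6 is "b" (port type reported to the neighbour).
BIT_C_OVERLOAD = 0x80
BIT_B_PORT_TYPE = 0x40

def set_flag(port_id_byte: int, mask: int, value: bool) -> int:
    """Set or clear one of the spare flag bits without touching the 4 used bits."""
    return (port_id_byte | mask) if value else (port_id_byte & ~mask & 0xFF)

def get_flag(port_id_byte: int, mask: int) -> bool:
    return bool(port_id_byte & mask)

pid = 0x05                                   # only the standard 4 bits in use
pid = set_flag(pid, BIT_C_OVERLOAD, True)    # mark congestion toward the root
print(hex(pid), get_flag(pid, BIT_B_PORT_TYPE))   # 0x85 False
```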

In the standard MSTP implementation, the specified field is used to determine the type of the neighbouring switch port.

In the initial state of the BPDUs (5) output by the root switch, significant bit "c" is set to "zero" in both BPDUs. When an overload is detected on the active port, switch 7 changes bit "c" to "one" and transmits the modified BPDU further on to the next switches (counter-clockwise). The other BPDU, moving in the opposite direction, arrives at switch 7 through the backup port and therefore does not register the overload.

The outgoing bit "b", of both clockwise and counter-clockwise BPDU from the root switch, is set to a state that determines the type of port that will be sent this BPDU and on to neighbouring switches. So, from the output from the loot switch, the status of significant bit "b" will determine the designated type of port. When passing BPDU counter-clockwise via the active switch 3, which had previously produced a virtual break (2); switch 3 sets bit "b" to a state that determines the blocked port type. This information is then passed to the neighbouring switch 4 which fixes in it its memory. After that, on the switch 4, bit "b" of the BPDU is set to determine the active port type and sent to the next neighbouring switch. Bit "b" of the clockwise BPDU enters the switch 3 through

the backup port and is defined as the root type of port. Thus, after the BPDU transfer cycle, only edge switches 3 and 4 have BPDU bits "b" and "c" moving counter-clockwise in a state of overload and blocking the ports. In all other switches, bits "b" have different values.
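
The hop-by-hop rewriting of the two flags can be sketched as a per-switch relay function; the port-type encoding and the switch attributes used here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RingSwitch:
    name: str
    owns_blocked_port: bool      # True only for the switch that created the break
    output_port_congested: bool  # local overload on the port toward the root

def relay_flags(switch: RingSwitch, bit_c: bool, bit_b: str):
    """Rewrite the BPDU flags before passing them to the next switch.

    bit_c accumulates "overload somewhere on the way to the root switch";
    bit_b tells the *next* switch what kind of port sent this BPDU.
    """
    if switch.output_port_congested:
        bit_c = True                          # switch 7 marks the overload
    bit_b = "blocked" if switch.owns_blocked_port else "active"
    return bit_c, bit_b

# Counter-clockwise pass: root -> ... -> switch 7 (congested) -> ... -> switch 3
# (owner of the logical break); its neighbour, switch 4, records the "blocked"
# indication it receives.
ring = [RingSwitch("sw7", False, True), RingSwitch("sw5", False, False),
        RingSwitch("sw3", True, False)]
c, b = False, "designated"                    # initial state set by the root
for sw in ring:
    c, b = relay_flags(sw, c, b)
    print(sw.name, c, b)
# sw7 True active / sw5 True active / sw3 True blocked
```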

BPDU bits "b" and "c", moving clockwise, will also have other meanings.

The simultaneous presence of the blocking and overload indications generates a signal, which is recorded in significant bit "a" of the "path cost" field of this BPDU.

For acceptable convergence, each spanning tree is used for no more than 20 switches. Therefore, the existing 4-byte "path cost" field is more than sufficient, and its most significant bits can be used to implement the algorithm.

A change in the significant bit of the "path cost" field is guaranteed to change the path from switch 3 to the root switch and therefore to move the break point from position 2 to position 8.

This method changes the cost of the path and allows the use of the mechanisms inherent in the standard implementation of the MSTP protocol.
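
The forced route change can be illustrated by setting the most significant bit of the advertised 32-bit path cost whenever the "blocking" and "overload" conditions coincide; treating bit 31 as the signalling bit "a" is an assumption of this sketch.

```python
PATH_COST_SIGNAL = 1 << 31      # highest bit of the 4-byte "path cost" field

def advertise_path_cost(base_cost: int, blocked_port: bool,
                        overload_seen: bool) -> int:
    """Inflate the advertised cost so the alternative branch wins the comparison.

    With at most about 20 switches per spanning tree, legitimate path costs
    never reach the top bits, so setting the most significant bit cannot
    collide with a real cost value.
    """
    if blocked_port and overload_seen:
        return base_cost | PATH_COST_SIGNAL
    return base_cost

# Switch 3 compares its two ways to the root switch and picks the cheaper one:
clockwise = advertise_path_cost(40000, blocked_port=True, overload_seen=True)
counter_clockwise = 80000
print("use backup branch" if counter_clockwise < clockwise
      else "keep current path")                 # use backup branch
```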

If the overload does not clear, each incoming service frame (BPDU) reallocates one more switch to the backup channel. This is carried out sequentially, starting with the switch closest to the logical break point of the ring.

Switching BPDU

In the standard implementation, topology changes are carried out by the complete removal of the previous switching tables. This results in a prolonged process of restructuring the spanning tree.

In the proposed algorithm, channel switching on the boundary switch 3 is performed by replacing the output interfaces in the existing MAC address switching tables. To inform the other switches, a new type of BPDU, the switching BPDU, is introduced.

Figure 2 shows the implementation of the switching BPDU.

A change in port status caused by the traffic balancing algorithm triggers the creation of a switching BPDU. This switching BPDU contains the MAC addresses (9) that are present in the MAC address table (8) and are reachable through ports whose traffic forwarding status did not change. This BPDU (10) is sent via port 2, which was transformed from the "blocking" state into the "root" state. For the other MAC addresses (those reachable through ports whose traffic forwarding status did change), the accessibility port in the MAC address table of logical switch 3 is replaced with port 2, which changed its state from "blocking" to "root". Thus, part of the traffic from switch 3 is directed to bypass the overload point.

Figure 2. Implementation of the switching BPDU

bit "c" with a value of zero will consistently remove the highest bit values in the "path cost" starting with the highest numbers in the logical ring. Thus, changes in the highest bit field "path cost" are guaranteed to change the path to the root switch from switch 3, and therefore move the break point in the ring to position 2. After that, according to the algorithm, a signal will be generated and sent to the switching BPDU through the port, changing the state from "blocking" to "roof. So the initial settings will be restored to traffic on the logical ring. Per cycle BPDU each physical switch will switch only one logical ring. Consequently, during one cycle in the BPDU the data network will not switch more than two logical paths. The first logical path handles the displacement of the point of rupture in the logical ring toward the overload point and the second deals with the displacement of discontinuity points in the other side of the logical ring toward the point of overload.

Thus, after the overload disappears after a few cycles BPDU all logical rings will be transferred to the original mode of operation, ensuring the integrity of the system.

Summary

The developed algorithm is more efficient use of data networks, and improves the processing of traffic, such as latency and packet loss.

This algorithm does not change the spanning tree protocol, but only uses the available mechanisms in existing data protocols. This allows us to apply this algorithm to existing data networks. Equipment that does not support this algorithm, will act as transit hops.
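
A compact sketch of how the switching BPDU could be built by the boundary switch and applied by the switches that receive it is given below; the frame layout, table structure and function names are illustrative assumptions rather than a definitive format.

```python
def build_switching_bpdu(mac_table, unchanged_ports, origin):
    """Boundary switch: collect the MAC addresses still reachable through the
    ports whose forwarding state did not change, to advertise them over the
    newly opened (former "blocking", now "root") port."""
    macs = [mac for mac, port in mac_table.items() if port in unchanged_ports]
    return {"origin": origin, "macs": macs}

def apply_switching_bpdu(mac_table, bpdu, receiving_port):
    """Receiving switch: repoint the listed MAC addresses at the port on which
    the switching BPDU arrived, instead of flushing the whole table."""
    for mac in bpdu["macs"]:
        mac_table[mac] = receiving_port
    return mac_table

# Boundary switch 3: ring ports "ring-cw" (old root path) and "ring-ccw"
# (port 2, just moved from "blocking" to "root"), plus local subscriber ports.
sw3_table = {"aa:01": "ring-cw", "aa:02": "local-1", "aa:03": "local-2"}
bpdu = build_switching_bpdu(sw3_table,
                            unchanged_ports={"local-1", "local-2"},
                            origin="sw3")

# Locally, MACs that were reached through the changed port now go via port 2.
sw3_table.update({m: "ring-ccw" for m, p in sw3_table.items()
                  if p == "ring-cw"})

# A neighbouring switch applies the BPDU on the port it arrived on and forwards
# it; the originator removes the BPDU after a full pass around the ring.
sw4_table = {"aa:02": "ring-cw", "aa:03": "ring-cw", "bb:01": "local-1"}
print(apply_switching_bpdu(sw4_table, bpdu, receiving_port="ring-ccw"))
# {'aa:02': 'ring-ccw', 'aa:03': 'ring-ccw', 'bb:01': 'local-1'}
```

Repointing only the listed addresses, rather than flushing the whole table as in the standard implementation, is what keeps the reconfiguration traffic small.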

Summary

The developed algorithm makes more efficient use of data networks and improves traffic quality characteristics such as latency and packet loss.

This algorithm does not change the spanning tree protocol, but only uses the mechanisms available in existing protocols. This allows the algorithm to be applied to existing data networks. Equipment that does not support the algorithm will simply act as transit hops.

References

1. Afoncev Edward, Cisco QoS, viewed 5 March 2014, http://network.xsp.ru/3_11.php.

2. Huawei company, viewed 5 March 2014, http://enterprise.huawei.com/en/products/network/switch.

3. Cisco company, Document ID: 24248, viewed 5 March 2014, http://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/24248-147.html.

4. Petr Lapukhov, Understanding MSTP, viewed 5 March 2014, http://blog.ine.com/2010/02/22/understanding-mstp.

5. IEEE Std 802.1aq™-2012. The Institute of Electrical and Electronics Engineers, Inc. Chapter 4, p. 145.
