CHAPTER 1
INTRODUCTION

1.1 Background
The term ‘network’ refers to the means to tie together various resources so that they may operate as a group, thus realizing the benefits of numbers and of communications within such a group [1, p.12]. In the context of computers, a network is a combination of interconnected equipment and programs used for moving information between points (nodes) in the network, where it may be generated, stored, or used in whatever fashion is deemed appropriate.

Kanem et al. [2] have averred that the current state of the art in the design of computer networks is based on experience: the usual approach is to evaluate a network by reference to similar systems, without basing the evaluation on any network performance data, and then to purchase the highest-performing equipment that the project funds will support. It has also been argued by Torab and Kanem [3] that the design of switched Ethernet networks is largely based on experience and heuristics, and that, in practice, the network is simply installed, with switches placed arbitrarily as the need arises, without any load analysis or load computation. There are usually no performance specifications to be met, and this approach frequently leads to expensive systems that fail to satisfy end users in terms of the speed of uploading and downloading information. This challenge of slow uploading and downloading motivated the research of Abiona, who stated in [4, p.10], with respect to the network at the Obafemi Awolowo University, Ile-Ife, Nigeria, that access to the Internet is very slow at certain times of the day and sometimes impossible; response times slow down and performance drops, leading to the frustration of users. It therefore became necessary to critically examine the network and improve access to the Internet. According to Gallo and Wilder [5], the arrival of information at the destination point in real time, at a specified time, is a critical issue in a network. It is the contention of this work that this observed problem is a common feature of most installed local area networks, as it has also been observed at Covenant University, Ota, Nigeria. According to Song [6], although a lot of work has been done, there exist few fundamental research works on the time behavior of switched Ethernet networks. In the view of Fowler and Leland [7], there are times when a network appears to be more congestion-prone than at other times, and small errors in the engineering of local area networks can incur dramatic penalties in packet loss and/or packet delay. Falaki and Sorensen [8] have averred that there has always been a need for a basic understanding of the causes of communication delays in distributed systems on a local area network (LAN).

It has also been pointed out by Elbaum and Sidi [9] that the issue of network topological design evaluation criteria is not quite clear, and that there is, therefore, a need to provide an analytic basis for designing network topologies and making network device choices. Kanem et al. [2], Bertsekas and Gallager [10, p.149], Gerd [11, p.204] and Kamal [12] have argued that one of the most important performance measures of a data network is the average delay required to deliver a packet from origin to destination, and this delay depends on the characteristics of the network [10, p.149]. According to Mann and Terplan [13, p.74], the most common network performance measures are cost, delay and reliability. Reiser [14] has averred that the two most important network performance measures are delay and maximum throughput. Cruz [15] has also argued that the parameters of interest in packet-switched networks include delay, buffer allocation, and throughput. However, Elbaum and Sidi [9] have proposed the following three topological design evaluation criteria:
Traffic-related criterion. This traffic criterion deals with traffic locality.
Delay-related criterion. The minimum average network delay reflects the average delay between all pairs of users in the network, and the maximum access time (the maximum average delay) between any pair of users.
Cost-related criterion. The equipment price and the maintenance cost can be of great significance. This cost can be normalized to be expressed in terms of cost per bit of messages across the network, and be included in any other complicated criterion.

Gerd [11, p.287] has also stated that, when conceiving any type of network, whether long-haul or local, the network designer has available a set of switches, transmission lines, repeaters, nodal equipment and terminals with known performance ratings; the design problem is to arrange this equipment in such a way that a given set of traffic requirements is met at the lowest cost. This, he stated, is known as network optimization within a given cost constraint, and the main parameters for network optimization are throughput, delay and reliability. It is apparent so far that an important criterion for evaluating a network is network delay. Delay is the elapsed time for a packet to be passed from the sender through the network to the receiver [16]. There are three common measures of network delay: total network delay, average network delay and end-to-end delay [14]. The total network delay is the sum of the total average link delay, the total average nodal delay and the total average propagation delay [13, p.88]. The average delay of the whole network is the weighted sum of the average path delays [17]. The concept of end-to-end is used in relative comparison with hop-by-hop, as data transmission seldom occurs only between two adjacent nodes, but rather via a path which may include many intermediate nodes. End-to-end delay is, therefore, the sum of the delays experienced at each hop from the source to the destination [17]; it is the delay required to deliver a packet from a source to a destination [18]. The average end-to-end delay time is the weighted combination of all end-to-end delay times.
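To make the relationship between these delay measures concrete, the following short Python sketch (our own illustration, with arbitrary assumed numbers rather than figures from the cited sources) computes the end-to-end delay of a path as the sum of its per-hop delays, and then averages over the origin-destination paths:

    # Illustrative only: per-hop delays are assumed values in milliseconds.
    # Each hop contributes transmission + propagation + processing + queuing delay.
    def hop_delay(transmission, propagation, processing, queuing):
        return transmission + propagation + processing + queuing

    def end_to_end_delay(hops):
        # End-to-end delay is the sum of the delays experienced at each hop.
        return sum(hop_delay(*h) for h in hops)

    # Two hypothetical origin-destination paths, each given as a list of hops.
    path_a = [(0.12, 0.001, 0.02, 0.30), (0.12, 0.001, 0.02, 0.10)]
    path_b = [(0.12, 0.001, 0.02, 0.05)]

    delays = [end_to_end_delay(p) for p in (path_a, path_b)]
    # In [17] the network average is weighted by the traffic on each path;
    # equal weights are used here purely for brevity.
    average_network_delay = sum(delays) / len(delays)
    print(delays, average_network_delay)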

Mann and Terplan [13, p.26] have argued that, in certain real-time applications, network designers must know the time needed to transfer data from one node of the network to another, while Cruz [15] pointed out that deterministic guarantees on network delay are useful engineering quantities. Krommenacker, Rondeau and Divoux [19] have also averred that the inter-connections between the different switches in a switched Ethernet network must be studied, as bad management of the network cabling plan can generate bottlenecks and slow down the network traffic.

1.2 Statement of the Problem
There has been a strong trend away from shared-medium Ethernet LANs (in the most recent case, those built with Ethernet hubs) in favor of switched Ethernet LAN installations [20, p.102]. But local area network designs in practice are based on heuristics and experience. In fact, in many cases, no network design is carried out at all, only network installation (network cabling and node/equipment placement) [2], [3]. According to Ferguson and Huston [16], one of the causes of poor quality of service within the Internet is localized instances of substandard network engineering that is incapable of carrying high traffic loads. There is a need for deterministic guarantees on delays when designing switched local area networks, because these delays are useful engineering quantities in integrated services networks, and there is obviously a relationship between the delay suffered in a network and the packet loss probability [15]. In the view of Bertsekas and Gallager [10, p.510], voice, video and an increasing variety of data sessions require upper bounds on delay and lower bounds on loss rate. Martin, Minet and Laurent [21] have also contended that, if the maximum delay between two nodes of a network is not known, it is impossible to provide a deterministic guarantee of the worst-case response times of packet flows in the network. Ingvaldsen, Klovning and Wilkens [22] have asserted that collaborative multimedia applications are becoming mainstream business tools; that useful work can only be performed if the subjective quality of the application is adequate; that this subjective quality is influenced by many factors, including the end-system and network performance; and that end-to-end delay has been identified as a significant parameter affecting users’ satisfaction with such applications. Trulove has averred in [23, p.142] that the LAN technologies in widespread use today (Ethernet, Fast Ethernet, FDDI and Token Ring) were not designed with the needs of real-time voice and video in mind. These technologies provide ‘best effort’ delivery of data packets and offer no guarantees about how long delivery will take; but interactive real-time voice and video communications over LANs require the delivery of a steady stream of packets with guaranteed end-to-end delay. Clark and Hamilton [24, p.13] have also reported that ‘debates rage over Ethernet performance measures’. According to these authors, network administrators focus on the question, ‘what is the average loading that should be supported on a network?’ They went on to suggest that the answer really depends upon the users’ application needs; that is, at what point do users complain? In their opinion, it is the point at which it is most inconvenient for the network administrator to do anything about it.
Therefore, this research work was motivated by the following network issues: network end-to-end delay and the capability of a network to transfer a required amount of information in a specified time. Network switches cannot simply be placed and installed in a switched Ethernet LAN without any formalism for appropriately specifying them, as Bertsekas and Gallager have argued in [10, p.339] that the speed of a network is limited by the electronic processing at its nodes. Mann and Terplan have also averred in [13, p.49] that the two factors that determine the capacity of a node are the processor speed and the amount of memory in the node. They went further to argue that nodes should be sized so that they are adequate to support current and future traffic flows; if a node’s capacity is too small, or the traffic flows are too high, the node utilization and traffic processing times will increase correspondingly, and hence the delay which a packet suffers in the network will also increase.

Network hosts also cannot continue to be added to a network indiscriminately, as Bolot [18] has argued that end-to-end delay depends on the time of day: at certain times of the day, more users are logged on to the network, leading to an increase in end-to-end delay. Mohammed et al. [25] and Forouzan [26, p.876] have also expressed the view that there is a limit on the number of hosts that can be attached to a single network, and on the size of the geographical area that a single network can serve.

How, therefore, should the appropriate number of switches for any switched Ethernet LAN be determined? How should the capacities of the switches be determined? And what is the optimum number of hosts for any network configuration, given that beyond a certain point the network end-to-end delay becomes unacceptable?

1.3 Aims and Objectives of the Research
In this research work, we seek to achieve the following aims:
1. Develop formal methodologies for the design of switched Ethernet LANs that address the problem of the overall topological design of such LANs, so that the end-to-end delay between any two nodes is always below a threshold. That is, we want to be able to provide an upper bound on the time for any packet to transit from one end node to another end node in any switched Ethernet LAN.
2. Develop a procedure with which network design engineers can generate optimum network designs in terms of installed network switches and the number of attached hosts, taking into consideration the need for upper-bounded end-to-end delays.

The objectives of this research work are to:
1. Develop a model of a packet switch with which the maximum delay for a packet to cross any N-port packet switch can be calculated;
2. Develop an algorithm that can be used to carry out the placements and specifications of the switches in any switched Ethernet LAN;
3. Characterize the bounded capacities of switched Ethernet LANs in terms of the number of hosts that can be connected;
4. Develop a general framework for the design of switched Ethernet LANs based on the achieved objectives (1), (2) and (3), culminating ultimately in the development of a software application package for the design of switched Ethernet LANs.

1.4 Research Methodology
According to Cruz [15], a communication network can be represented as the interconnection of fundamental building blocks called network elements, and he went on to propose temporal properties, including output burstiness and maximum delay, for a number of network elements. End-to-end delay depends on the path taken by a packet in transiting from a source node to a destination node [18]. By modeling the network’s internal nodes and adding some assumptions on the arrival process of packets at the nodes, one can use simple queuing formulas to estimate the delay times associated with each network node; based on the network topology, the delay times are then combined to compute the end-to-end delay times for the entire network [3]. However, modeling the traffic entering a network or network node as a stochastic process (as has largely been the case in the literature), for example as a Bernoulli or Poisson process, has some shortcomings. These shortcomings include the fact that exact analysis is often intractable for realistic models [15], [14], and that a stochastic description of arrivals gives only an estimate of the arrival of messages [27], [28]. Also, arrivals in stochastic approaches are not known to be definite; for example, the widely used Poisson arrival model for Ethernet LANs was faulted in [8], where hyper-exponential and Weibull arrivals were instead proposed on the basis of the experiments carried out in that work. Cruz [15], therefore, proposed a deterministic approach to modeling the traffic entering a network or a network node. In this modeling approach, it is assumed that the ‘entering traffic’ is ‘unknown’ but satisfies certain ‘regularity constraints’. The constraints considered have the effect of limiting the traffic traveling on any given link in the network; hence Cruz called this the ‘burstiness constraint’ and went on to use it to characterize the traffic flowing at any point in a network. The proposition, roughly speaking, is that if the traffic entering a network is not too bursty, then the traffic flowing within the network is also not too bursty. The method, therefore, consists in deriving the burstiness constraints satisfied by traffic flowing at different points in the network. Stated differently, this approach (called the network calculus approach), which was introduced by Cruz in [15] and extended in [29], only assumes that the number of bytes sent on the network links does not exceed an arrival curve value (traditionally, the leaky bucket value). As pointed out by Anurag, Manjunath and Kuri in [20, p.15], network calculus is used for the end-to-end deterministic analysis of the performance of flows in networks, and for the design of worst-case performance guarantees. The research methodology that was adopted in this work in order to achieve the research objectives, therefore, includes the following steps:

1. Extensive review of related literature.
2. A general representative model of a packet switch was obtained, using elementary components such as receive buffers, multiplexers, a constant delay element and a first-in-first-out (FIFO) queue, as defined, analyzed and characterized by Cruz in [15].
3. The network traffic arriving at a switch was modeled using the arrival curve approach (a small worked sketch of this arrival-curve bound is given after this list).
4. A tree-based model was used to determine a switched LAN’s end-to-end delays.
5. An algorithm was developed that can be used to optimally design any switched Ethernet LAN.
6. The bounded capacities of switched LANs, with respect to the number of hosts that can be connected, were determined.
7. The algorithm that was developed in (5) was validated by carrying out a real (practical) local area network design example.
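The arrival-curve idea referred to in step 3 can be illustrated with a small, self-contained Python sketch (ours, using assumed figures). Traffic obeying a leaky-bucket constraint with burst size b (sigma) and sustained rate r (rho) never delivers more than b + r*t bits in any interval of length t; for an output link of capacity C serving such an aggregate, the corresponding worst-case queuing delay bound is b/C:

    # Minimal sketch of a leaky-bucket (sigma, rho) burstiness constraint.
    # sigma_bits is the maximum burst size b; rho_bps is the sustained rate r.
    def conforms(arrivals, sigma_bits, rho_bps):
        # arrivals: list of (time_s, bits) events, sorted by time.
        # Returns True if, over every interval [s, t], the traffic that arrives
        # does not exceed sigma_bits + rho_bps * (t - s).
        times = [a[0] for a in arrivals]
        for i, s in enumerate(times):
            total = 0
            for t, bits in arrivals[i:]:
                total += bits
                if total > sigma_bits + rho_bps * (t - s):
                    return False
        return True

    sigma, rho, C = 24_000, 2_000_000, 100_000_000   # 24 kb burst, 2 Mb/sec rate, 100 Mb/sec link
    trace = [(0.000, 12_000), (0.001, 12_000), (0.010, 2_000)]
    print(conforms(trace, sigma, rho))            # True for this assumed trace
    print("delay bound (s):", sigma / C)          # b/C = 0.24 ms for these numbers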

1.5 Contributions of this Research Work to Knowledge
The following are the contributions of this research work to the advancement of knowledge:
1. A novel packet switch model and a switched (Ethernet) LAN maximum end-to-end delay determination methodology were developed and validated in this work. Although researchers have proposed some Ethernet packet switch models in the literature in efforts to solve the delay problem of switched Ethernet networks, we have found that these models do not take into consideration two factors that lead to packet delays in a switch: the simultaneous arrival of packets at more than one input port, all destined for the same output port, and the arrival of burst traffic destined for an output port. Our maximum delay packet switch model is, therefore, unique in that it takes these two factors into consideration. More importantly, our methodology (the switched Ethernet LAN maximum end-to-end delay determination methodology) is unique in that, to the best of our knowledge, researchers have not previously considered this perspective in attempts to solve the switched Ethernet LAN end-to-end delay problem.

2. A formal method for designing upper-bounded end-to-end delay switched (Ethernet) LANs, using the model and methodology developed in (1), was also developed in this work. This method will make it possible for network ‘design’ engineers to design fast-response switched (Ethernet) LANs. This is quite a unique development: with our method, the days when network ‘design’ engineers simply positioned switches of arbitrary capacities in any desired location are numbered, as switches can now be selected and positioned using an algorithm developed from clear-cut mathematical formulations.

3. This work has also shown for the first time that the maximum queuing delay of a packet switch is indeed the ratio of the maximum amount of traffic that can arrive in a burst at an output port of the switch to the capacity of the link (the data rate of the medium) attached to that port (a numerical illustration is given after this list).

4. It was also revealed in this work (and shown clearly from first principles) that the widely held notion in the literature regarding the enumeration of origin-destination host pairs for end-to-end delay computation appears to be wrong in the context of switched local area networks. We have shown, for the first time, how this enumeration should be done.

5. Generally, we have been able to provide fundamental insights into the nature and causes of end-to-end delays in switched local area networks.
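As a purely numerical illustration of the burst-to-capacity ratio in contribution (3) (our own example, with assumed figures), consider an output port whose attached link runs at 100 Mb/sec and which, in the worst case, receives a burst of seven maximum-size Ethernet frames arriving simultaneously from seven input ports:

    # Worked example of D_max = (maximum burst at the output port) / (link capacity).
    # All figures are assumed for illustration.
    frame_bits   = 1526 * 8           # maximum-size Ethernet frame, including preamble, in bits
    burst_frames = 7                  # worst case: simultaneous arrivals from 7 input ports
    link_bps     = 100_000_000        # 100 Mb/sec output link

    max_burst_bits = burst_frames * frame_bits
    d_max_seconds  = max_burst_bits / link_bps
    print(d_max_seconds)              # about 0.85 ms for these numbers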

1.6 Organization of the rest of the Thesis
The rest of the thesis is organized as follows. Chapter 2 deals with a brief review of related literature and an extensive treatment of theoretical concepts underlying this research work. The derivation of a maximum delay model of a packet switch is reported in Chapter 3. In Chapter 4, the development of a novel methodology for enumerating all the end-to-end delays of any switched local area network and of designing such networks is presented. Chapter 5 deals with the evaluation of the maximum delay model of a packet switch that was derived in Chapter 3, and the development of a switched local area network design algorithm. This chapter also reports a practical illustrative example of the switched local area network design methodology that was developed in Chapter 4. Chapter 6 completes the thesis with conclusions and recommendations.





CHAPTER 2
LITERATURE REVIEW AND RELATED THEORETICAL CONCEPTS

2.1 Introduction
The rapid establishment of standards relating to Local Area Networks (LANs), coupled with the development by major semiconductor manufacturers of inexpensive chipsets for interfacing computers to them, has resulted in LANs forming the basis of almost all commercial, research and university data communication networks. As the applications of LANs have grown, so have the demands on them in terms of throughput and reliability [30, p.308]. The literature on LANs (particularly switched Ethernet LANs) is in a state of flux. However, a common challenge that has confronted researchers for a long time is how to tackle the problem of the slow response of local area networks. Slow response means that packet flows from one host (the origin host) to another host (the destination host) take longer than is comfortable at certain times of the day. Switched networks (for example, switched Ethernet LANs) are a fairly recent development by the computer networking community aimed at solving this slow-response challenge. While the introduction of switched networks has reduced this slow-response (and hence long-delay) problem considerably, it has not completely eliminated it. This has elicited research into switched networks in efforts to eliminate the problem entirely. Such research is important at the present time because of the deployment, and the increased necessity to deploy, real-time applications on these networks. In the next and succeeding sections, a few of these research works, and the theoretical concepts that are important for an understanding of the problem addressed in this research and of the solution approaches adopted, are discussed.

2.2 Some works on Switched Local Area Networks
Kanem et al. [2] described a methodology, extended in Kanem and Torab [3], for the design and analysis of switched networks in control system environments. But the method is based on expected (average) information flow rates between end nodes and an M/D/1 queuing system model of a packet switch. As we shall indicate in this work, researchers (for example [15], [20]) have suggested a move from stochastic approaches to deterministic approaches in the analysis and estimation of traffic arrivals and flows in communication networks, because of the inherent advantages of deterministic approaches over stochastic ones.

Georges, Divoux and Rondeau [28] proposed and evaluated three switch architecture models using the elementary components proposed and analyzed by Cruz in [15]. According to that paper, modeling an Ethernet packet switch requires a good knowledge of the internal technologies of such switches; but we find the three proposals (two demultiplexers at the input connected by channels to two multiplexers at the output; one multiplexer at the input connected by a channel to one demultiplexer at the output; and one multiplexer at the input connected by a FIFO queue to one demultiplexer at the output) not descriptive enough of the sub-functions that take place inside a packet switch. Georges, Divoux and Rondeau [27] reported a study of the performance of switched Ethernet networks for connecting plant-level devices in an industrial environment, with respect to support for real-time communications. This work used the network calculus approach to derive maximum end-to-end delay expressions for switched Ethernet networks. But the system of equations that resulted from applying the methodology described in the paper to a one-switch, three-host network is so large and complex that the paper itself states that ‘the equation system which describes such a small network shows that for a more complex architecture, the dimension of the system will increase roughly proportionally.’ In fact, the system of equations for increasingly complex networks will be increasingly incomprehensible, and the practical utility of the methodology presented in that work appears doubtful. The complexity of the resulting system of equations, even for a one-switch, three-host network, appears to result from a wrong application of the burstiness evolution concept enunciated by Cruz in [29].

In Georges, Krommenacker and Divoux [31], a method based on a genetic algorithm for designing switched architectures was described, together with a network calculus based method for evaluating (in terms of maximum end-to-end delay) the architectures produced by the genetic algorithm. But the challenge with the proposed genetic algorithm is its utility for practical engineering work. Moreover, as we shall show in this work, the origin-destination traffic matrix approach used in the paper, in which all hosts to be connected to the switched network are enumerated for analysis, appears to be wrong. Krommenacker, Rondeau and Divoux [19] presented a spectral algorithm for defining the cabling plan of switched Ethernet networks. The problem with the method described in that paper is, likewise, its practical engineering utility.

Jasperneite and Ifak [32] studied the performance of switched Ethernet networks at the control level within a factory communications system, with a view to using such networks to support real-time communications. This is an ongoing study and gave no practical engineering implications and/or applications. Kakanakov et al. [33] presented a simulation scenario for the performance evaluation of switched Ethernet as a communication infrastructure in factory control system networks. This is also an ongoing study, and it too gave no practical engineering implications and/or applications. Costa, Netto and Pereira [34] aimed to evaluate, in a time-dependent environment, the utilization of switched Ethernets and of the traffic differentiation mechanisms introduced in the IEEE 802.1D/Q standards. The paper reported results leading it to conclude that the aggregate use of switched networks and traffic differentiation mechanisms represents a promising technology for real-time systems. A realistic delay estimation method was described in the paper, but it did not consider the nature of the end-to-end delays of switched LANs, namely that there is a particular number of origin-destination pairs that must be worked out, as we shall show in this work; it merely considered the estimation of the maximum end-to-end delay of a single origin-destination path.

It can be seen that works on switched Ethernet networks in the literature have mostly been carried out in the context of industrial control network environments, because of the inherent necessity for real-time communication in these environments in order to meet the delay constraints of the applications that are usually deployed. But as pointed out in Chapter 1 of this work, the need for networks that meet the delay requirements of applications is not limited to industrial environments. Our methodology, therefore, takes a general perspective of switched Ethernet local area networks; that is, our method can be applied to switched Ethernet networks notwithstanding the environment of deployment. Moreover, there do not yet seem to be methods in the literature with tangible practical utility; this is one of the challenges that our work sought to overcome.

2.3 Data Communication Networks, Switched Ethernet Local Area Networks and the Network Delay Problem
A data communication network has been defined as a set of communication links for interconnecting a collection of terminals, computers, telephones, printers, or other types of data-communication or data-handling devices, and it resulted from the convergence of two technologies: computers and telecommunications [11, p.2]. Generally, any data communication network can be classified into one of three categories: a Local Area Network (LAN), which can span a single building or campus; a Metropolitan Area Network (MAN), which can span a single city; and a Wide Area Network (WAN), which can span sites in multiple cities, countries, or continents [35, p.201]. LANs have also been categorized as networks covering on the order of a square kilometre or less [10, p.4]. Local area networks made a dramatic entry onto the communications scene in the late 1970s and early 1980s [11, p.2], [10, p.13], and their rapid rise and popularity were a result of the dramatic advances in integrated circuit technology that allowed a small computer chip in the 1980s to have the same processing capability as a room-sized computer of the 1950s; this allowed computers to become smaller and less expensive, while they simultaneously became more powerful and versatile [11, p.2]. A LAN operates at the bottom two layers of the Open Systems Interconnection (OSI) model, the physical layer and the data-link layer [11, p.55], and is shown in relation to the IEEE family of protocols in Figure 2.1.

The manner in which the nodes of a network are geometrically arranged and connected is known as the topology of the network, and local area networks are commonly characterized in terms of their topology [11, p.146]. The topology of a network defines the logical and/or physical configuration of the network components [10, p.50]; it is a graphical description of the arrangement of the different network components and their interconnections [3].

Figure 2.1 IEEE family of protocols with respect to ISO OSI model layers 1 and 2
Adapted from: [11, p.55]







The basic LAN topologies are the bus, ring and star topologies [11, p.146], together with the mesh topology [1, p.26]. A LAN topology that is now widely deployed is the tree topology, which is a hybrid of the star and bus topologies [13, p.116]. These four types of topologies are illustrated in Figure 2.2.

A family of standards for LANs was developed by the IEEE to enable equipment from a variety of manufacturers to interface with one another; this is called the IEEE 802 standard family. The standard defines three types of media-access technologies and the associated physical media, which can be used for a wide range of particular applications or system objectives [11, p.54]. The standards that relate to baseband LANs are the IEEE 802.3 standard for baseband CSMA/CD bus LANs and the IEEE 802.5 standard for token ring local area networks. Several variations of IEEE 802.3 now exist. The original implementation of the IEEE 802.3 standard is the Ethernet system. It operates at 10 Mb/sec and offers a wide range of application variations. This original Ethernet, referred to as Thicknet, is also known as the IEEE 802.3 Type 10-Base-5 standard. A more limited, abbreviated version of the original Ethernet is known as Thinnet or Cheapernet, or the IEEE 802.3 Type 10-Base-2 standard. Thinnet also operates at 10 Mb/sec, but uses a thinner, less expensive coaxial cable for interconnecting stations such as personal computers and workstations. A third variation originated from StarLAN, which was developed by AT&T and uses unshielded twisted-pair cable, which is often already installed in office buildings for telephone lines [11, p.364], [36, p.220]; its first version was formally known as IEEE 802.3 Type 10-Base-T. There have been other versions of twisted-pair Ethernet: Fast Ethernet (100-Base-T or IEEE 802.3u) and Gigabit Ethernet (1000-Base-T or IEEE 802.3z). Instead of a shared medium, the twisted-pair Ethernet wiring scheme uses an electronic device known as a hub in place of a shared cable. Electronic components in the hub emulate a physical cable, making the entire system operate like a conventional Ethernet, with collisions now taking place inside the hub rather than on the connecting cables [35, p.149].

Ethernet, in its original implementation, is a branching broadcast communication system for carrying data packets among locally distributed computing stations. The Thicknet, Thinnet and hub-based twisted-pair Ethernet variants are all shared-medium networks [6]. That is, traditional Ethernet (which these three types of Ethernet represent), in which all hosts compete for the same bandwidth, is called shared Ethernet.

The use of the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, which controls the access of all the interconnected stations to the common shared medium, results in a non-deterministic access delay, since after every collision a station waits a random delay before it retransmits [18]. The probability of collision depends on the number of stations in a collision domain and on the network load [6], [27]. Moreover, the number of stations attached to a shared-medium Ethernet LAN cannot be increased indefinitely, as eventually the traffic generated by the stations will approach the limit of the shared transmission medium [37, p.433]. One traditional way to decrease the collision probability is to reduce the size of the collision domain by forming micro-segments separated by bridges [6]. This is where switches come in, as functionally, switches can be considered multi-port bridges [6], [38].

A switched Ethernet is an Ethernet/802.3 LAN that uses switches to connect individual nodes or segments. On switched Ethernet networks where nodes are directly connected to switches with full-duplex links, the communications become point-to-point. That is, a switched Ethernet/802.3 LAN isolates network traffic between sending and receiving nodes. In this configuration, switches break up collision domains into small groups of devices, effectively reducing the number of collisions [6], [27]. Furthermore, with micro-segmentation and full-duplex links, each device is isolated in its own segment in full-duplex mode and has the entire port throughput for its own use; collisions are, therefore, eliminated [32]. The CSMA/CD protocol does not, therefore, play any role in switched Ethernet networks [20, p.102]. The collision problem is thus shifted to congestion in switches [2], [6], [27], because switched Ethernet transforms the traditional Ethernet/802.3 LAN from a broadcast technology to a point-to-point technology. The congestion in such switches is a function of their loading (the number of hosts connected) [27]; in fact, loading increases as more people log on to a network [8], and congestion occurs when the users of the network collectively demand more resources than the network can offer [10, p.27]. The performance of switched Ethernet networks should therefore be evaluated by analyzing the congestion in switches [3], [27]. In other words, the delay performance of switched Ethernet local area networks can be evaluated by analyzing the congestion in switches. This is one of the research directions pursued in this work: we sought to establish deterministic bounds for the end-to-end delays that are inherent in switched Ethernet local area networks by evaluating the congestion in switches. Trulove [23, p.143] made this point very succinctly when he stated that ‘LAN switching has done much to overcome the limitations of shared LANs’. However, despite the vast increase in bandwidth provision per user that this represents over a shared-LAN scenario, there is still contention in the network, leading to unacceptable delay characteristics. For example, multiple users connected to a switch may demand file transfers from several servers connected via 100 Mb/sec Fast Ethernet to the backbone. Each server may send a burst of packets that temporarily overwhelms the Fast Ethernet uplink to the wiring closet. A queue will form in the backbone switch that is driving this link, and any voice or video packets being sent to the same wiring closet will have to wait their turn behind the data packets in this queue. The resultant delays will compromise the perceived quality of the voice or video transmission.

2.4 Delays in Computer Networks
One fundamental characteristic of a packet-switched network is the delay required to deliver a packet from a source to a destination [18]. Each packet generated by a source is routed to the destination via a sequence of intermediate nodes; the end-to-end delay is thus the sum of the delays experienced at each hop on the way to the destination [18]. Each such delay in turn consists of two components [17], [18], [10, p.150]:
a fixed component which includes:
the transmission delay at the node,
the propagation delay on the link to the next node,
a variable component which includes:
the processing delay at the node,
the queuing delay at the node.
Transmission delay is the time required to transmit a packet [11, p.110]; it is the time between when the first bit and the last bit of the packet are transmitted [10, p.150]. For example, a 100 kb/sec transmitter needs 0.1 seconds to send out a 10,000-bit message block [11, p.110]. For an Ethernet packet switch, the transmission delay is a function of the bit rates of the output ports (and hence of the attached lines).
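The arithmetic is straightforward; the short sketch below simply restates the example above and adds, for comparison, an assumed 1518-byte frame on a Fast Ethernet port:

    # Transmission delay = packet length (bits) / line rate (bits per second).
    def transmission_delay(bits, rate_bps):
        return bits / rate_bps

    print(transmission_delay(10_000, 100_000))        # 0.1 s, the example from [11, p.110]
    print(transmission_delay(1518 * 8, 100_000_000))  # about 0.12 ms at 100 Mb/sec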

Propagation delay is the time between when the last bit is transmitted at the head node of a link and when that bit is received at the tail node [10, p.150]; it is the time needed for a transmitted bit to reach the destination station [11, p.110]. This time depends on the physical distance between transmitter and receiver and on the physical characteristics of the link, and is independent of the traffic carried by the link [10, p.150], [11, p.110].

Processing delay is the time required for nodal equipment to perform the necessary processing and switching [35, p.244] of data (packets, in packet-switched networks) at a node [11, p.110], [10, p.150]. Included here are error detection, address recognition, and the transfer of the packet to the output queue [11, p.110]. The processing delay is independent of the amount of traffic arriving at a node if computational power is not a limiting resource; otherwise, in queuing models of nodes, a separate processing queue must be included [10, p.150].

Queuing delay is the time between when a packet is assigned to a queue for transmission and when it starts being transmitted; during this time, the packet waits while other packets in the transmission queue are transmitted [10, p.150]. The queuing delay has the most adverse effect on packet delay in a switched network. According to Song [6], in a fully switched Ethernet there is only one piece of equipment (station or switch) per switch port; where wire-speed, full-duplex switches are used, the end-to-end delay can be minimized by reducing message buffering (queuing) as far as possible, since any frame traveling through the switches on its path from origin to destination without experiencing any buffering (queuing) has the minimum end-to-end delay. Queuing delay builds up at the output port of a switch because the port may receive packets from several input ports; that is, packets from several input ports that arrive simultaneously may be destined for the same output port [20, p.121]. If the input and output links are of equal speed, and if only one input link feeds an output link, then a packet arriving at the input will never find another packet in service and hence will not experience queuing delay. Message buffering occurs whenever the output port cannot forward all input messages at once, and this corresponds to burst traffic arrival; the analysis of buffering delay therefore depends on a knowledge of the input traffic patterns [6], [40] (a small simulation sketch illustrating this build-up follows the list below). According to Anurag, Manjunath and Kuri in [20, p.538], the queuing delay and the loss probabilities in the input or output queues of input-queued or output-queued switches are important performance measures for a switch and are functions of:
switching capacity,
packet buffer sizes, and
the packet arrival process.
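The following small simulation sketch (ours, with assumed arrivals) illustrates the build-up referred to above: when packets from several input ports arrive at the same instant for the same output port, they must be serialized onto one link, and each waits behind the ones ahead of it:

    # FIFO output-port queue: simultaneous arrivals must be serialized onto one link.
    def queuing_delays(arrivals, link_bps):
        # arrivals: list of (arrival_time_s, size_bits), assumed sorted by time.
        # Returns the queuing delay (waiting time before transmission) of each packet.
        delays, link_free_at = [], 0.0
        for t, bits in arrivals:
            start = max(t, link_free_at)      # wait if the link is still busy
            delays.append(start - t)
            link_free_at = start + bits / link_bps
        return delays

    # Three 1500-byte packets from three input ports arrive at t = 0 for the same output port.
    burst = [(0.0, 1500 * 8)] * 3
    print(queuing_delays(burst, 100_000_000))  # [0.0, 0.00012, 0.00024] seconds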

Two other types of delay identified in [11, p.240] are the waiting time in the buffers associated with the source and destination stations and the processing delays at these stations; this was called thinking time in [32]. But these are usually not part of end-to-end delay (see the previous definition of end-to-end delay), since delays associated with the host stations can be minimized simply by using hosts of high buffer and processing capacities. Moreover, the capacities of host stations are not among the factors taken into consideration when engineering local area networks. As argued by Costa, Netto and Pereira in [34], the message processing time consumed in the source and destination hosts is not included in the calculation of end-to-end delay because these times are not directly related to the physical conditions of the network. Access delays occur when a number of hosts share a medium and hence may have to wait their turn to use the medium [35, p.244]; but this delay does not apply to switched networks.

While propagation and switching delays are often negligible, queuing delay is not [10, p.15], [39], [27]; propagation delay is, in general, small compared to queuing and transmission delays [13, p.90]. Inter-nodal propagation delay is negligible for local area networks [13, p.247], [11, p.110]; propagation delays are neglected in delay computations even in wide area networks because of their negligibility [10, p.15]. We therefore neglected propagation delays in our end-to-end delay computations in this work.

2.4.1 End-To-End Delay in Switched Ethernet Local Area Networks
Ethernet was originally designed to function as a physical bus, but nowadays almost all Ethernet installations consist of a physical star. Tree local area networks can be seen as multi-level star local area networks [11, p.372], [30, p.254]. A tree is a connected graph that has no cycles [41, p.43], [42, p.131], while a graph is a mathematical structure consisting of two finite sets V and E. The elements of V are called the vertices (or nodes) and the elements of E are called edges, with each edge having a set of one or two vertices associated with it, which are called its end points [3], [41, p.2], [42, p.123]. In the context of switched computer networks, a graph consists of transmission lines (links) interconnected by nodes (switches) [2], [3], [37, p.234]. The operational part of a switched Ethernet network, and a large number of Asynchronous Transfer Mode (ATM) network configurations, are examples of networks with a tree topology, since in a tree topology there is a single path between every pair of nodes [13, p.50]. Tree networks, therefore, are networks with unique communication paths between any two nodes, with packets from source nodes traveling along predetermined fixed routes to reach the destination nodes [3]. The throughput (and hence the delay) of an Ethernet LAN is a function of the workload [38], and the workload depends on the number of stations connected to the network [6]. The end-to-end delays of switched Ethernet LANs depend on the number of levels of switches below the root node (switch) and on the number of end nodes (hosts) [28]. Falaki and Sorensen [8] and Abiona [4] have argued that the loading on a network increases as the number of people logged on to the network increases, and this leads to an increase in end-to-end delay [2], [3], [28]. Also, Jasperneite and Ifak [32] have listed the system parameters that affect the real-time capabilities (that is, the ability to operate within a specified end-to-end delay limit) of switched Ethernet networks as including, among others, the following:
Number of stations, N,
The stations communication profiles,
The number of switches, K,
Link capacity, C (10, 100, 1000, 10,000) Mb/sec,
Packet scheduling strategy of the transit system (switches) and the stations,
The thinking time (TTH), within stations (the thinking time comprises the processing time for communications request within the stations).
The traffic accepted into a network will experience an average delay per packet that will depend on the routes taken by the packets [10, p.366]. The minimum average network delay is the average delay between all pairs of users in the network [9]. We will use this idea to calculate the maximum average network delay in this work.
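The tree property relied upon in this work can be pictured with a short Python sketch (our own, for a hypothetical two-switch, four-host LAN): because a tree contains no cycles, there is exactly one path between any pair of hosts, so every origin-destination path, and hence every end-to-end delay, can be enumerated directly:

    # Hypothetical tree-topology LAN: root switch S1, second-level switch S2, four hosts.
    tree = {
        "S1": ["S2", "H1", "H2"],
        "S2": ["S1", "H3", "H4"],
        "H1": ["S1"], "H2": ["S1"], "H3": ["S2"], "H4": ["S2"],
    }
    hosts = ["H1", "H2", "H3", "H4"]

    def unique_path(src, dst, prev=None):
        # Depth-first search; in a tree (no cycles) this finds the single src-to-dst path.
        if src == dst:
            return [src]
        for nxt in tree[src]:
            if nxt != prev:
                rest = unique_path(nxt, dst, src)
                if rest:
                    return [src] + rest
        return None

    # Every ordered origin-destination pair of hosts: n*(n-1) = 12 pairs for n = 4.
    pairs = [(a, b) for a in hosts for b in hosts if a != b]
    for a, b in pairs:
        path = unique_path(a, b)
        switches_crossed = [node for node in path if node.startswith("S")]
        print(a, "->", b, "via", switches_crossed)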

2.5 Concept of Communication Session and Flows in Computer Networks
According to Cruz [29], a communication session consists of data traffic which originates at some given node, exits at some other given node, and travels along some fixed route between those nodes. Alberto and Widjaja in [37, p.747] defined a session as an association involving the exchange of data between two or more Internet end-systems. Message exchanges between two users usually occur as a sequence within some larger transaction, and such a message sequence (or, equivalently, the larger transaction) is called a session [10, p.11]. A message, on the other hand, is, from the standpoint of the network users, a single unit of communication; if the recipient receives only part of the message, it is usually worthless [10, p.10]. For example, in an on-line reservation system the message may include the flight number, names and other information. But because transmitting very long messages as units in a network is harmful in several ways, including challenges that have to do with delay, buffer management and congestion control, messages represented as long strings of bits are usually broken into shorter bit strings called packets (defined in [11, p.43] as a group of bits that includes data bits plus source and destination addresses), which are then transmitted through the network as individual entities and reassembled into messages at the destination [10, p.10]. A traffic stream, therefore, consists of a collection of packets that can be of variable length [15].

Bertsekas and Gallager [10, p.12], therefore, contend that a network exists to provide communication for a varying set of sessions, and that within each session, messages of some random length distribution arrive at random times according to some random process. They further listed the following as the gross characteristics of sessions:
Message arrival rate and variability of arrivals; typical arrival rates for sessions vary from zero to more than enough to saturate the network. Simple models for the variability of arrivals include Poisson arrivals, deterministic arrivals, and uniformly distributed arrivals.
Session holding time; sometimes (as with electronic mail), a session is initiated for a single message, while other sessions may last for a working day or even permanently [20, p.45].
Expected message length and distribution; typical message lengths vary roughly from a few bits to a few gigabits, with long file and graphics transfers at the high end. Simple models for the length distribution include an exponentially decaying probability density, a uniform probability density between some minimum and maximum, and a fixed length.
Allowable delay; there may be some maximum allowable delay, and delay is sometimes of interest on a message basis, and sometimes in the flow model, on a bit basis.
Reliability; for some applications, all messages must be delivered error free.
Message and Packet ordering; the packets within a message must either be maintained in the correct order going through the network, or restored to the correct order at some point.

With respect to traffic modeling for the purpose of determining end-to-end packet delay, items 1 to 4 are usually the main issues for consideration. Cruz in [15], [29] referred to a communication session as a flow. In computer communication networks, flows can represent either the total amount of information, or the rate of information flow, between any two nodes of a network [2], [3]. Specifically, in a LAN, routers and switches direct traffic by forwarding data packets between nodes (hosts) according to a routing scheme; edge nodes (hosts) connected directly to routers or switches are called origin or destination nodes (hosts) [43]. An edge node (host) is usually both an origin and a destination, depending on the direction of the traffic; the set of traffic between all pairs of origins and destinations is conventionally called a traffic matrix [43], [9].
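A traffic matrix is easy to picture with a tiny sketch (ours, with made-up numbers): for n edge hosts there are n*(n-1) ordered origin-destination entries, one for each direction of traffic between a pair of hosts:

    # Hypothetical traffic matrix for four hosts of a small LAN (values in Mb/sec).
    # traffic_matrix[o][d] is the traffic offered from origin host o to destination host d.
    hosts = ["H1", "H2", "H3", "H4"]
    traffic_matrix = {o: {d: 0.0 for d in hosts if d != o} for o in hosts}
    traffic_matrix["H1"]["H3"] = 4.0       # e.g. H1 offers 4 Mb/sec towards H3
    traffic_matrix["H3"]["H1"] = 1.0       # the reverse direction is a separate entry

    n = len(hosts)
    print(n * (n - 1), sum(len(row) for row in traffic_matrix.values()))   # 12 12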

2.6 Switching in Computer Networks
A switch can be defined as a device that sits at the junction of two or more links and moves the flow unit between them to allow the sharing of these links among a large number of users; a switch makes it possible to replace transmission links with a device that can switch flow between the links [20, p.34]. In summary, a switch forwards, or switches, flows. Other functions of a switch may include the exchange of information about the network and switch conditions, and the calculation of routes to different destinations in the network [20, p.35]. Figure 2.3 shows a block diagram view of a switch.

In a LAN, switches direct traffic by forwarding data packets between nodes according to a routing scheme [43]. The concept of switching, or Medium Access Control (MAC) bridging, was introduced in the IEEE 802.1 standard in 1993 and expanded in 1998 by the definition of additional capabilities in bridged LANs; the aim is to provide additional capabilities so as to support the transmission of time-critical information in a LAN environment [44], [32]. A switched network, therefore, consists of a series of inter-linked nodes called switches; switches are devices capable of creating temporary connections between two or more devices linked to the switch [26, p.213]. Switches operate in the first three layers of the OSI reference model. While a local area network switch is essentially a layer 2 entity, there are now layer 3 switches that function in the network layer (they perform the functions of routers outside the 802 network cloud). Figure 2.4 illustrates the placement of switches in the context of the OSI reference model.

Two approaches exist for transmitting traffic for various sessions within a subnet: circuit switching and store-and-forward switching [10, p.14]. There are also two different types of switches with respect to communication networks: circuit switches and packet switches. While circuit switches are used in circuit multiplexed networks, packet switches are used in packet multiplexed networks [20, p.34], [37, p.234]. In circuit switching, a path is created from the transmitting node through the network to the destination node for the duration of the communication session, but circuit switching is rarely used in data networks [10, p.14]. Packet switching offers better bandwidth sharing and is less costly to implement than circuit switching [17].

A packet is a variable-length block of information up to some specified maximum size [37, p.14]; it is a self-contained parcel of data sent across a computer network, with each packet containing a header that identifies the sender and recipient, and a payload area that contains the data being sent [35, p.666]. User messages that do not fit into a single packet are segmented and transmitted using multiple packets, which are transferred from packet switch to packet switch until they are delivered at the destination [37, p.15]. A packet switch performs essentially two main functions: routing and forwarding [37, p.511]. Packet switching, therefore, is an offshoot of message switching, in which an entire message hops from node to node; at each node, the entire message is received, inspected for errors, and temporarily stored in secondary storage until a link to the next node is available [11, p.114], [10, p.16]. Both are called store-and-forward switching, in which no communication path is created for a session [11, p.114], [10, p.16]. Rather, when a packet (or message) arrives at a switching node on its path to the destination node, it waits in a queue for its turn to be transmitted on the next link in its path (usually, a packet or message is transmitted on the next link using the full transmission rate of the link) [10, p.16]. Packet switching essentially overcomes the long transmission delays inherent in transmitting entire messages from hop to hop [11, p.115] and was pioneered by the ARPANET (Advanced Research Projects Agency Network) experiment [14].
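The advantage of packet switching over message switching alluded to here can be seen with a small back-of-the-envelope sketch (ours, with assumed figures): a large message sent as a single unit is retransmitted in full at every store-and-forward hop, whereas the same message broken into packets is pipelined through the hops:

    # Store-and-forward delay over several hops at link rate R
    # (propagation and processing delays ignored; all figures assumed).
    def message_switching_delay(message_bits, hops, rate_bps):
        # The whole message is retransmitted at every hop.
        return hops * message_bits / rate_bps

    def packet_switching_delay(message_bits, packet_bits, hops, rate_bps):
        # The first packet crosses all hops; the remaining packets follow,
        # one packet transmission time apart (the last, possibly partial,
        # packet is treated as full-size here for simplicity).
        n_packets = -(-message_bits // packet_bits)       # ceiling division
        t_packet = packet_bits / rate_bps
        return hops * t_packet + (n_packets - 1) * t_packet

    bits, pkt, hops, rate = 8_000_000, 12_000, 3, 100_000_000   # 1 MB message, 1500-byte packets
    print(message_switching_delay(bits, hops, rate))             # 0.24 s
    print(packet_switching_delay(bits, pkt, hops, rate))         # about 0.08 s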

Virtual circuit switching (routing) is store-and-forward switching in which a particular path is set up when a session is initiated and maintained during the life of the session. This is like circuit switching in the sense of using a fixed path, but it is virtual in the sense that the capacity of each link is shared by the sessions using that link on a demand basis, rather than by fixed allocations [10, p.16]. Dynamic routing (or datagram routing) is store-and-forward switching in which each packet finds its own path through the network according to the current information available at the nodes visited; virtual circuit routing is generally used in practice in data networks [10, p.17].
Reiser in [14] put the packet-switching concept more succinctly when he averred that the basic packet-switching protocol entails the following:
messages are broken into packets,
to each packet is added a header which contains, among other information, the destination address,
at each intermediate node, a table look-up is made which yields the address of the link next on the packet’s route, and
at the destination, the message is reassembled and routed to the receiving process.

Routes are defined by entries in the nodes’ routing tables. Protocols differ in the way these tables are maintained. The simplest case is one of fixed routes, with the possibility of back-up routes to be used in case of link or node failures. More elaborate schemes try to adapt routes to changes in the traffic pattern, with the optimization of some cost measure in mind; a well-known example of an adaptive protocol is the ARPANET routing algorithm [14]. The Ethernet switch, like the router, the bridge, and the cell switch in ATM networks, is a packet switch [20, p.35], [37, p.433]. A packet-switching network, therefore, is any communication network that accepts and delivers individual packets of information [35, p.666]. Therefore, switched Ethernet networks have the following attributes:
they are switched networks,
they have collision-free communication links,
they operate in packet-switched mode,
they have a fixed routing strategy (because of the spanning tree algorithm that is employed in these networks).

2.6.1 Classification of Packet Switches according to Switching Structure (Switching Fabric)
To model a packet switch, the switching structure (fabric) implemented in the switch must be known and reflected in the model. The switching fabric of a switch is the element of the switch which controls the port to which each packet is forwarded [20, p.596].
Common elementary switching structures (fabrics) that can be used to build small- and medium-capacity switches having a small number of ports are the shared-medium (single bus) switching fabric, the shared-memory switching fabric, and the crossbar switching fabric [20, p.597], [27], [6]. These switching fabrics result in shared-medium switches, shared-memory switches and crossbar switches, respectively. A brief description of these three types of switches (explained in [20, pp.597-599]) is now presented, so that the reason for the choice of the switching fabric adopted in this work will be clear.

i. The Shared-Medium Switches
This type of switch has a switching fabric that is based on a broadcast bus (much like the bus in bus-based Ethernet LANs, except that the bus spans a very small area, usually a small chip or at most the backplane of the switching system). This is illustrated in Figure 2.5. The input interfaces write to and read from the bus. At any time, only one device can write to the bus; hence, bus control logic is needed to arbitrate access to the bus.

The input interface extracts the packet from the input link, performs a route look-up (either through the forwarding table stored in its cache or by consulting a central processor), inserts a header on the packet to identify its output port and service class, and then transmits the packet on the shared medium. Only the target output(s) read the packet from the bus and place it on the output queue. A shared-medium switch is, therefore, an output-queued switch, with all the attendant advantages and limitations. According to Anurag, Manjunath and Kuri in [20, p.599], a large number of low-capacity packet switches in the Internet are based on the shared-medium switch built over the backplane bus of a computer. Multicasting and broadcasting are very straightforward in this switch.

The transfer rate on the bus must be greater than the sum of the input link rates (a higher sum of input link rates implies a wider bus or a larger number of bits transferred in parallel), which is difficult to implement and is, therefore, a disadvantage [20, p.599]. The shared-medium switch also requires that the maximum memory transfer rate be at least equal to the sum of the transmission rates of the input links and the transmission rates of the corresponding outputs.

ii. The Crossbar Switches
These are also known as space-division switches. An N×N crossbar has N² cross-points at the junctions of the input and output lines, and each junction has a cross-point switch. A 4×4 crossbar switch is shown in Figure 2.6. If there is an output conflict in a crossbar packet switch, only one of the packets is transferred to the destination. Thus, the basic crossbar switch is an input-queued switch, with queues maintained at the inputs and the cross-points activated such that, at any time, one output is receiving packets from only one input. It is not necessary, however, that an input be connected to only one output at any time: depending on the electrical characteristics of the input interface, up to N outputs can be connected to an input at the same time, so performing a multicast or broadcast is straightforward in a crossbar switch.

iii. The Shared-Memory Switches
The shared-memory switching fabric is shown in Figure 2.7. In its most basic form, it consists of a dual-ported memory: a write port for writing by the input interfaces and a read port for reading by the output interfaces. The input interface extracts the packet from the input link and determines the output port for the packet by consulting a forwarding table. This information is used by the memory controller to control the location at which the packet is enqueued in the shared memory. The memory controller also determines the locations from which the output interfaces read their packets. Internally, the shared memory is organized into N separate queues, one for each output port. It is not necessary that the buffer for an output queue be made up of contiguous locations.

The following are two important attributes of shared-memory switching fabrics.
The transfer rate of the memory should be at least twice the sum of the input line rates.

[Figure 2.6: A 4×4 crossbar switch]
The memory controller should be able to process N input packets in one packet arrival time to determine their destinations and hence their storage location in memory.

It should be noted that while in a shared-medium switch all the output queues are usually separate, in a shared-memory switch this need not be the case; that is, the total memory in the switch need not be strictly partitioned among the N outputs, and the allocation is done dynamically [6]. According to Song [6], the shared-memory architecture is based on rapid simultaneous multiple access by all ports: a packet entering the switch is stored in memory, and packet forwarding is performed by an ASIC (Application Specific Integrated Circuit) engine which looks up the destination MAC address in the forwarding table and sends the packet to the appropriate output port. Output buffering is used instead of input buffering, which avoids HOL (head-of-line) blocking. Output overflow is also minimized by shared-memory queuing, since buffer space is allocated dynamically; in fact, all output buffers share the same global memory, thus reducing buffer overflow compared with per-port queuing [6]. The shared-memory switching fabric is the one most commonly implemented in the small packet switches used in local area networks [27], [28]. We have, therefore, assumed a shared-memory switching fabric in our maximum-delay packet switch model.
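As a concrete, purely illustrative companion to this description (and not part of the cited works), the short Python sketch below models a shared-memory, output-queued switch in which all per-port output queues draw from one dynamically shared buffer pool; the port count, packet sizes and pool size are assumed values.

from collections import deque

class SharedMemorySwitch:
    """Toy output-queued switch with one buffer pool shared by all output ports."""
    def __init__(self, num_ports, buffer_bytes):
        self.queues = [deque() for _ in range(num_ports)]  # one logical queue per output port
        self.buffer_bytes = buffer_bytes                    # total shared memory
        self.used_bytes = 0

    def enqueue(self, out_port, packet_bytes):
        # Any output queue may grow as long as the *shared* pool has room,
        # which is the property that reduces overflow compared with per-port buffers.
        if self.used_bytes + packet_bytes > self.buffer_bytes:
            return False                                    # packet dropped (pool exhausted)
        self.queues[out_port].append(packet_bytes)
        self.used_bytes += packet_bytes
        return True

    def dequeue(self, out_port):
        if self.queues[out_port]:
            pkt = self.queues[out_port].popleft()
            self.used_bytes -= pkt
            return pkt
        return None

# Example: a burst destined for one port can use almost the whole pool.
sw = SharedMemorySwitch(num_ports=4, buffer_bytes=10_000)
accepted = sum(sw.enqueue(0, 1500) for _ in range(8))      # 8 x 1500 B = 12 000 B > pool
print(accepted)                                             # -> 6 packets fit, 2 dropped

Because the pool is shared, a single bursty output can temporarily use most of the memory, which is exactly the behaviour attributed to shared-memory queuing above.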

2.6.2 Packets/Frames Forwarding Methods in Switches
There are four packet forwarding methods that a switch can use: store-and-forward, cut-through, fragment-free, and adaptive switching [6]. In store-and-forward switching, the switch buffers each frame and typically performs a checksum on it before forwarding it; in other words, it waits until the entire packet has been received before processing it [20, p.35]. A cut-through switch reads only up to the frame's hardware address before starting to forward it. There is no error checking with this method, and transmission on the output port can start before the entire packet has been received on the input port. Cut-through switches have very small latency, but they can forward malformed packets, because the CRC (Cyclic Redundancy Check) can only be verified after the frame has already been forwarded [32]. The advantages of cut-through switching are limited, and it is rarely implemented in practice [20, p.35].

[Figure 2.7: The shared-memory switching fabric]
The fragment-free method of forwarding packets attempts to retain the benefits of both store-and-forward and cut-through switching: the switch forwards a frame only after its first 64 bytes (the window within which collision fragments occur) have been received, so that damaged fragments are not propagated to the destination. Adaptive switching is a method of automatically switching between the other three modes.
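To make the latency difference between these forwarding methods concrete, the following illustrative Python calculation (the frame size and link rate are assumed values, not drawn from the references) compares the per-hop forwarding delay of store-and-forward and cut-through switching.

# Illustrative comparison (not from the thesis): per-hop forwarding latency of
# store-and-forward vs cut-through for an assumed frame and link speed.
frame_bits   = 1518 * 8        # maximum untagged Ethernet frame, in bits
header_bits  = 14 * 8          # destination/source MAC + type field read by cut-through
link_bps     = 100e6           # 100 Mbps Fast Ethernet link (assumed)

store_and_forward_s = frame_bits / link_bps    # whole frame buffered before forwarding
cut_through_s       = header_bits / link_bps   # forwarding starts after the header

print(f"store-and-forward: {store_and_forward_s*1e6:.1f} us per hop")  # ~121.4 us
print(f"cut-through:       {cut_through_s*1e6:.2f} us per hop")        # ~1.12 us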

2.7 Ethernet Technology and Standards for Local Area Networks
Ethernet is the most widely used LAN technology for the following reasons [40]:
technology maturity,
very low priced product,
reliability and stability of technology,
large bandwidths (10 Mbps, 100 Mbps, 1Gbps, 10 Gbps),
deterministic network access delay (for switched Ethernet with full-duplex links),
availability of priority handling features (IEEE 802.1p), which provides a basic mechanism for supporting real-time communications,
broadcast traffic isolation, scalability and enhanced security by configuring the network in terms of VLAN (Virtual LAN),
reliability improved by deploying Spanning Tree Protocol (STP) on redundant paths,
deployment facility with wireless LAN (WLAN), that is, IEEE 802.11 LAN,
de facto standard supporting many widely spread upper stacks (IP and socket-based UDP and TCP) for file transfer (FTP), remote login or virtual terminal (telnet), network management (SNMP), Web-based access (HTTP), and email (SMTP), and allows the integration of many Commercial Off-The-Shelf (COTS) APIs and middleware.

In addition, no special staff training is needed since almost all network engineers know Ethernet and Internet related higher layer protocols very well. Importantly, approximately 85 percent of the world’s LAN-connected personal computers (PCs) and workstations use Ethernet.

Therefore, switched Ethernet is now increasingly being considered an attractive technology for supporting time-constrained communications [27], [28], [40]; and currently, Ethernet is the most common underlying network technology on which IP runs [37, p.586].

2.7.1 Ethernet Frame Formats
In the original Ethernet frame defined by Xerox, after the source's MAC address, two bytes (2 octets) follow to indicate to the receiver the correct layer 3 protocol to which the packet belongs. For example, if the packet belongs to IP, then the type field value is 0x0800. The following list shows several common protocols and their associated type values.

Protocol Hex Type Value
IP 0800
ARP 0806
Novell IPX 8137
AppleTalk 809B
Banyan Vines 0BAD
802.3 0000-05DC

Following the type value, the receiver expects to see additional protocol headers. For example, if the value indicates that the packet is IP, the receiver expects to decode IP headers next.

IEEE defined an alternative frame format. In this format there is no type field; instead, a packet length field follows the source address. A receiver recognizes that a packet follows the 802.3 format rather than the Ethernet format from the value of the 2-byte field following the source MAC address: if the value falls between 0x0000 and 0x05DC (1500 decimal), the value indicates a length; protocol type values begin above 0x05DC. Figure 2.8 shows the extended Ethernet frame format (with the IEEE 802.1Q field).
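The type/length rule just described can be summarized by the small Python sketch below; the function name and the sample values are illustrative only.

def classify_type_length(value):
    """Interpret the 2-byte field after the source MAC, as described above."""
    if 0x0000 <= value <= 0x05DC:        # 0..1500: IEEE 802.3 length field
        return f"802.3 frame, payload length {value} bytes"
    known = {0x0800: "IP", 0x0806: "ARP", 0x8137: "Novell IPX",
             0x809B: "AppleTalk", 0x0BAD: "Banyan Vines"}
    return f"Ethernet II frame, type {known.get(value, hex(value))}"

print(classify_type_length(0x0800))   # Ethernet II frame, type IP
print(classify_type_length(0x004A))   # 802.3 frame, payload length 74 bytes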







[Figure 2.8: The extended Ethernet frame format with the IEEE 802.1Q field]
2.7.2 IEEE 802 Standards for Local Area Networks
The following are the IEEE standards for local area networks:
802.1; this standard deals with interfacing the LAN protocols to higher layers; for example, the 802.1s standard for Multiple Spanning Tree (MST) Protocol.
802.2; this is the data link control standard, very similar to HDLC (High-level Data Link Control).
802.3; this is the medium access control (MAC) standard, referring to the CSMA/CD system.
802.4; this is the medium access control (MAC) standard, referring to token bus system.
802.5; this is the medium access control (MAC) standard, referring to token ring system.
802.6; this is the medium access control (MAC) standard referring to the Distributed Queue Dual Bus (DQDB) system, which is standardized for metropolitan area networks (MANs). DQDB systems have a fixed frame length of 53 bytes and are hence compatible with ATM.

The 802.3 standard is essentially the same as Ethernet, using unslotted persistent CSMA/CD with binary exponential back-off [10, p.320]. There is also the FDDI (fiber distributed data interface), which is a 100 Mbps token ring that uses fiber optics as the transmission medium. Because of the high speed and relative insensitivity to physical size, FDDI was planned to be used as backbone for slower LANs and for metropolitan area networks (MANs). And then there is the IEEE 802.11 standard for WLAN (Wireless Local Area Networks) also called WiFi (Wireless Fidelity). The IEEE 802.12 standard is known as Demand Priority (100 VG –Any LAN) standard. There is also the IEEE 802.15 which is the standard for wireless personal area network (PAN); the PAN is a wireless network that is located within a room or a hall. An example of the implementation of the protocol defined by 802.15 is Bluetooth. Bluetooth is a wireless LAN technology which was started as a project by the Ericsson Company, designed to connect devices of different functions such as telephones, notebooks, computers (desktop and laptop), cameras, printers and others. A Bluetooth LAN is an ad-hoc (formed spontaneously) network. IEEE 802.16 standard is defined for wireless local-loop. It is also called WiMax. There is the new IEEE 802.20 for Mobile Broadband Wireless Access (MBWA).

2.7.3 Ethernet Switches and the Spanning Tree Algorithm
Ethernet switches are multi-port transparent bridges for interconnecting stations using Ethernet links [37, p.466]. A bridge interconnects multiple LANs to form a bridged LAN or extended LAN, and it is termed transparent because the stations are completely unaware of the presence of bridges in the network. Introducing a bridge therefore does not require the stations to be reconfigured.

The bridge-learning process (by which a bridge learns the topology of the network it is connected to) works as long as the network does not contain any loops, meaning that there is only one path between any two LANs. In practice, however, loops may be created accidentally, or intentionally to increase redundancy. Unfortunately, loops can be disastrous during the learning process, as each flooded frame triggers the next flood of frames, eventually causing a broadcast storm and bringing down the whole network.

To remove loops from a network, the IEEE 802.1 committee specified an algorithm called the spanning tree algorithm. If we represent a network with a graph, a spanning tree maintains the connectivity of the graph by including every node in the graph while removing all possible loops; this is done by automatically placing certain bridge ports in a blocking state. It is based on an algorithm invented by Radia Perlman while she was working for Digital Equipment Corporation. The Spanning Tree Protocol (STP) is an OSI layer 2 protocol which ensures a loop-free topology for any bridged LAN [45]. Ethernet switches support the Spanning Tree Algorithm and Protocol (IEEE 802.1D standard); the tree is called a spanning tree since it connects (spans) all the end nodes in the network [2]. An extended version of the IEEE 802.1D standard is IEEE 802.1w, the Rapid Spanning Tree Protocol.
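As a simplified illustration of the loop-removal idea (and not of the actual IEEE 802.1D operation, which elects a root bridge and exchanges BPDUs), the following Python sketch builds a breadth-first spanning tree over a small bridged topology and reports which links remain active; the switch names are hypothetical.

from collections import deque

def spanning_tree_links(links, root):
    """Return the subset of links kept by a breadth-first spanning tree.

    `links` is a list of (bridge_a, bridge_b) pairs; redundant links that would
    create loops are left out, mimicking STP's blocking of redundant ports.
    """
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    visited, kept, queue = {root}, [], deque([root])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                kept.append((node, nbr))
                queue.append(nbr)
    return kept

# Three switches fully meshed (a loop); only two links remain active.
print(spanning_tree_links([("SW1", "SW2"), ("SW2", "SW3"), ("SW1", "SW3")], root="SW1"))
# -> [('SW1', 'SW2'), ('SW1', 'SW3')]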

2.8 Modeling of Switched Local Area Networks
Models are sets of rules or formulas which try to represent the behavior of a given phenomenon [46]. A model is an abstraction of a system that extracts the important items and their interactions [1, p.2]. Models provide a tool for users to define a system and its problem in a concise fashion; they are general descriptions of systems, are typically developed based on theoretical laws and principles, and are only as good as the information put into them [1, p.2]. The basic notion is that a model is a modeler's subjective view of the system; this view defines what is important, what the purpose is, the details and the boundaries [1, p.3]. Modeling a system is easier and typically better if [1, p.2]:
- physical laws are available that can be used to describe them,
- pictorial representation can be made to provide better understanding of the model,
- the system’s inputs, elements, and outputs are of manageable magnitude.

2.8.1 Elementary Network Components that were Incorporated into the Packet Switch Model
This section discusses the elementary network components that were used for modeling the packet switch and is based on the work of Cruz in [15].

i. The Constant Delay Line
The constant delay line is a network element with a single input stream and a single output stream. The operation is defined by a single parameter D. All data which arrive in the input stream exit on the output stream exactly D seconds later; that is, each packet is delayed a fixed constant time before it is moved out. Thus, if Rin represents the rate of the input stream, and Rout represents the rate of the output stream, then,

Rout (t) = Rin (t-D) for all t

The maximum delay of a delay line is obviously D. The delay line can be used to model propagation delays in communication links. In addition, it can be used in conjunction with other elements to model devices that do not process data instantaneously. The constant delay line is illustrated in Figure 2.9.

[Figure 2.9: The constant delay line]

The routing latency in a packet switch could be modeled by applying a burst-delay service curve δT(t), which is equivalent to adding a constant delay T [27]. Figure 2.10a shows the input and output curves of the guaranteed delay element, while Figure 2.10b shows the curve of the burst-delay function.

[Figure 2.10: (a) Input and output curves of the guaranteed delay element; (b) the burst-delay function]

ii. The Receiver Buffer
The receiver buffer is a network element with a single input stream and a single output stream. The input stream arrives on a link with a finite transmission rate, say C. The output stream exits on a link with infinite transmission rate. The receiver buffer simply outputs the data that arrive on the input link in First-Come-First-Served (FCFS) order. A data packet exits the receive buffer instantaneously at the instant when it has been completely transmitted to the receive buffer on the input link; that is, the receive buffer does not output a packet until the last bit of the packet has been received, at which time it outputs the packet. The receive buffer is employed to model situations in which store-and-forward switching, rather than cut-through switching, is used.

If Lk = length in bits of packet k that starts transmission on the input link at time Sk, then
tk = Sk + Lk /C for all k,
where, tk = time at which the kth packet starts exiting the receive buffer.

Obviously, the maximum delay of any data bit passing through this network element is upper bounded by L/C, and the backlog in the receive buffer is obviously bounded by L. The receiver buffer is a useful network element for modeling network nodes which must completely receive a packet before the packet commences exit from the node. For example, the receiver buffer is a convenient network modeling element in a data communication network node that performs error correction on data packets before placing them in a queue. In addition, the receive buffer is useful for devices in which the input links have smaller transmission rates than the output links. The receive buffer is illustrated in Figure 2.11.
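A small numeric illustration of the receive-buffer relation tk = Sk + Lk/C is given below; the link rate and packet lengths are assumed values chosen purely for illustration.

# A small numeric illustration (assumed values) of the receive-buffer relation
# t_k = S_k + L_k / C used above.
C = 100e6                       # input link rate in bits/s (assumed Fast Ethernet)
packets = [(0.0, 1500 * 8),     # (S_k: start of transmission in s, L_k: length in bits)
           (0.5e-3, 64 * 8)]

for k, (S_k, L_k) in enumerate(packets, start=1):
    t_k = S_k + L_k / C         # instant the packet exits the receive buffer
    print(f"packet {k}: exits at {t_k*1e3:.3f} ms, delay {L_k/C*1e6:.1f} us")

# The worst-case delay through the element is L/C for the largest packet:
print(f"upper bound L/C = {1500*8/C*1e6:.1f} us")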














[Figure 2.11: The receive buffer]
iii. The First-Come-First-Served multiplexer (FCFS MUX)
The multiplexer (FCFS MUX) has two or more input links and a single output link. The function of the FCFS MUX is to merge the streams arriving on the input links onto the output link. That is, it multiplexes two or more input streams together onto a single output stream. The output link has maximum transmission rate Cout and the input links have maximum transmission rates Ci, i = 1, 2, 3, …, N. It is normally assumed that Ci ≥ Cout for i = 1, 2, 3, …, N. An illustration of the FCFS MUX is shown in Figure 2.12.

iv. First-In-First-Out (FIFO) Queue
The FIFO queue can be viewed as a degenerate form of the FCFS multiplexer. The FIFO queue has one input link and one output link. The input link has transmission capacity Cin and the output link has transmission capacity Cout. The FIFO is defined simply as follows: data that arrive on the input link are transmitted on the output link in FCFS order, as soon as possible, at the transmission rate Cout. For example, if a packet begins to arrive at time t0 and no backlog exists inside the FIFO at time t0, then the packet also commences transmission on the output link at time t0. We assume that Cin ≥ Cout so that this is possible. If Cin were less than Cout, this would be impossible, as the FIFO would run out of data to transmit immediately following time t0, before the packet could be transmitted at rate Cout.

Suppose that the rate of the input stream to the FIFO queue is given as Rin(t), and that the size of the backlog inside the FIFO queue at time t is given by WCout(Rin)(t). The jth packet, which arrives at time Sj, must wait for all of the current backlog, and this backlog is transmitted at rate Cout. It follows that the jth packet commences exit from the FIFO queue at time tj = Sj + dj, where
dj = WCout(Rin)(Sj) / Cout    (2.1)
= time spent by the jth packet in the FIFO queue before being transmitted at rate Cout.
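The following slotted Python sketch illustrates the idea behind Eq. (2.1): the waiting time of newly arriving data equals the backlog found on arrival divided by Cout. The slot length, output rate and arrival pattern are assumed values, not measurements.

# A minimal slotted simulation (illustrative only): the delay of newly arriving data
# equals the backlog it finds, transmitted at rate Cout.
C_out = 10_000                      # output rate in bits per time slot (assumed)
arrivals = [25_000, 0, 5_000, 0]    # bits arriving at the start of each slot (assumed burst)

backlog = 0
for slot, bits_in in enumerate(arrivals):
    delay_of_new_bits = backlog / C_out          # slots a packet arriving now must wait
    print(f"slot {slot}: arriving data waits {delay_of_new_bits:.1f} slots")
    backlog = max(backlog + bits_in - C_out, 0)  # serve C_out bits during the slot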

The FIFO queue is illustrated in Figure 2.13. The following are Cruz's [15] input and output rate specifications for the network elements that were used in this work.








[Figure 2.12: The FCFS multiplexer (FCFS MUX)]

[Figure 2.13: The FIFO queue]
1. Receive Buffer: Input rate = Ci buffer
Output rate = Co buffer
Co buffer >> Ci buffer

2. Constant Delay Line: Input rate = Rin(t)DL
Output rate = Rout(t)DL
Rout(t)DL = Rin(t − D)DL

3. FCFS MUX: Input rate = Ci MUX, i = 1, 2, …, N
Output rate = Co MUX
Ci MUX ≥ Co MUX

4. FIFO Queue: Input rate = Ci QU
Output rate = Co QU
Ci QU ≥ Co QU

It is to be noted that all rates are in bits/sec.

2.8.2 Approaches to Modeling Traffic Flows in Communication Networks: Network Calculus versus Traditional Queuing Theory
To determine the end-to-end response time of flows in communication networks, two general approaches can be used: stochastic approaches or deterministic approaches. Stochastic approaches consist of determining the mean behavior of the considered network, leading to mean statistical or probabilistic end-to-end response times, while deterministic approaches are based on a worst-case analysis of the network behavior, leading to worst-case end-to-end response times [21], [27]. This is because stochastic processes are processes with events that can be described by probability functions, while a deterministic process is a process whose behavior is certain and completely known. Network calculus is a deterministic approach to modeling network entities and flows.

The advantages of the Network Calculus over the Traditional Queuing Theory can be put in the following more compact form [44], [10, p.149], [14].
Network Calculus
Network calculus basically considers networks of service nodes and packet flows between the nodes.
Network calculus involves bounded constraints on packet arrivals and services.
These bounded constraints allow bounds on packet delays and work backlogs to be derived, which can be used to quantify real-time network behavior.
The packet arrival processes in network calculus are described with the aid of arrival curves, which quantify constraints on the number of packets or the number of bits of a packet flow arriving at a service node in a time interval.

Traditional Queuing Theory
Traditional queuing theory deals with stochastic processes and probability distributions.
Traditional queuing theory normally yields mean values and perhaps quantiles of distributions.
The derivations of these mean values and quantiles of distributions are often difficult.
Upper bounds on end-to-end delays may not exist or be computable.

Generally, the deterministic methodology which network calculus represents considers the worst-case performance of the network and, therefore, yields conservative results [20, p.127]. Network calculus has traditionally been used for scheduling and traffic regulation problems in order to improve Quality of Service (QoS), but it is now more and more being used to study switched Ethernet networks (for example, [27], [28], [44], [47]). Network calculus enables one to obtain an upper-bounded delay for each of the network elements proposed by Cruz in [15]; to obtain the maximum end-to-end delay of a complete switched communication system, we must add the different upper-bounded delays [31]. Network calculus can be used to engineer Internet networks [44]. In the end-to-end deterministic network calculus approach, input processes are characterized via envelopes and network elements are characterized via service curves, and the approach is useful for the engineering of networks where worst-case guarantees are required [20, p.252].

2.8.3 Network Traffic Modeling – the Arrival Curve approach
The delays experienced by the packets of a given packet stream at a link or switch depend on the pattern of arrivals in the stream (the arriving instants and the number of bits in the arriving packets) and, in the case of a link, on the way the link transmits packets from the stream (the link may be shared in some way between two or more packet streams). To analyze such situations, we use mathematical models that are variously called traffic models, congestion models, or queuing models [20, p.120].

The modeling of network traffic is traditionally done using stochastic models [27], [10, p.149]; for example, Bernoulli arrival process was assumed in [6]. But in order to guarantee bounded end-to-end delay for any traffic flow, the traffic itself has to be bounded [28]. This is where the arrival curve concept of traffic arrivals to a system is important. In integrated service networks (ATM and other integrated service internet), the concept of arrival curves is used to provide guarantees to data flows [48, p.7]. In this approach (arrival curve), the traffic is unknown, but it is assumed that its arrival satisfies a time constraint. Generally, this means that the quantity of data that has arrived before time t will not be more than the arrival curve value at time t. The constraints are normally specified by a regulation method; for example, the leaky bucket controller (regulation).

2.8.3.1 Leaky Bucket Controller
The arrival curve concept can be viewed as an abstraction of the regulation algorithm, and the most common example of a traffic regulation algorithm is the leaky bucket algorithm, which has an arrival curve given by the following equation [49]:
b(t) = σ + ρt for t > 0,
which means that no more than σ data units can be sent at once and the long-term rate is ρ. The arrival curve, therefore, bounds the traffic and denotes the largest amount of traffic allowed to be sent in a given time interval [49], [10, p.512]. A leaky bucket controller, according to Le Boudec and Thiran [48, p.10], is a device that analyses the data in a flow as follows. There is a pool (bucket) of fluid of size σ. The bucket is initially empty. The bucket has a hole and leaks at a rate of ρ units of fluid per second when it is not empty. Data from the flow R(t) has to pour into the bucket an amount of fluid equal to the amount of arriving data. Data that would cause the bucket to overflow is declared non-conformant (it is not poured into the bucket); otherwise, the data is declared conformant. The leaky bucket scheme is used to regulate the burstiness of transmitted traffic [10, p.911]. Figure 2.14 illustrates the operation of the leaky bucket regulator.

In ATM systems, non-conformant data is either discarded, tagged with low priority for loss ("red" cells), or put in a buffer (buffered leaky bucket controller); with the Integrated Services Internet, non-conformant data is, in principle, not marked but simply passed as 'best effort' traffic (namely, normal IP traffic) [48, p.10]. A concept similar to the leaky bucket is the token bucket controller. While the leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate, the token bucket algorithm allows bursty traffic at a regulated maximum rate [26, p.779].
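A minimal Python sketch of the leaky bucket conformance test described above follows; the burst size σ, the leak rate ρ and the sample arrivals are assumed values, and the code is only an illustration of the (σ, ρ) regulation idea, not an implementation taken from the cited references.

# A minimal sketch of a leaky-bucket conformance check for the (sigma, rho) regulator
# described above; bucket size sigma and leak rate rho are illustrative assumptions.
def leaky_bucket(arrivals, sigma, rho):
    """arrivals: list of (time_s, bits). Returns conformant/non-conformant per packet."""
    level, last_t, verdicts = 0.0, 0.0, []
    for t, bits in arrivals:
        level = max(level - rho * (t - last_t), 0.0)   # bucket leaks at rho bits/s
        last_t = t
        if level + bits <= sigma:                      # fits in the bucket: conformant
            level += bits
            verdicts.append(True)
        else:                                          # would overflow: non-conformant
            verdicts.append(False)
    return verdicts

# sigma = 12 000 bits of burst tolerance, rho = 1 Mbps long-term rate (assumed)
print(leaky_bucket([(0.000, 8000), (0.001, 8000), (0.020, 8000)], sigma=12_000, rho=1e6))
# -> [True, False, True]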

2.8.3.2 The Straight-Line Equation as an Affine Arrival Curve
It is a well known fact in elementary mathematics that y = mx represents a straight line through the origin (y being on the vertical axis and x on the horizontal axis), with gradient or slope m. y = mx + c is obtained by adding the value c to the y-coordinate at every point of y = mx thus getting a line parallel to the original one; hence, it represents a straight line with gradient m, with the value c most easily seen as the value of y corresponding to x = 0.

We can see from Figure 2.15a that y1 = c + Z1, y2 = c + Z2, and y3 = c + Z3, and that Z1, Z2, and Z3 depend on the lengths of the intervals x1 − x0, x2 − x0, and x3 − x0. Also, from Figure 2.15b, we can see that y = σ + ρt, where
ρ = gradient of the straight line = (y − σ)/t





[Figure 2.14: Operation of the leaky bucket regulator]

[Figure 2.15: (a) The straight line y = mx + c; (b) the affine arrival curve y = σ + ρt]
2.8.3.3 Traffic Stream Characterization
In the network calculus approach to describing network traffic, a traffic stream (which is a collection of packets that can be of variable length [15]) or flow is described by a wide-sense increasing function r(t). The function r is wide-sense increasing if and only if r(s) ≤ r(t) for all s ≤ t. We represent a traffic stream as follows: for any t > 0,
r(t) = ∫0^t R(s) ds
is the amount of bits seen in the flow in the interval [0, t]. R(s) is called the rate function of the traffic stream [48, p.4], [15]; it is the instantaneous rate of traffic from the stream at time s. By convention, we take r(0) = 0 [48, p.4].
Also, in this traffic modeling approach, for any y ≥ x,
∫x^y R(s) ds
represents the amount of traffic seen in the flow in the time interval [x, y]. We note explicitly that the interval of integration is a closed interval.

2.8.4 Definition of Burstiness Constraint
Given ρ ≥ 0 and σ ≥ 0, we write R ~ (σ, ρ) if and only if, for all x, y satisfying y ≥ x, there holds:
∫x^y R(s) ds ≤ σ + ρ(y − x)    (2.2)
Thus, if R ~ (σ, ρ), there is an upper bound on the amount of traffic contained in any interval [x, y] that is equal to a constant σ plus a quantity that is proportional to the length of the interval. The constant of proportionality ρ determines an upper bound on the long-term average rate of the traffic flow, if such an average rate exists. For a fixed value of ρ, the term σ allows for some burstiness.
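As an illustration of how the constraint R ~ (σ, ρ) can be checked against a measured traffic trace, the following Python sketch computes, for an assumed sampled rate function and a chosen ρ, the smallest σ for which the constraint holds over every interval; the sample values are assumptions made for illustration only.

# Illustrative check of the (sigma, rho) burstiness constraint: given samples of a
# rate function R (bits/s, one sample per second, assumed), compute the smallest
# sigma such that R ~ (sigma, rho) holds over every interval [x, y].
def min_sigma(rates, rho):
    worst = 0.0
    n = len(rates)
    for x in range(n):
        traffic = 0.0
        for y in range(x, n):
            traffic += rates[y]                    # integral of R over [x, y]
            worst = max(worst, traffic - rho * (y - x + 1))
    return worst

rates = [2e6, 2e6, 0.5e6, 0.5e6, 1e6]              # a 2 Mbps burst then a quieter period
print(min_sigma(rates, rho=1.2e6) / 1e6, "Mbit")   # smallest conformant burst term sigma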

From (2.2), another interpretation of the constraint R ~ (σ, ρ) is that
∫x^y R(s) ds − ρ(y − x) ≤ σ    (2.3)
or
∫x^y [R(s) − ρ] ds ≤ σ    (2.4)
Therefore, a useful interpretation of the constraint R ~ (σ, ρ) is as follows [15]: for any function R and a constant ρ > 0, define the function Wρ(R) for all times t by Eq. (2.5):
Wρ(R)(t) = max over s ≤ t of [ ∫s^t R(u) du − ρ(t − s) ]    (2.5)
If the backlog WCout(Rin)(t) > 0 at any instant of time t, then Rout(t) = Cout [15]. So, definitely, the interval [s, sj] over which dj is maximum depends only on the arrival process of the traffic Rin(t). This is illustrated in Figure 3.7. The data rate of traffic arrival in the time interval t4 is equal to the data rate of traffic arrival in the time interval t3 and is greater than the data rate of traffic arrival in the time interval t2, which is greater than the data rate of traffic arrival in the time interval t1.







[Figure 3.7: Data rates of traffic arrivals in the time intervals t1, t2, t3 and t4]
Therefore, if we assume that t1 = t2 = t3 = t4, it follows that the amount of data that will be in the queue as a result of the traffic that arrived during t4 is equal to the amount of data that will be in the queue as a result of the traffic that arrived during t3, and is greater than the amount of data queued as a result of the traffic that arrived during t2, which in turn is greater than the amount of data queued as a result of the traffic that arrived during t1. Hence, any data packet that arrives at the queue immediately following the end of time interval t4 or t3 will have met more backlog, and hence experienced more delay, than any data packet that arrives immediately following the end of time interval t2; the same reasoning also applies to time interval t1. We now proceed to determine a possible traffic arrival interval where dj would be maximum. Recall that Rin is the rate function of the incoming traffic stream;
∫s^sj Rin(u) du is the amount of traffic that has arrived in the closed interval [s, sj].
Given à e" 0, and Á e" 0, we write Rin ~ (Ã, Á), if and only if for all s, sj satisfying sj e" s, there holds:
 EMBED Equation.3  (3.11)
where à = the maximum amount of traffic that can arrive in a burst, and,
Á = the long term average rate of traffic arrivals.

Similarly, if b is any function defined on the non-negative reals and Rin ~ b, we can write [15], [27]:
b(t) = σ + ρt    (3.12)
where b(t) is an affine arrival curve, which we have previously illustrated.

In consonance with the description of the physical-layer switch system in [60], namely that the switching circuit of a switch establishes a link between two ports specified by the source address and the destination address received from the status look-up table, we can take into account the capacity (transfer rate) of the internal bus (the bus connecting the receive buffer to the output buffer). If this is C bits/sec, then the affine function (Eq. (3.12)) can be completed with an inequality constraint:
b(t) ≤ Ct    (3.13)

This inequality constraint idea was introduced by Georges, Divoux and Rondeau in [27] in relation to the communication link feeding a switch. The inequality relationship represented by (3.13) means that the amount of data arriving at the output buffers cannot grow faster than the capacity of the internal bus through which the data flow.

Eq. (3.12) can now be completed with the inequality constraint (3.13) as:
b(t) = min(Ct, σ + ρt)    (3.14)

We can now write the amount of data that has arrived in the interval [s, sj], for all sj ≥ s, as:
∫s^sj Rin(u) du ≤ min(C(sj − s), σ + ρ(sj − s))    (3.15)
From Eq. (3.14), if Ct < σ + ρt, then b(t) = Ct (3.16), and the arrival rate is its slope,
db(t)/dt = C    (3.17)
otherwise, b(t) = σ + ρt    (3.18)
and the arrival rate is its slope,
db(t)/dt = ρ    (3.19)

Eqs. (3.17) and (3.19) then give us two possible arrival rates: C, the internal bus capacity, and ρ, a long-term average rate (both in bits/sec).
But the maximum burst size has been defined as the maximum length of time for which data traffic flows at the peak rate [26, p.762], [37, p.551]; we therefore ignore Eq. (3.19), which deals with the average rate. Eq. (3.15) can now be written (taking the upper bound of the inequality) as:
∫s^sj Rin(u) du = C(sj − s)    (3.20)
Eq. (3.10) now becomes:
dj = C(sj − s)/Cout − (sj − s)    (3.21)
To determine the maximum length of time, max[sj − s], for which the incoming traffic flows at the peak rate, we note that the upper bound of the inequality (3.15) implies
either C(sj − s) ≤ σ + ρ(sj − s)    (3.22)
or σ + ρ(sj − s) ≤ C(sj − s)    (3.23)
that is, C(sj − s) = σ + ρ(sj − s)    (3.24)
or sj − s = σ/(C − ρ)    (3.25)
= maximum length of time for which the traffic flows at the peak rate.
We can now re-write Eq. (3.21) as:
dj = C·[σ/(C − ρ)]/Cout − σ/(C − ρ)
= σ(C − Cout)/[Cout(C − ρ)]    (3.26)
= maximum delay in seconds incurred by the jth packet in crossing the FIFO queue.

We note here again that à is the maximum amount of traffic (in bits) that can arrive in a burst to the FIFO Queue. But we had earlier stated that Á is the rate at which a work-conserving system that accepts data at a rate described by the rate function R, transmits the data while there is data to be transmitted [15]. We can explain this concept in this simple way. Consider a work-conserving system as shown in Figure 3.8, which receives data at a rate described by R(t) (the rate at different times are different as illustrated by Figure 3.7), and issues out traffic at a constant rate Cout.

Consider also a communication session between the traffic source and the work-conserving system. It is easy to see that the traffic that arrives at the work-conserving system during the communication session (including burst traffic arrivals) will eventually be issued out by the system over time at the rate Cout. It is also easy to see that Cout represents the average rate of traffic arrivals to the work-conserving system during the communication session.

Consider that the communication session has four (4) intervals with different rate functions: [t0 to just before t1] with rate function R1(t), [t1 to just before t2] with rate function R2(t), [t2 to just before t3] with rate function R3(t), and [t3 to just before t4] with rate function R4(t) (which can be illustrated as was done in Figure 3.7). Then,
Cout = [∫t0^t1 R1(t) dt + ∫t1^t2 R2(t) dt + ∫t2^t3 R3(t) dt + ∫t3^t4 R4(t) dt] / (t4 − t0)
This idea (the output port issuing rate equals the average rate of traffic arrivals) was amply illustrated by Sven, Ales and Stanislav in [63], as shown in Figure 3.9. In the words of Costa, Netto and Pereira in [34], the queuing delay experienced by packets arriving at a switch varies, since the number of packets that might have arrived in the output queue before any given arriving packet is not fixed; it depends on the pattern of arrivals at any time.

Therefore, taking ρ as Cout, Eq. (3.26) becomes:
dj = σ/Cout    (3.27)
where,
dj = maximum delay in seconds incurred by the jth packet in crossing the FIFO queue,
σ = maximum amount of data traffic that can arrive in a burst, in bits,
Cout = bit rate of the output link (switch port) in bits per second (bps).
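A short numeric check (with assumed values for σ, C and Cout, chosen only for illustration) shows that Eq. (3.26), as reconstructed above, reduces to Eq. (3.27) when ρ is taken as Cout:

# Numeric illustration (assumed values) of Eqs. (3.26) and (3.27): when the long-term
# rate rho is taken as the output rate Cout, the maximum FIFO delay reduces to sigma/Cout.
sigma = 30_000        # maximum burst, bits (assumed: about 2.5 maximum-size Ethernet frames)
C     = 1e9           # internal bus (peak arrival) rate, bits/s (assumed)
C_out = 100e6         # output port rate, bits/s (assumed Fast Ethernet)

d_326 = sigma * (C - C_out) / (C_out * (C - C_out))    # Eq. (3.26) with rho = Cout
d_327 = sigma / C_out                                  # Eq. (3.27)
print(f"Eq.(3.26): {d_326*1e6:.0f} us   Eq.(3.27): {d_327*1e6:.0f} us")   # both 300 us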











[Figure 3.8: A work-conserving system receiving traffic at rate R(t) and issuing traffic at a constant rate Cout]

[Figure 3.9: Illustration, after Sven, Ales and Stanislav [63], of the output port issuing rate equalling the average rate of traffic arrivals]
Eq. (3.27) is in agreement with the assertion (with respect to a router) by Sven, Ales and Stanislav in [63] that, since the output queue of a router is emptied at the nominal link capacity, a hypothesis can be made that the size of a packet burst in bits, measured on a router's output port, divided by the nominal physical link capacity, is the upper limit of the delay added to the queue build-up by the packet burst. We have, however, shown beyond this hypothesis that Eq. (3.27) actually characterizes the maximum delay suffered by a packet at the output queue of the output port of a packet switch.

3.4.2.5 Transmission Delay
According to Kanem et al. [2], Reiser [14] and Gerd [11, p.169], for all arriving instants, the delay experienced by a message upon arrival at a queuing system is composed of the message's own service time plus the backlog 'seen' upon arrival. Bertsekas and Gallager [10, p.149] contend that among the delays which a packet suffers in a network is the transmission delay, which is the time between when the first and last bits of the packet are transmitted, after the backlog of packets met at the queue by the packet has been transmitted. A fourth component of our model, therefore, is the transmission delay. The maximum transmission delay that can be suffered by an arriving packet is obviously the ratio of the maximum size that the packet can assume to the transmission speed of the output port (channel). According to Alberto and Widjaja [37, p.416], if
L = length of frame in bits,
R = full rate of medium that connects to the output port of a node in bits/sec,
then the time to transmit the frame at full rate = L/R secs
Therefore, if;
Dmaxtrans = maximum transmission delay of a packet in the switch in seconds,
L = maximum length of a packet in bits,
Cout = transmission speed of the output port (link) in bits/sec, then;
Dmaxtrans = L/Cout secs    (3.28)

Having derived the maximum delay expressions for each of the components in Eq. (3.1), we can now proceed to insert these maximum delay expressions into that equation. Therefore, if we replace Ci in Eq. (3.2) by CN−1 (since we have assumed that the data packet that arrived on port N−1 will suffer the maximum delay, being the last to be forwarded to the output port N), we have:
Dmax (seconds) = L/CN−1 + D + (N−2)×D + σ/Cout + L/Cout
In the context of Figure 3.6, the packet that arrives on port 1 suffers one constant delay (the time to switch the packet); the packet that arrives on port 2 suffers two constant delays (the time it waited for the packet that arrived on port 1 to be switched plus the time for itself to be switched); the packet that arrives on port 3 suffers three constant delays (the time it waited for the packets that arrived on ports 1 and 2 to be switched plus the time for itself to be switched); therefore,
Dmax (seconds) = L/CN−1 + (N−1)×D + σ/Cout + L/Cout    (3.29)
where,
Dmax = maximum delay in seconds for a packet to cross any N-port packet switch,
N = number of input/output ports,
Ci, i = 1, 2, 3, …, N = bit rates of ports 1, 2, 3, …, N in bps
= channel (for example, Ethernet) rates of the input ports in bps,
Cout = bit rate of the Nth output link in bps
= output port (line) rate of the Nth port (the destination of the other N−1 input traffic streams),
CN−1 = bit rate of the (N−1)th input port in bps,
D = constant delay (switching latency) of the constant delay line element, in seconds,
L = maximum length in bits of a data (for example, Ethernet) packet,
σ = maximum amount of traffic in bits that can arrive in a burst.



3.4.2.6 Determination of à (the maximum amount of traffic that can arrive in
a burst)
The parameter à has been defined as the maximum amount of data traffic that can arrive in a burst [15], [10, p.512]. But there is no general agreement in literature on how to characterize bursty traffic (how do we assign a numerical value to �). For example, Sven, Ales and Stanislav [63] have asserted that metrics for traffic burstiness have not yet been defined, and that methods to monitor traffic burstiness are not well understood. Ryousei et al. [ 64 ] has also averred that there is no consensus on a quantitative definition of data traffic burstiness. But Sven, Ales and Stanislav contends in [63] that network traffic tends to be bursty for a number of reasons, including: protocol design, user behavior and traffic aggregation; while Khalil and Sun [65] has asserted that, traffic generated within token ring and Ethernet local area networks are very bursty due to the widespread use of distributed applications (for example, distributed file systems and distributed databases) and high-speed computers capable of transmitting large amount of data in a very short period of time.

What is quite clear to researchers of computer network traffic, and of the performance effect of such traffic on networks, is that bursty traffic is quite critical to the performance of computer networks. For example, Forouzan [26, p.763] asserted that although the peak data rate is a critical value for any network, it can usually be ignored if the duration of the peak value is very short. For example, if data is flowing steadily at a rate of 1 Mbps with a sudden peak of 2 Mbps for just 1 ms, the network can probably handle the situation. However, if the peak data rate lasts for 60 ms, then this may be a problem for the network.

Bertsekas and Gallager [10, p.15] have posited, with respect to circuit-switched networks, that communication sessions for which
ƒh‘~hs
ƒhv½0J>*jÞ-
hs
ƒhv½U hs
ƒhv½jhs
ƒhv½U h•6vh‘~h xxhv½ h&hv½h6âh& h&h‘~hÙ
h1h±&ëh±&ëh‘~ h1h‘~7ððð>ðMð`ðkðtðvðwð³ð´ðµðãðäðèðéðñðòðøðùð ñ
ññññbñcñdññžñ ñ¥ñ¦ñ§ñÆñÇñëñíñïñüõîêîæêßÔßÆÔ¼Ôê¸êî´õ­¦­¢—‚—x—qæmæüõæeæh>?Jh‘~H*h¸É hs
ƒh‘~hs
ƒhv½0J>*jt0
hs
ƒhv½U hs
ƒhv½jhs
ƒhv½Uh‚Kp h1h, h•6vh‘~h%G­h½\©hs
ƒhH?}0J>*jC/
hs
ƒhH?}Ujhs
ƒhH?}U hs
ƒhH?}h‘~hH?} h1h‘~ h1h±&ëh±&ë'èï?ðbðòðñ ñïñ#òuò—òãò*'&û'û2û3û>ûiûkûxû{û|ûû€û–ûµû¶û·ûÀûÂûÍûÎû ü
üü>ü?ü@üLüMüPüQüTü_ü`üaükülü¬ü­ü´üÊüËüþüýHýIý]ýgý³ý½ýþþ:þYþùõñõêõãߨßÑÍÉùÉÍñ;·©¾Ÿ¾·ÍØß˜ß‘ß‘ñÑߘßñъߊ†ŠßŠßŠßŠñh, h1h‘~ hôk3h‘~ h_"_h‘~hs
ƒh³0ÿ0J>*j34
hs
ƒh³0ÿU hs
ƒh³0ÿjhs
ƒh³0ÿUh»eÕh³0ÿ h1h¼t­ hšbh‘~h‘~ huKh‘~ hÐ8¶hÐ8¶h¼t­hÐ8¶ h1h,4YþZþ`þjþµþ¿þâþãþéþòþýþÿ ÿ-ÿ.ÿ4ÿ5ÿBÿvÿwÿxÿyÿƒÿŽÿÿÐÿÑÿÒÿfv£¥¦åæçùòîòîòîòîêãêòîßùÛîÔÐÈîê½¶¨½ž½êîÛùîšîš“ˆ“zˆpˆhs
ƒhê¡0J>*j±6
hs
ƒhê¡Ujhs
ƒhê¡U hs
ƒhê¡hê¡hs
ƒhuK0J>*jl5
hs
ƒhuKU hs
ƒhuKjhs
ƒhuKUhpOÕh‘~5h, h1h,h¼t­h%G­ huKhuKhuKh‘~ h1h‘~ h1h¼t­,yÿf'{™+aUÍ2Žä2Î%Vè»n½ôììôììôììôìôììôìôììôìôìôììì$a$gd{G3
$dha$gd{G3&'-.™­¯°ìíî*+0FGaijuvÂÃÄ   !RS]^ghijšüõñêæâÛÐÛÂиÐâõæñêæ´°´¥ž¥†¥´°´wæêæpæ°´ž¥ž h1h,hÞ
êhAbW5 hs
ƒh‘~hs
ƒhAbW0J>*j9
hs
ƒhAbWU hs
ƒhAbWjhs
ƒhAbWUh»'.hAbWhs
ƒh³-0J>*jî7
hs
ƒh³-Ujhs
ƒh³-U hs
ƒh³-h³-h‘~ h1h¼t­h¼t­ h1h‘~hê¡,š›œ¾¿ÌÍÓÔÜÝ/01;FGijkxy{|}~Š”•¦§ìí ñæÜæØÑÍÆÍ¾Í·³Íب¡“¨‰¨Í¾„|unÍjfÆÍjÍ_ÍÆ h`j¿h‘~h¼t­hÛó hAbWh‘~ hAbWhAbWhAbWh‘~H* h‘~H*hh h‘~0J>*j‘;
hh h‘~U hh h‘~jhh h‘~Uh, h1h,h ~¹h‘~H* h1h¼t­h‘~ hÄrØh‘~hAbWhs
ƒhAbW0J>*jhs
ƒhAbWUj:
hs
ƒhAbWU% /012;*jN<
hh h:m¸U hh h:m¸jhh h:m¸U hh h‘~h, h1h,h‘~-stu¬­º¾ÁÂßæõöuvÅÆÿ   $ + : ; í ï ð ñ ô ÷ ø ù 



*
0
W
Y
\
_
`
a
z
{
}




ñæÜæØÔÐÌÈÁ½¶Ð²Ð²Ð®ÐªÔÐÈÁ½¶¦¢¦ªÔžš¶š“š½šŒšžÔˆš¶š“š²½²šhõYä hÛóhùt® h1h,hÛóhùt®hIh1Nh‘~h,h*sˆ h1hæÓhæÓ hºjhÎ5Çh»hoh=BPhÎ5Çhp^ûh:m¸hh hp^û0J>*jhh hp^ûUjÜ>
hh hp^ûU5½ X ° ñ 
Y
}



















ôììôìôìôáááááááááááááááááá
$dha$gd·KÎ$a$gd{G3
$dha$gd{G3’









œ ° · Á  à   
    a k ç ø 

!
&


FGabx„2FGLNQ"¸¼ùõñìçìçâçÝØÝÓËļ·¯¼¯ñ§Ä§Ä§Ä§Ÿ§Ä§Ä§Ä›Ä›Ä§Ä§“§‹Ä§âďćhôRh¾8hJÕhõYähKZ÷5hKZ÷hõYähƒW5hõYähÕgm5hÅ@}hùt®5 hôR5hÅ@}hÕgm5 hKz}hÕgmhKz}hÕgm5 h,x25 h~Yü5 h#*/5 h¾85 h?¹5 h‘~5hùt®hÛó hÛóhõYä5Ô



























ôôôôôôôôôôôôôôôôôôôôôôôôôôô
$dha$gd·KÎï















ÿ
          
÷÷ììììììììììììììììììììììììì
$dha$gd·KÎdhgd¾8
  
                   ! " # $ % & ôôôôôôôôôôôôôôôôôôôôôôôôôôôô
$dha$gd·KÎ& ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B ôôôôôôôôôôôôôôôôôôôôôôôôôôôô
$dha$gd·KÎB C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ ôôôôôôôôôôôôôôôôôôôôôôôôôôôô
$dha$gd·KÎ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z ôôôôôôôôôôôôôôôôôôôôôôôôôôôô
$dha$gd·KÎz { | } ~  €  ‚ ƒ „ … † ‡ ˆ ‰ Š ‹ Œ  Ž   ‘ ’ “ ” • – ôôôôôôôôôôôôôôôôôôôôôôôôôôôô
$dha$gd·KΖ — ˜ ™ š › œ  ž Ÿ   ¡ ¢ £ ¤ ¥ ¦ § ¨ © ª « ¬ ­ ® ¯ ° ± ² ôôôôôôôôôôôôôôôôôôôôôôôôôôôô
$dha$gd·Kβ ³ ´ µ ¶ ·  à   ` a æ ç 

Q
R





ôôôôìôääääääääääääääÙÙÙÙä
& F dhgd^(’dhgd”6WdhgdOT2
$dha$gd·KÎwx12¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔ÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷dhgd”6W¼½¿ÀÖ×áâãFHQ õGTÒÞ *+,:AS]histüõñíæáÜÔÏÇ¿·õ­õ­õíõ·­õíõ¥ á›“Œ‚{‚vld‚hKz}hÊET\hKz}hÕgm5\ hýè\ ho•5\hKz}hÊET5\ h”D5\hKz}hýè5 hÊET5 hýè5hKz}hÕgm\h?¹hÕgm5>*h?¹hÕgm5h÷R8hÕgm5h÷R8h÷R85 há5h÷R8h?¹5 hÕgm5 h,x25 hKz}h?¹h?¹hÕgm hKz}hÕgmhJÕ%ÔÕÖ×âãHQx¥Ù Mu”­ìõG÷÷÷ìì÷áÙ÷÷÷Í÷÷÷÷÷÷½÷÷á„à„Ðdh^„à`„Ðgd”6W „°dh^„°gd”6Wdhgd?¹
& Fdhgd^(’
$dha$gd?¹dhgd”6WGTv”Òü +,i¨ª¬®°²´¶÷ïïïãïïïïïïØïïÍÍïïïïïïï
$dha$gdo•
$dha$gdýè „àdh^„àgd”6Wdhgd”6Wdhgd?¹twyz}„…ˆÚÛÞàáäëìïÿ3wxÈj¤¦¨ªÖêìîòøúöîäÚîäÚîÐÆ¾ÐÆ¾ÐÆ¾¹¾±§±§±§±Ÿš•އ}vlbhÐ?(h§±5\hÐ?(h,x25\ hØ>×5\h2Vûhýè5\ h2Vû5\ h,x25\ hýè\ hö \hKz}hQAÜ\hKz}hÊ5\hKz}hÊ\ ho•\hKz}hXsf\hKz}hXsfH*\hKz}hXsf5\hKz}hL€H*\hKz}hL€5\hKz}hL€\hKz}hÊETH*\#¶¸º¼¾ÀÂÄÆÈÊÌÎÐÒÔÖìîtœ7n—¬æ÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷ìì÷÷÷÷÷÷÷ädhgd

$dha$gdýèdhgd”6Wúü
prt†œ¦²¸òôþ7KOmnuy¬ÃÇíñ$1fg€âãABJOPaöëãëØëÎëÉ¿º¿°«°Éº¿¦¡¦œ¦¡¦—¡—¡—¡—’¡¡œˆœƒ~y~t hì>`\ hÂQ¨\ h½ \\ hÛ ö\ h™;%\ hƒW\ h]q\ h
D‰\ hd\ h‡"^\ h•\ h‡ \h°'wh‡ 5\ h°'w\h°'wh£p#5\ h£p#\hÐ?(h§±5\hÐ?(h§±5>*\hÐ?(5>*\hÐ?(hd5>*\hÐ?(hd5\-æ$W€Èã!A†–—Ï;m†¡Ëø$D÷÷ïãÛÛÛËÛ¿··ïïïïïïïïïdhgdA%ú „°dh^„°gd½ \„à„Ðdh^„à`„ÐgdÛ ödhgd™;% „°dh^„°gd‡"^dhgd”6Wdhgd
D‰a†ˆ•–—š›œÎÏÑÕâãõ÷øþlm}…†±ÊËò÷øDEFGHIKLúõúðëáÚÕÊ¿ºá°¦°¡œºœ°—œ—œ’œ’’ˆˆ—ƒ{skgkhÎ:jhÎ:UhKz}h”6W\hKz}h,†\ h,†\ hÆ~£\ hßJ\ hƒKV\ h2Vû\ hö \ h…Nk\hÐ?(hì>`5\hÐ?(h…Nk5\ h,x2\hÐ?(h2Vû5>*\hÐ?(hä, 5>*\ hä, \ hÐ?(5\hÐ?(h,x25\ hA%ú\ hÛ ö\ hì>`\ h½ \\(DEFGHJKMNPQSTpqrsvwz{Ä!+÷÷÷õðõðõðõðõëéõõäõäõääõßßßgd"cgd[3…$a$gd
c+dhgd”6WLNOQRTUlmnorstvwxz{’•—ÃÓÔרÙÜÝäæôõöùû/âh‘m25H*hÙ>âh‘m25 h‘m25h‘m2hÄ4žh‘m2CJ H*aJ h‘m2CJ H*aJ *â"è"ê"î"ð"#:#F#H#J#L#N#P#R#®#°#ô#ö#+,>?[\uvúøúøúúúøøøøøøóøóøóøóøóøóøëø$a$gdì^‰gdì^‰gd
[Figure: IEEE 802 standards in relation to the OSI model. Recovered labels: OSI Model (Upper Layers, Data Link Layer, Physical Layer); IEEE Standard (Logical Link Control (LLC); Medium Access Control (MAC), e.g. Ethernet MAC, 802.4 Token Bus MAC, 802.5 Token Ring MAC, 802.6 MAN MAC; Token Bus, Token Ring and MAN Physical Layers); Transmission Medium; Scope of IEEE 802.]

[Figure: Network topology diagram. Recovered labels: VSAT; DVB Receiver; Modem; CISCO 2600 Router (S1); CISCO 2950 Switches (S2–S9).]

Figure 2.17 Different Ways to Study an Engineering System [59, p. 4]
[Figure: Recovered labels: System; Experiment with actual system; Experiment with a model of the system; Physical model; Mathematical model; Analytical solution; Simulation.]