Adaptive Multipath Routing architecture

Our proposed routing architecture, called AMR, is an application running on a centralized OpenFlow controller. It is designed to discover the topology, compute multiple paths of maximum flow capacity between nodes, and dynamically alter the forwarding tables of the switches to set up loop-free multipath forwarding and routing.

A multipath routing module (MRM) is responsible for computing multiple paths and sets them up using several features of the OpenFlow switch: multiple tables, group entries, and meter entries. Ingress flows arriving on the downstream ports of an edge node are transparently mapped to backbone-level paths with PBB (Provider Backbone Bridging) encapsulation to provide in-network multipaths. Any higher-layer traffic (IP, or TCP and UDP at L4, for example) is thus transparently forwarded over multiple paths by these L2 multipath capabilities.
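As a concrete illustration of the group-entry mechanism, the following is a minimal sketch assuming a Ryu-based OpenFlow 1.3 controller; the function name install_select_group and the weighted_ports argument are hypothetical, and AMR's actual table/group/meter layout is more elaborate than this single entry.

```python
def install_select_group(dp, group_id, weighted_ports):
    # Minimal sketch (hypothetical helper, Ryu/OpenFlow 1.3): install a
    # SELECT group whose buckets split traffic across several output
    # ports in proportion to the computed per-path weights.
    # dp: a Ryu Datapath; weighted_ports: list of (out_port, weight) pairs.
    ofp = dp.ofproto
    parser = dp.ofproto_parser
    buckets = [parser.OFPBucket(weight=w,
                                watch_port=ofp.OFPP_ANY,
                                watch_group=ofp.OFPG_ANY,
                                actions=[parser.OFPActionOutput(p)])
               for p, w in weighted_ports]
    dp.send_msg(parser.OFPGroupMod(dp, ofp.OFPGC_ADD,
                                   ofp.OFPGT_SELECT, group_id, buckets))

# Example: split a path's traffic 60/40 across output ports 2 and 3.
# install_select_group(dp, group_id=1, weighted_ports=[(2, 60), (3, 40)])
```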

A DCN is expected to operate without interruption, even in the presence of node or link failures. AMR accomplishes three key tasks, described in the following subsections: adaptation to link failures, multipath routing computation, and path setup.

Adaptation to link failures

AMR adapts to network state changes, such as link-up and link-down events, as follows.

In an OpenFlow-enabled network, a central controller controls all nodes via the OpenFlow protocol. A topology discovery module (TDM) running on top of the controller connects to the OpenFlow switches and automatically discovers the topology by listening to LLDP (Link Layer Discovery Protocol, IEEE Standard 802.1AB-2009) packets; it then triggers link-up events on link discovery and link-down events on link failure.
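As an illustration, a TDM of this kind could be written as a small application consuming the controller's LLDP-driven topology events. The sketch below assumes the Ryu controller (started with --observe-links); the class name, the shared queue Q, and the event tuples are illustrative assumptions, not the thesis's exact implementation.

```python
import queue

from ryu.base import app_manager
from ryu.controller.handler import set_ev_cls
from ryu.topology import event

# Shared queue of link events consumed by the MRM (illustrative).
Q = queue.Queue()

class TopologyDiscoveryModule(app_manager.RyuApp):
    # Translates Ryu's LLDP-based topology events into the link-up and
    # link-down events that drive the routing module.

    @set_ev_cls(event.EventLinkAdd)
    def link_up(self, ev):
        s, d = ev.link.src, ev.link.dst
        Q.put(('up', s.dpid, s.port_no, d.dpid, d.port_no))

    @set_ev_cls(event.EventLinkDelete)
    def link_down(self, ev):
        s, d = ev.link.src, ev.link.dst
        Q.put(('down', s.dpid, s.port_no, d.dpid, d.port_no))
```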

Two threads, called Main and adaptiveRouting, are started when the MRM launches on the controller. Three data structures are used: edgeNodes, a list storing the edge nodes; multiPDict, a five-dimensional dictionary (destination t, source s, node u, in port, out port) storing outgoing bandwidth weights; and outPortDict, a two-dimensional dictionary (node u, node v) storing the outgoing port for each u-v pair. The Main thread listens to link events and tracks changes by enqueuing them in a queue Q. The adaptiveRouting thread waits for a link event in Q; when one arrives, it dequeues all pending link events and updates outPortDict and a topology graph G. It then calls the multiPathRouting(G) procedure, which computes and sets up multipaths according to G by calling the multiPathCompute(G,s,t) and multiPathSetup(t,s) procedures for every s-t pair among the discovered edge nodes. G thus represents the topology as it was when a routing computation started; any network change occurring during the computation is queued in Q and processed in the next iteration, as sketched below.
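The following minimal Python sketch shows how these threads and data structures could fit together; applyLinkEvent is a hypothetical helper that applies a single event to G and outPortDict, and multiPathCompute and multiPathSetup are only stubbed by name here (the former is expanded in the next subsection).

```python
import itertools
import queue
from collections import defaultdict

Q = queue.Queue()                # link events enqueued by the Main thread
edgeNodes = []                   # edge nodes discovered so far
multiPDict = defaultdict(float)  # (t, s, u, in_port, out_port) -> bandwidth weight
outPortDict = {}                 # (u, v) -> outgoing port of u toward v

def applyLinkEvent(G, ev):
    # Hypothetical helper: apply one link-up/down event to G and outPortDict.
    ...

def multiPathCompute(G, s, t):
    # Fill multiPDict with per-interface rates for the s-t pair (Section 3.3.2).
    ...

def multiPathSetup(t, s):
    # Push the corresponding flow, group, and meter entries to the switches.
    ...

def multiPathRouting(G):
    # Recompute and install multipaths for every ordered pair of edge nodes.
    for s, t in itertools.permutations(edgeNodes, 2):
        multiPathCompute(G, s, t)
        multiPathSetup(t, s)

def adaptiveRouting():
    G = {}                       # topology graph as seen at computation time
    while True:
        ev = Q.get()             # block until at least one link event arrives
        try:
            while True:          # drain every pending event before recomputing
                applyLinkEvent(G, ev)
                ev = Q.get_nowait()
        except queue.Empty:
            pass
        multiPathRouting(G)      # changes arriving meanwhile stay queued in Q
```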

Multipath routing computation

The most straightforward algorithms for computing high-throughput paths are maximum-flow algorithms such as Ford-Fulkerson and Edmonds-Karp (Cormen et al., 2009). These algorithms produce a set of links and capacities that maximizes the aggregated capacity between two nodes. In a packet network, however, packets that traverse different paths may reach the receiver out of order. Such reordering can be misinterpreted as loss, spuriously triggering the TCP retransmission mechanism, whose timeout is based on the packet round-trip time (RTT), even though no packet was actually lost. To reduce the possibility of out-of-order delivery, the lengths of the multiple paths should be constrained and the intended path of each flow needs to be maintained. The multiple paths between two nodes can be limited such that the difference between each path length and the shortest path does not exceed R hops (e.g., R=1). To preserve the intended path of a flow across multiple paths, we have developed an algorithm, based on Edmonds-Karp, that computes the outgoing interface rate for each incoming interface of every node on the paths, instead of a single outgoing interface rate per node; a sketch follows.
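The sketch below illustrates this idea in Python, reusing the multiPathCompute name from the previous subsection. It is a hop-limited, Edmonds-Karp-style computation under simplifying assumptions: the capacity map cap, the parameter R, and the rates bookkeeping are illustrative, and rates pushed onto reverse (cancelling) residual edges are not re-credited, so this is not the thesis's exact algorithm.

```python
from collections import defaultdict, deque

def multiPathCompute(cap, s, t, R=1):
    # Hop-limited Edmonds-Karp sketch: augment only along paths at most
    # R hops longer than the shortest s-t path, and accumulate a rate
    # per (node, in-neighbor, out-neighbor) triple.
    # cap: dict mapping each directed link (u, v) to its capacity.
    residual = defaultdict(float, cap)
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v)
        adj[v].add(u)                       # residual reverse edges

    def bfs():                              # shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    def extract(parent):                    # edges of the s-t path found
        path, u = [], t
        while parent[u] is not None:
            path.append((parent[u], u))
            u = parent[u]
        path.reverse()
        return path

    rates = defaultdict(float)              # (u, in-nbr, out-nbr) -> rate
    parent, limit = bfs(), None
    while parent is not None:
        path = extract(parent)
        if limit is None:
            limit = len(path) + R           # shortest path fixes the budget
        if len(path) > limit:               # stop beyond shortest + R hops
            break
        f = min(residual[e] for e in path)  # bottleneck capacity
        prev = None
        for u, v in path:
            residual[(u, v)] -= f
            residual[(v, u)] += f
            rates[(u, prev, v)] += f        # incoming -> outgoing interface rate
            prev = u
        parent = bfs()
    return rates

# Example: two disjoint 2-hop paths of capacity 10 and 5 between s and t.
# rates = multiPathCompute({('s','a'): 10, ('a','t'): 10,
#                           ('s','b'): 5,  ('b','t'): 5}, 's', 't')
```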

Table of Contents

INTRODUCTION
0.1 Context
0.2 Problem statement
0.2.1 Multipath bandwidth aggregation in DCN
0.2.2 Bandwidth reservation in DCI
0.2.3 Optimization in carrier network
0.3 Outline of the thesis
CHAPTER 1 LITERATURE REVIEW
1.1 Data Center Federation
1.2 Multipath in DCN
1.3 Bandwidth reservation in DCI
1.4 Optimization in multi-layer carrier network
CHAPTER 2 OBJECTIVES AND GENERAL METHODOLOGY
2.1 Objectives of the research
2.2 General methodology
2.2.1 Adaptive multipath routing architecture for DCN
2.2.2 Bandwidth reservation framework for DCI
2.2.3 Optimization model for multi-layer carrier network
CHAPTER 3 OPENFLOW-BASED IN-NETWORK LAYER-2 ADAPTIVE MULTIPATH AGGREGATION IN DATA CENTERS
3.1 Introduction
3.2 Related work
3.3 Adaptive Multipath Routing architecture
3.3.1 Adaptation to link failures
3.3.2 Multipath routing computation
3.3.3 Path setup
3.4 Link selection
3.5 Flow mapping to a multipath
3.5.1 Address learning and PBB encapsulation
3.5.2 Multiple VNs and PBB decapsulation
3.6 Scalability in a large topology
3.7 Evaluation
3.7.1 Path aggregation for a single TCP session
3.7.2 The TCP’s CWND and segment sequence number
3.7.3 Dynamic adaptation to link and path failures
3.7.4 36 edge node topology
3.7.4.1 Bisection bandwidth
3.7.4.2 Forwarding table size
3.7.4.3 Convergence time
3.8 Conclusion
3.9 Acknowledgments
CHAPTER 4 SDN-BASED FAULT-TOLERANT ON-DEMAND AND IN-ADVANCE BANDWIDTH RESERVATION IN DATA CENTER INTERCONNECTS
4.1 Introduction
4.2 Related work
4.2.1 Bandwidth reservation architectures
4.2.2 Algorithms for bandwidth reservation
4.3 Problem description
4.4 Topology, time and reservation models
4.4.1 Topology model
4.4.2 Time model
4.4.3 Reservation model
4.5 Proposed solutions
4.5.1 Determining the available bandwidth and path computation (Solution for Problem P1)
4.5.1.1 Determining the available bandwidth of a link (Solution for Problem P1-1)
4.5.1.2 Path compute: ECMP-like multiple paths consideration (Solution for Problem P1-2)
4.5.2 Path setup and scalable forwarding (Solution for Problem P2)
4.5.2.1 Co-existence of reservation and best-effort traffic
4.5.2.2 Path setup
4.5.2.3 Tunnel assignments for scalable forwarding
4.5.3 Fault tolerances to (ReRoute on) link/path failures and end-host migrations (Solution for Problem P3)
4.5.4 SDN-based fault-tolerant bandwidth reservation (SFBR) architecture
4.6 Approach evaluation
4.6.1 Acceptance rates
4.6.2 Forwarding rules scalability
4.6.3 Link failure and migration handling
4.6.4 Affected reservation lookup efficiency
4.6.5 Best-effort versus reservation flows
4.7 Conclusion
4.8 Acknowledgments
CHAPTER 5 SDN-BASED OPTIMIZATION MODEL OF MULTI-LAYER TRANSPORT NETWORK DYNAMIC TRAFFIC ENGINEERING
5.1 Introduction
5.2 Related work
5.3 Traffic mapping in an OTN network
5.4 Modeling of the three-layer network
5.5 Optimization model
5.6 MLO heuristics
5.7 Experimental results
5.7.1 Topology
5.7.2 Demand
5.7.3 Cost values
5.7.4 Numerical results
5.7.4.1 Visualization of results
5.7.4.2 Three use cases
5.7.5 Heuristics results
5.8 Conclusion
5.9 Acknowledgments
CHAPTER 6 GENERAL DISCUSSIONS
6.1 Multipath in DCN
6.2 Bandwidth reservation in DCI
6.3 Optimization in multi-layer carrier network
6.4 Combination in general framework
CONCLUSION
