2. Overview of Use Cases

2.3. Dual-Homing

A further complication may be added to the client-server relationship described in Section 2.2 by considering what happens when a client network domain is attached to more than one domain in the server network or has two points of attachment to a server network domain.

Figure 6 shows an example of this for a VPN.

[Figure 6: Dual-Homing in a Virtual Private Network -- the original ASCII-art diagram is not reproduced here. It shows client VPN sites (Domain A containing the source "Src", Domain B, Domain C containing the destination "Dst", Domain D, and Domain E) attached at points x1 through x9 to two core server network domains, Domain X and Domain Y.]

2.4. Requesting Connectivity

The relationship between domains can be entirely under the control of management processes, dynamically triggered by the client network, or some hybrid of these cases. In the management case, the server network may be asked to establish a set of LSPs to provide client network connectivity. In the dynamic case, the client network may make a request to the server network exerting a range of controls over the paths selected in the server network. This range extends from no control (i.e., a simple request for connectivity), through a set of constraints (latency, path protection, etc.), up to and including full control of the path and resources used in the server network (i.e., the use of explicit paths with label subobjects).

There are various models by which a server network can be asked to set up the connections that support a service provided to the client network. These requests may come from management systems, directly from the client network control plane, or through an intermediary broker such as the Virtual Network Topology Manager (VNTM) [RFC5623].

The trigger that causes the request to the server network is also flexible. It could be that the client network discovers a pressing need for server network resources (such as the desire to provision an end-to-end connection in the client network or severe congestion on a specific path), or it might be that a planning application has considered how best to optimize traffic in the client network or how to handle a predicted traffic demand.

In all cases, the relationship between client and server networks is subject to policy so that server network resources are under the administrative control of the operator of the server network and are only used to support a client network in ways that the server network operator approves.

As just noted, connectivity requests issued to a server network may include varying degrees of constraint upon the choice of path that the server network can implement. A sketch of how such a request might be represented follows the list below.

o "Basic provisioning" is a simple request for connectivity. The only constraints are the end points of the connection and the capacity (bandwidth) that the connection will support for the client network. In the case of some server networks, even the bandwidth component of a basic provisioning request is superfluous because the server network has no facility to vary bandwidth and can offer connectivity only at a default capacity.

o "Basic provisioning with optimization" is a service request that indicates one or more metrics that the server network must

optimize in its selection of a path. Metrics may be hop count, path length, summed TE metric, jitter, delay, or any number of technology-specific constraints.

o "Basic provisioning with optimization and constraints" enhances the optimization process to apply absolute constraints to

functions of the path metrics. For example, a connection may be requested that optimizes for the shortest path but in any case requests that the end-to-end delay be less than a certain value.

Equally, optimization may be expressed in terms of the impact on the network. For example, a service may be requested in order to leave maximal flexibility to satisfy future service requests.

o "Fate diversity requests" ask the server network to provide a path that does not use any network resources (usually links and nodes) that share fate (i.e., can fail as the result of a single event) as the resources used by another connection. This allows the client network to construct protection services over the server network -- for example, by establishing links that are known to be fate diverse. The connections that have diverse paths need not share end points.

o "Provisioning with fate sharing" is the exact opposite of fate diversity. In this case, two or more connections are

requested to follow the same path in the server network. This may be requested, for example, to create a bundled or aggregated link in the client network where each component of the client-layer composite link is required to have the same server network properties (metrics, delay, etc.) and the same failure characteristics.

o "Concurrent provisioning" enables the interrelated connection requests described in the previous two bullets to be enacted through a single, compound service request.

o "Service resilience" requests that the server network provide connectivity for which the server network takes responsibility to recover from faults. The resilience may be achieved through the use of link-level protection, segment protection, end-to-end protection, or recovery mechanisms.

2.4.1. Discovering Server Network Information

Although the topology and resource availability information of a server network may be hidden from the client network, the service request interface may support features that report details about the services and potential services that the server network supports.

o Reporting of path details, service parameters, and issues such as path diversity of LSPs that support deployed services allows the client network to understand to what extent its requests were satisfied. This is particularly important when the requests were made as "best effort".

o A server network may support requests of the form "If I were to ask you for this service, would you be able to provide it?" -- that is, a service request that does everything except actually provision the service.
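
As a purely hypothetical illustration of the second point, and reusing the ConnectivityRequest sketch from Section 2.4, a "query-only" mode of the service request interface might look as follows; the class, method names, and the commit flag are assumptions for this sketch, not defined interface parameters.

   class ServerNetworkInterface:
       """Hypothetical northbound interface offered by the server network."""

       def compute(self, request: ConnectivityRequest) -> dict:
           # Placeholder: a real implementation would run policy-constrained
           # path computation against the server network's TED.
           return {"feasible": True, "path": [request.src, request.dst]}

       def provision(self, path: list) -> None:
           # Placeholder for actual LSP establishment.
           pass

       def submit(self, request: ConnectivityRequest,
                  commit: bool = True) -> dict:
           # With commit=False, everything is done except actually
           # provisioning the service: "If I were to ask you for this,
           # could you provide it?"
           result = self.compute(request)
           if commit and result["feasible"]:
               self.provision(result["path"])
           return result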

3. Problem Statement

The problem statement presented in this section is as much about the issues that may arise in any solution (and so have to be avoided) and the features that are desirable within a solution, as it is about the actual problem to be solved.

The problem can be stated very simply and with reference to the use cases presented in the previous section.

A mechanism is required that allows TE path computation in one domain to make informed choices about the TE capabilities and exit points from the domain when signaling an end-to-end TE path that will extend across multiple domains.

Thus, the problem is one of information collection and presentation, not about signaling. Indeed, the existing signaling mechanisms for TE LSP establishment are likely to prove adequate [RFC4726] with the possibility of minor extensions. Similarly, TE information may currently be distributed in a domain by TE extensions to one of the two IGPs as described in OSPF-TE [RFC3630] and ISIS-TE [RFC5305], and TE information may be exported from a domain (for example, northbound) using link-state extensions to BGP [RFC7752].

An interesting annex to the problem is how the path is made available for use. For example, in the case of a client-server network, the path established in the server network needs to be made available as a TE link to provide connectivity in the client network.

3.1. Policy and Filters

A solution must be amenable to the application of policy and filters. That is, the operator of a domain that is sharing information with another domain must be able to apply controls to what information is shared. Furthermore, the operator of a domain that has information shared with it must be able to apply policies and filters to the received information.

Additionally, the path computation within a domain must be able to weight the information received from other domains according to local policy such that the resultant computed path meets the local operator’s needs and policies rather than those of the operators of other domains.

3.2. Confidentiality

A feature of the policy described in Section 3.1 is that an operator of a domain may desire to keep confidential the details about its internal network topology and loading. This information could be construed as commercially sensitive.

Although it is possible that TE information exchange will take place only between parties that have significant trust, there are also use cases (such as the VPN supported over multiple server network domains described in Section 2.2) where information will be shared between domains that have a commercial relationship but a low level of trust.

Thus, it must be possible for a domain to limit the shared information to only that which the computing domain needs to know, with the understanding that the less information that is made available, the more likely it is that the result will be a less optimal path and/or more crankback events.

3.3. Information Overload

One reason that networks are partitioned into separate domains is to reduce the set of information that any one router has to handle. This also applies to the volume of information that routing protocols have to distribute.

Over the years, routers have become more sophisticated, with greater processing capabilities and more storage; the control channels on which routing messages are exchanged have become higher capacity; and the routing protocols (and their implementations) have become more robust. Thus, some of the arguments in favor of dividing a network into domains may have been reduced. Conversely, however, the size of networks continues to grow dramatically with a consequent increase in the total amount of routing-related information available.

Additionally, in this case, the problem space spans two or more networks.

Any solution to the problems voiced in this document must be aware of the issues of information overload. If the solution was to simply share all TE information between all domains in the network, the effect from the point of view of the information load would be to create one single flat network domain. Thus, the solution must deliver enough information to make the computation practical (i.e., to solve the problem) but not so much as to overload the receiving domain. Furthermore, the solution cannot simply rely on the policies and filters described in Section 3.1 because such filters might not always be enabled.

3.4. Issues of Information Churn

As LSPs are set up and torn down, the available TE resources on links in the network change. In order to reliably compute a TE path through a network, the computation point must have an up-to-date view of the available TE resources. However, collecting this information may result in considerable load on the distribution protocol and churn in the stored information. In order to deal with this problem even in a single domain, updates are sent at periodic intervals or whenever there is a significant change in resources, whichever happens first.

Consider, for example, that a TE LSP may traverse ten links in a network. When the LSP is set up or torn down, the resources available on each link will change, resulting in a new advertisement of the link’s capabilities and capacity. If the arrival rate of new LSPs is relatively fast, and the hold times relatively short, the network may be in a constant state of flux. Note that the problem here is not limited to churn within a single domain, since the information shared between domains will also be changing.

Furthermore, the information that one domain needs to share with another may change as the result of LSPs that are contained within or cross the first domain but that are of no direct relevance to the domain receiving the TE information.

In packet networks, where the capacity of an LSP is often a small fraction of the resources available on any link, this issue is partially addressed by the advertising routers. They can apply a threshold so that they do not bother to update the advertisement of available resources on a link if the change is less than a configured percentage of the total (or, alternatively, the remaining) resources. The updated information in that case will be disseminated based on an update interval rather than a resource change event.
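
A minimal Python sketch of this thresholding behavior is shown below; the function name and the 10% default are assumptions chosen for illustration and are not taken from any IGP specification.

   def should_readvertise(total_bw: float,
                          last_advertised_bw: float,
                          current_bw: float,
                          threshold_pct: float = 10.0,
                          relative_to_remaining: bool = False) -> bool:
       """Decide whether a change in available link bandwidth is large
       enough to trigger a new TE advertisement.

       The change is compared against a configured percentage of either
       the total link resources or (alternatively) the previously
       advertised remaining resources, as described in the text above.
       Smaller changes are left to the normal periodic update interval.
       """
       base = last_advertised_bw if relative_to_remaining else total_bw
       if base <= 0:
           return True  # nothing sensible to compare against; update anyway
       change_pct = abs(current_bw - last_advertised_bw) / base * 100.0
       return change_pct >= threshold_pct

   # Example: a 10 Gb/s link previously advertised with 6 Gb/s available
   # now has 5.8 Gb/s available -- a 2% change of the total, so no
   # immediate readvertisement is triggered.
   assert should_readvertise(10.0, 6.0, 5.8) is False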

In non-packet networks, where link resources are physical switching resources (such as timeslots or wavelengths), the capacity of an LSP may more frequently be a significant percentage of the available link resources. Furthermore, in some switching environments, it is necessary to achieve end-to-end resource continuity (such as using the same wavelength on the whole length of an LSP), so it is far more desirable to keep the TE information held at the computation points up to date. Fortunately, non-packet networks tend to be quite a bit smaller than packet networks, the arrival rates of non-packet LSPs are much lower, and the hold times are considerably longer. Thus, the information churn may be sustainable.

3.5. Issues of Aggregation

One possible solution to the issues raised in other subsections of this section is to aggregate the TE information shared between domains. Two aggregation mechanisms are often considered (both are sketched in code after the list below):

- Virtual node model. In this view, the domain is aggregated as if it was a single node (or router/switch). Its links to other domains are presented as real TE links, but the model assumes that any LSP entering the virtual node through a link can be routed to leave the virtual node through any other link (although recent work on "limited cross-connect switches" may help with this problem [RFC7579]).

- Virtual link model. In this model, the domain is reduced to a set of edge-to-edge TE links. Thus, when computing a path for an LSP that crosses the domain, a computation point can see which domain entry points can be connected to which others, and with what TE attributes.
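
As a purely illustrative sketch (the class names and attribute sets are assumptions for this example, not defined representations), the two models might be captured as follows:

   from dataclasses import dataclass
   from typing import Dict, List

   @dataclass
   class VirtualNode:
       """Virtual node model: the whole domain collapses to one node.

       Connectivity between any pair of external links is implicitly
       assumed, which is exactly where the "limited cross-connect"
       inaccuracy described in the text can creep in.
       """
       domain_id: str
       external_links: List[str]        # links to neighboring domains

   @dataclass
   class VirtualLink:
       """Virtual link model: one edge-to-edge TE link per entry/exit pair."""
       entry_point: str
       exit_point: str
       te_attributes: Dict[str, float]  # e.g., {"bandwidth": ..., "delay": ...}

   # A domain under the virtual link model is a set of edge-to-edge links,
   # each with its own TE attributes, so a computation point can see which
   # entry points connect to which exit points and with what characteristics.
   VirtualLinkModel = List[VirtualLink]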

Part of the nature of aggregation is that information is removed from the system. This can cause inaccuracies and failed path computation.

For example, in the virtual node model there might not actually be a TE path available between a pair of domain entry points, but the model lacks the sophistication to represent this "limited cross-connect capability" within the virtual node. On the other hand, in the virtual link model it may prove very hard to aggregate multiple link characteristics: for example, there may be one path available with high bandwidth, and another with low delay, but this does not mean that the connectivity should be assumed or advertised as having both high bandwidth and low delay.

The trick to this multidimensional problem, therefore, is to aggregate in a way that retains as much useful information as possible while removing the data that is not needed. An important part of this trick is a clear understanding of what information is actually needed.

It should also be noted in the context of Section 3.4 that changes in the information within a domain may have a bearing on what aggregated data is shared with another domain. Thus, while the data shared is reduced, the aggregation algorithm (operating on the routers responsible for sharing information) may be heavily exercised.

4. Architecture

4.1. TE Reachability

As described in Section 1.1, TE reachability is the ability to reach a specific address along a TE path. The knowledge of TE reachability enables an end-to-end TE path to be computed.

In a single network, TE reachability is derived from the Traffic Engineering Database (TED), which is the collection of all TE information about all TE links in the network. The TED is usually built from the data exchanged by the IGP, although it can be supplemented by configuration and inventory details, especially in transport networks.

In multi-network scenarios, TE reachability information can be described as "You can get from node X to node Y with the following TE attributes." For transit cases, nodes X and Y will be edge nodes of the transit network, but it is also important to consider the information about the TE connectivity between an edge node and a specific destination node. TE reachability may be qualified by TE attributes such as TE metrics, hop count, available bandwidth, delay, and shared risk.

TE reachability information can be exchanged between networks so that nodes in one network can determine whether they can establish TE paths across or into another network. Such exchanges are subject to a range of policies imposed by the advertiser (for security and administrative control) and by the receiver (for scalability and stability).
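
The following Python sketch shows one way such a statement of TE reachability and its policy-governed exchange might be modeled; the class, function, and attribute names are assumptions for illustration and do not define any advertisement format.

   from dataclasses import dataclass, field
   from typing import Callable, List, Optional

   @dataclass
   class TEReachability:
       """Statement that from_node can reach to_node with these TE attributes."""
       from_node: str                   # e.g., an edge node of a transit network
       to_node: str                     # another edge node or a specific destination
       te_metric: Optional[int] = None
       hop_count: Optional[int] = None
       available_bandwidth: Optional[float] = None
       delay_ms: Optional[float] = None
       shared_risk_groups: List[int] = field(default_factory=list)

   def exchange(entries: List[TEReachability],
                advertiser_policy: Callable[[TEReachability], bool],
                receiver_policy: Callable[[TEReachability], bool]
                ) -> List[TEReachability]:
       # The advertiser filters for security and administrative control;
       # the receiver filters for scalability and stability.
       exported = [e for e in entries if advertiser_policy(e)]
       return [e for e in exported if receiver_policy(e)]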

4.2. Abstraction, Not Aggregation

Aggregation is the process of synthesizing from available information. Thus, the virtual node and virtual link models described in Section 3.5 rely on processing the information available within a network to produce the aggregate representations of links and nodes that are presented to the consumer. As described in Section 3, dynamic aggregation is subject to a number of pitfalls.

In order to distinguish the architecture described in this document from the previous work on aggregation, we use the term "abstraction" in this document. The process of abstraction is one of applying policy to the available TE information within a domain, to produce selective information that represents the potential ability to connect across the domain.

Abstraction does not offer all possible connectivity options (refer to Section 3.5) but does present a general view of potential connectivity. Abstraction may have a dynamic element but is not intended to keep pace with the changes in TE attribute availability within the network.

Thus, when relying on an abstraction to compute an end-to-end path, the process might not deliver a usable path. That is, there is no actual guarantee that the abstractions are current or feasible.

Although abstraction uses available TE information, it is subject to policy and management choices. Thus, not all potential connectivity will be advertised to each client network. The filters may depend on commercial relationships, the risk of disclosing confidential information, and concerns about what use is made of the connectivity that is offered.
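
As a minimal sketch, and assuming the hypothetical TEReachability class from the example in Section 4.1, abstraction can be thought of as a per-client policy function applied to the domain's TE information:

   from typing import Callable, List

   # A per-client policy is a predicate over candidate connectivity; it may
   # differ with the commercial relationship, confidentiality concerns, and
   # the intended use of the offered connectivity.
   Policy = Callable[[TEReachability], bool]

   def abstract(domain_te_info: List[TEReachability],
                client_policy: Policy) -> List[TEReachability]:
       """Apply policy to the TE information available within the domain to
       produce selective information representing the potential ability to
       connect across the domain.

       The result describes potential connectivity only; it is not kept in
       lockstep with changing TE attributes, so a path computed over it is
       not guaranteed to be usable.
       """
       return [entry for entry in domain_te_info if client_policy(entry)]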

4.2.1. Abstract Links

An abstract link is a measure of the potential to connect a pair of points with certain TE parameters. That is, it is a path and its characteristics in the server network. An abstract link represents the possibility of setting up an LSP, and LSPs may be set up over the abstract link.

When looking at a network such as the network shown in Figure 7, the link from CN1 to CN4 may be an abstract link. It is easy to advertise it as a link by abstracting the TE information in the server network, subject to policy.

The path (i.e., the abstract link) represents the possibility of establishing an LSP from client network edge to client network edge across the server network. There is not necessarily a one-to-one relationship between the abstract link and the LSP, because more than one LSP could be set up over the path.

Since the client network nodes do not have visibility into the server network, they must rely on abstraction information delivered to them by the server network. That is, the server network will report on the potential for connectivity.
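
A small illustrative sketch of this relationship follows; the class and field names are assumptions made for this example, not defined structures.

   from dataclasses import dataclass, field
   from typing import Dict, List

   @dataclass
   class AbstractLink:
       """The potential to connect two points across the server network."""
       end_a: str                        # e.g., server network edge node CN1
       end_b: str                        # e.g., server network edge node CN4
       te_parameters: Dict[str, float]   # advertised subject to policy
       server_path: List[str] = field(default_factory=list)  # hidden from the client
       lsps: List[str] = field(default_factory=list)          # zero, one, or many LSPs

   # More than one LSP may be set up over the same abstract link: the link
   # represents the possibility of connectivity, not a single connection.
   link = AbstractLink("CN1", "CN4", {"bandwidth": 10.0, "delay": 3.0})
   link.lsps.extend(["lsp-1", "lsp-2"])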

4.2.2. The Abstraction Layer Network

Figure 7 introduces the abstraction layer network. This construct separates the client network resources (nodes C1, C2, C3, and C4, and the corresponding links) and the server network resources (nodes CN1, CN2, CN3, and CN4, and the corresponding links). Additionally, the architecture introduces an intermediary network layer called the abstraction layer. The abstraction layer contains the client network
