--- 1/draft-ietf-teas-actn-framework-02.txt 2017-02-02 04:13:35.036504546 -0800 +++ 2/draft-ietf-teas-actn-framework-03.txt 2017-02-02 04:13:35.104506149 -0800 @@ -1,20 +1,20 @@ TEAS Working Group Daniele Ceccarelli (Ed) Internet Draft Ericsson Intended status: Informational Young Lee (Ed) -Expires: June 2017 Huawei +Expires: August 2017 Huawei - December 22, 2016 + February 2, 2017 Framework for Abstraction and Control of Traffic Engineered Networks - draft-ietf-teas-actn-framework-02 + draft-ietf-teas-actn-framework-03 Abstract Traffic Engineered networks have a variety of mechanisms to facilitate the separation of the data plane and control plane. They also have a range of management and provisioning protocols to configure and activate network resources. These mechanisms represent key technologies for enabling flexible and dynamic networking. @@ -40,74 +40,72 @@ Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html. - This Internet-Draft will expire on January 22, 2017. + This Internet-Draft will expire on August 2, 2017. Copyright Notice - Copyright (c) 2016 IETF Trust and the persons identified as the + Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. 
Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction...................................................3 1.1. Terminology...............................................6 2. Business Model of ACTN.........................................9 2.1. Customers.................................................9 2.2. Service Providers........................................10 - 2.3. Network Providers........................................12 + 2.3. Network Providers........................................11 3. ACTN Architecture.............................................12 3.1. Customer Network Controller..............................14 3.2. Multi Domain Service Coordinator.........................15 3.3. Physical Network Controller..............................16 3.4. ACTN Interfaces..........................................17 - 4. VN Creation Process...........................................19 - 5. Access Points and Virtual Network Access Points...............20 - 5.1. Dual homing scenario.....................................22 - 6. End Point Selection and Mobility..............................23 - 6.1. End Point Selection......................................23 - 6.2. Pre-Planned End Point Migration..........................24 - 6.3. On the Fly End Point Migration...........................25 + 4. VN Creation Process...........................................20 + 4.1. VN Creation Example......................................20 + 5. Access Points and Virtual Network Access Points...............22 + 5.1. Dual homing scenario.....................................25 + 6. End Point Selection Based On Network Status...................26 + 6.1. Pre-Planned End Point Migration..........................27 + 6.2. On the Fly End Point Migration...........................28 - 7. 
Manageability Considerations..................................25 - 7.1. Policy...................................................26 - 7.2. Policy applied to the Customer Network Controller........26 - 7.3. Policy applied to the Multi Domain Service Coordinator...27 - 7.4. Policy applied to the Physical Network Controller........27 - 8. Security Considerations.......................................28 - 8.1. Interface between the Application and Customer Network - Controller (CNC)..............................................29 - 8.2. Interface between the Customer Network Controller and Multi - Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)...29 - 8.3. Interface between the Multi Domain Service Coordinator and - Physical Network Controller (PNC), MDSC-PNC Interface (MPI)...30 - 9. References....................................................30 - 9.1. Informative References...................................30 - 10. Contributors.................................................31 - Authors' Addresses...............................................32 + 7. Manageability Considerations..................................28 + 7.1. Policy...................................................28 + 7.2. Policy applied to the Customer Network Controller........29 + 7.3. Policy applied to the Multi Domain Service Coordinator...29 + 7.4. Policy applied to the Physical Network Controller........30 + 8. Security Considerations.......................................30 + 8.1. Interface between the Customer Network Controller and Multi + Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)...31 + 8.2. Interface between the Multi Domain Service Coordinator and + Physical Network Controller (PNC), MDSC-PNC Interface (MPI)...32 + 9. References....................................................32 + 9.1. Informative References...................................32 + 10. 
Contributors.................................................33 + Authors' Addresses...............................................34 1. Introduction Traffic Engineered networks have a variety of mechanisms to facilitate separation of data plane and control plane including distributed signaling for path setup and protection, centralized path computation for planning and traffic engineering, and a range of management and provisioning protocols to configure and activate network resources. These mechanisms represent key technologies for enabling flexible and dynamic networking. @@ -189,21 +187,21 @@ topology. Such a view may be specific to a service, the set of consumed resources, or to a particular customer. Network abstraction refers to presenting a customer with a view of the operator's network in such a way that the links and nodes in that view constitute an aggregation or abstraction of the real resources in the operator's network in a way that is independent of the underlying network technologies, capabilities, and topology. The customer operates an abstract network as if it was their own network, but the operational commands are mapped onto the underlying - network through orchestration. + network through domains coordination. The customer controller for a virtual or abstract network is envisioned to support many distinct applications. This means that there may be a further level of virtualization that provides a view of resources in the customer's virtual network for use by an individual application. The ACTN framework described in this document facilitates: . Abstraction of the underlying network resources to higher-layer @@ -221,22 +219,21 @@ network. . The possibility of providing a customer with a virtualized network. . A virtualization/mapping network function that adapts the customer's requests for control of the virtual resources that have been allocated to the customer to control commands applied to the underlying network resources. 
Such a function performs the necessary mapping, translation, isolation and - security/policy enforcement, etc. This function is often - referred to as orchestration. + security/policy enforcement, etc. . The presentation to customers of networks as a virtualized topology via open and programmable interfaces. This allows for the recursion of controllers in a customer-provider relationship. 1.1. Terminology The following terms are used in this document. Some of them are newly defined, some others reference existing definition: @@ -271,21 +268,24 @@ abstraction may be applied recursively, so a link in a topology may be created by applying slicing or abstraction on the links in the underlying topology. While most links are point-to- point, connecting just two nodes, the concept of a multi-access link exists where more than two nodes are collectively adjacent and data sent on the link by one node will be equally delivered to all other nodes connected by the link. . PNC: A Physical Network Controller is a domain controller that is responsible for controlling devices or NEs under its direct - control. + control. This can be an SDN controller, a Network Management + System (NMS), an Element Management System (EMS), an active PCE + or any other mean to dynamically control a set of nodes and + that is implementing an NBI compliant with ACTN specification. . PNC domain: A PNC domain includes all the resources under the control of a single PNC. It can be composed of different routing domains and administrative domains, and the resources may come from different layers. The interconnection between PNC domains can be a link or a node. _______ Border Link _______ _( )================( )_ _( )_ _( )_ @@ -318,21 +318,21 @@ up/release/modify). These changes will result in subsequent LSP management at the operator's level. o VN View: a. The VN can be seen as set of end-to-end tunnels from a customer point of view, where each tunnel is referred as a VN member. 
Each VN member can then be formed by recursive slicing or abstraction of paths in underlying networks. Such end-to-end tunnels may - comprise of customer end points, access links, intra + comprise of customer end points, access links, intra- domain paths, and inter-domain links. In this view VN is thus a set of VN members. b. The VN can also be seen as a topology comprising of physical, sliced, and abstract nodes and links. The nodes in this case include physical customer end points, border nodes, and internal nodes as well as abstracted nodes. Similarly the links include physical access links, inter-domain links, and intra-domain links as well as abstract links. The abstract nodes @@ -414,25 +414,24 @@ resources. They may also have the ability to modify their service parameters within the scope of their virtualized environments. The primary focus of ACTN is Advanced Customers. As customers are geographically spread over multiple network provider domains, they have to interface to multiple providers and may have to support multiple virtual network services with different underlying objectives set by the network providers. To enable these customers to support flexible and dynamic applications they need to control their allocated virtual network resources in a dynamic - fashion, and that means that they need a view of the - topology that spans all of the network providers. - - Customers of a given service provider can in turn offer a service to - other customers in a recursive way. + fashion, and that means that they need a view of the topology that + spans all of the network providers. Customers of a given service + provider can in turn offer a service to other customers in a + recursive way. 2.2. Service Providers Service providers are the providers of virtual network services to their customers. Service providers may or may not own physical network resources (i.e, may or may not be network providers as described in Section 2.3). 
When a service provider is the same as the network provider, this is similar to existing VPN models applied to a single provider. This approach works well when the customer maintains a single interface with a single provider. When customer @@ -537,44 +536,20 @@ customer service-related information into virtual network service operations in order to seamlessly operate virtual networks while meeting a customer's service requirements. In the context of ACTN, service/virtual service coordination includes a number of service orchestration functions such as multi-destination load balancing, guarantees of service quality, bandwidth and throughput. It also includes notifications for service fault and performance degradation and so forth. - The virtual services that are coordinated under ACTN can be split - into two categories: - - . Service-aware Connectivity Services: This category includes all - the network service operations used to provide connectivity - between customer end-points while meeting policies and service - related constraints. The data model for this category would - include topology entities such as virtual nodes, virtual links, - adaptation and termination points and service-related entities - such as policies and service related constraints. (See Section - 4.2.2) - - . Network Function Virtualization (NFV) Services: These kinds of - service are usually set up in NFV (e.g. cloud) providers and - require connectivity between a customer site and the NFV - provider site (e.g., a data center). These NFV services may - include a security function like a firewall, a traffic - optimizer, and the provisioning of storage or computation - capacity. In these cases the customer does not care whether the - service is implemented in one data center or another. This - allows the network provider divert customer requests to the - most suitable data center. This is also known as the "end - points mobility" case (see Section 4.2.3). 
- The types of controller defined in the ACTN architecture are shown in Figure 3 below and are as follows: . CNC - Customer Network Controller . MDSC - Multi Domain Service Coordinator . PNC - Physical Network Controller Figure 3 also shows the following interfaces: . CMI - CNC-MPI Interface @@ -592,23 +567,23 @@ | MDSC | +-----------------------+ / | \ ------------- |MPI I/F ------------- / | \ +-------+ +-------+ +-------+ | PNC | | PNC | | PNC | +-------+ +-------+ +-------+ | GMPLS / | / \ | trigger / | / \ - -------- ---- +-----+ +-----+ \ - ( ) ( ) | PNC | | PCE | \ - - - ( Phys ) +-----+ +-----+ ----- + -------- ---- | / \ + ( ) ( ) | / \ + - - ( Phys ) | / ----- ( GMPLS ) (Netw) | / ( ) ( Physical ) ---- | / ( Phys. ) ( Network ) ----- ----- ( Net ) - - ( ) ( ) ----- ( ) ( Phys. ) ( Phys ) -------- ( Net ) ( Net ) ----- ----- Figure 3: ACTN Control Hierarchy @@ -630,95 +605,95 @@ that issues connectivity requests and the Physical Network Controllers (PNCs) that manage the physical network resources. The MDSC can be collocated with the PNC, especially in those cases where the service provider and the network provider are the same entity. The internal system architecture and building blocks of the MDSC are out of the scope of ACTN. Some examples can be found in the Application Based Network Operations (ABNO) architecture [RFC7491] and the ONF SDN architecture [ONF-ARCH]. - The MDSC is the only building block of the architecture that is able - to implement all four ACTN main functions, i.e., multi domain + The MDSC is the only building block of the architecture that can + implement all four ACTN main functions, i.e., multi domain coordination, virtualization/abstraction, customer mapping/translation, and virtual service coordination. 
The first two functions of the MDSC, namely, multi domain coordination and virtualization/abstraction are referred to as network - control/orchestration functions while the last two functions, - namely, customer mapping/translation and virtual service - coordination are referred to as service control/orchestration - functions. + control/coordination functions while the last two functions, namely, + customer mapping/translation and virtual service coordination are + referred to as service control/coordination functions. The key point of the MDSC (and of the whole ACTN framework) is detaching the network and service control from underlying technology to help the customer express the network as desired by business needs. The MDSC envelopes the instantiation of the right technology and network control to meet business criteria. In essence it controls and manages the primitives to achieve functionalities as - desired by CNC + desired by the CNC. A hierarchy of MDSCs can be foreseen for scalability and - administrative choices as shown in Figure 4. + administrative choices. In this case another interface needs to be + defined, the MMI (MDSC-MDSC interface) as shown in Figure 4. +-------+ +-------+ +-------+ | CNC-A | | CNC-B | | CNC-C | +-------+ +-------+ +-------+ \ | / - ---------- | ---------- + ---------- |-CMI I/F ----------- \ | / +-----------------------+ | MDSC | +-----------------------+ / | \ - ---------- | ----------- + ---------- |-MMI I/F ----------- / | \ +----------+ +----------+ +--------+ | MDSC | | MDSC | | MDSC | +----------+ +----------+ +--------+ - | / | / \ + | / |-MPI I/F / \ | / | / \ +-----+ +-----+ +-----+ +-----+ +-----+ | PNC | | PNC | | PNC | | PNC | | PNC | +-----+ +-----+ +-----+ +-----+ +-----+ Figure 4: Controller recursiveness In order to allow for multi-domain coordination a 1:N relationship must be allowed between MDSCs and between MDSCs and PNCs (i.e. 1 - parent MDSC and N child MDSC or 1 MDSC and N PNCs). 
In the case - where there is a hierarchy of MDSCs, the interface above the top - MDSC (i.e., CMI) and the interface below the bottom MDSCs (i.e., - SBI) remain the same. The recursion of MDSCs in the middle layers - within this hierarchy of MDSCs may take place. + parent MDSC and N child MDSC or 1 MDSC and N PNCs). + + In the case where there is a hierarchy of MDSCs, the interface above + the top MDSC (i.e., CMI) and the interface below the bottom MDSCs + (i.e., SBI) remain the same. The recursion of MDSCs in the middle + layers within this hierarchy of MDSCs may take place via the MMI. + Please see Section 3.4 for details of the ACTN interfaces. In addition to that, it could also be possible to have an M:1 relationship between MDSCs and PNC to allow for network resource partitioning/sharing among different customers not necessarily connected to the same MDSC (e.g., different service providers). 3.3. Physical Network Controller - The Physical Network Controller (PNC) is in charge of configuring - the network elements, monitoring the physical topology of the - network, and passing information about the topology (either raw or - abstracted) to the MDSC. + The Physical Network Controller (PNC) oversees the configuration of + the network elements, monitoring the topology (physical or virtual) + of the network, and passing information about the topology (either + raw or abstracted) to the MDSC. The internal architecture of the PNC, its building blocks, and the way it controls its domain are out of the scope of ACTN. Some examples can be found in the Application Based Network Operations (ABNO) architecture [RFC7491] and the ONF SDN architecture [ONF- ARCH] The PNC, in addition to being in charge of controlling the physical network, is able to implement two of the four main ACTN main functions: multi domain coordination and virtualization/abstraction function. - A hierarchy of PNCs can be foreseen for scalability and - administrative choices. 3.4.
ACTN Interfaces To allow virtualization and multi domain coordination, the network has to provide open, programmable interfaces, through which customer applications can create, replace and modify virtual network resources and services in an interactive, flexible and dynamic fashion while having no impact on other customers. Direct customer control of transport network elements and virtualized services is not perceived as a viable proposition for transport network @@ -777,33 +752,60 @@ . Interface B: The CNC-MDSC Interface (CMI) is an interface between a CNC and an MDSC. It is used to request the creation of network resources, topology or services for the applications. Note that all service related information conveyed via Interface A (i.e., specific service properties, including service type, topology, bandwidth, and constraint information) needs to be transparently carried over this interface. The MDSC may also report potential network topology availability if queried for current capability from the CNC. + The CMI is the interface with the highest level of abstraction, + where the Virtual Networks are modelled and presented to the + customer/CNC. Most of the information over this interface is + technology agnostic, even if in some cases it should be + possible to explicitly request for a VN to be created at a + given layer in the network (e.g. ODU VN or MPLS VN). . Interface C: The MDSC-PNC Interface (MPI) is an interface between an MDSC and a PNC. It communicates the creation requests for new connectivity or for bandwidth changes in the physical network. In multi-domain environments, the MDSC needs to establish multiple MPIs, one for each PNC, as there is one - PNC responsible for control of each domain. + PNC responsible for control of each domain. 
The MPI could have + different degrees of abstraction and present an abstracted + topology hiding technology specific aspects of the network or + convey technology specific parameters to allow for path + computation at the MDSC level. Please refer to CCAMP Transport + NBI work for the latter case [Transport NBI]. . Interface D: The provisioning interface for creating forwarding state in the physical network, requested via the Physical Network Controller. - The interfaces within the ACTN scope are B and C. + The interfaces within the ACTN scope are B and C while interfaces A + and D are out of the scope of ACTN and are only shown in Figure 5 to + give a complete context of ACTN. + As previously stated in Section 3.2 there might be a third interface + in ACTN scope, the MMI. The MMI is a special case of the MPI and + behaves similarly to an MPI to support general functions performed + by the MDSCs such as abstraction function and provisioning function. + From an abstraction point of view, the top level MDSC which + interfaces with the CNC operates on a higher level of abstraction + (i.e., less granular level) than the lower level MDSCs. As such, the + MMI carries more abstract TE information than the MPI. + + Please note that for all three interfaces, when technology specific + information needs to be included, this information will be carried + as add-ons on top of the general abstract topology. From a general + topology abstraction standpoint, all interfaces are still recursive + in nature. 4. VN Creation Process The provider can present different level of network abstraction to the customer, spanning from one extreme (say "black") where nothing except the Access Points (APs) is shown to the other extreme (say "white") where an actual network topology is shown to the customer. There are shades of "gray" in between where a number of abstract links and nodes can be shown. @@ -821,72 +823,142 @@ customer can only see the APs of the network.
Implementation: In the case of black topology abstraction, the customers can ask for connectivity with given constraints/SLA between the APs and LSPs/tunnels created by the provider to satisfy the request. What the customer sees is only that his CEs are connected with a given SLA. In the case of grey/white topology the customer creates his own LSPs accordingly to the topology that was presented to him. +4.1. VN Creation Example + + This section illustrates how a VN creation process is conducted over + a hierarchy of MDSCs via MMIs and MPIs, which is shown in Figure 6. + + +-----+ + | CNC | CNC wants to create a VN + +-----+ between CE A and CE B + | + | + +-----------------------+ + | MDSC 1 | --o-o---o-o-- + +-----------------------+ + / \ + .. .. / \ .. .. + ( ) ( ) +--------+ +--------+ ( ) ( ) +--(o--o)-(o--o)-- | MDSC 2 | | MDSC 3 | --(o--o)-(o--o)-- + ( ) ( ) +--------+ +--------+ ( ) ( ) + .. .. / \ / \ .. .. + / \ / \ + +-----+ +-----+ +-----+ +-----+ + |PNC 1| |PNC 2| |PNC 3| |PNC 4| + +-----+ +-----+ +-----+ +-----+ + | | | | + ... ... ... ... + ( ) ( ) ( ) ( ) + CE A o------(o-o-o)--(o-o-o)--------(o-o-o)--(o-o-o)----o CE B + ( ) ( ) ( ) ( ) + ... ... ... ... + + Domain 1 Domain 2 Domain 3 Domain 4 + + Figure 6: Illustration of topology abstraction granularity levels in + the MDSC Hierarchy + + In the example depicted in Figure 6, there are four domains under + control of the respective PNCs, namely, PNC 1, PNC 2, PNC3 and PNC4. + Assume that MDSC 2 is controlling PNC 1 and PNC 2 while MDSC 3 is + controlling PNC 3 and PNC 4. Let us assume that each of the PNCs + provides a grey topology abstraction in which to present only border + nodes and border links. The abstract topology MDSC 2 would operate + is shown on the left side of MDSC 2 in Figure 6. It is basically a + combination of the two topologies the PNCs (PNC 1 and PNC 2) + provide. Likewise, the abstract topology MDSC 3 would operate is + shown on the right side of MDSC 3 in Figure 6. 
Both MDSC 2 and MDSC + 3 provide a grey topology abstraction in which each PNC domain is + presented as one virtual node to its top level MDSC 1. Then the MDSC + 1 combines these two topologies updated by MDSC 2 and MDSC 3 to + create the abstraction topology on which it operates. MDSC 1 sees + the whole four-domain network as four virtual nodes connected via + virtual links. This illustrates the point discussed in Section 3.4: + The top level MDSC operates on a higher level of abstraction (i.e., + less granular level) than the lower level MDSCs. As such, the MMI + carries more abstract TE information than the MPI. + In the process of creating a VN, the same principle applies. Let us + assume that a customer wants to create a virtual network that + connects its CE A and CE B as depicted in Figure 6. Upon receipt of + this request generated by the CNC, MDSC 1, based on its abstract + topology at hand, determines that CE A is connected to a virtual + node in domain 1 and CE B is connected to a virtual node in domain + 4. MDSC 1 further determines that domain 2 and domain 3 are + interconnected to domain 1 and domain 4, respectively. MDSC 1 then + partitions the original VN request from the CNC into two separate VN + requests and makes a VN creation request to MDSC 2 and MDSC 3, + respectively. MDSC 1, for instance, makes a VN request to MDSC 2 to + connect two virtual nodes. When MDSC 2 receives this VN request from + MDSC 1, it further partitions it into two separate requests to PNC 1 + and PNC 2, respectively. This illustration shows that the VN + creation request process takes place recursively over the MMI and + MPI. + 5. Access Points and Virtual Network Access Points In order not to share unwanted topological information between the customer domain and provider domain, a new entity is defined which is referred to as the Access Point (AP). See the definition of AP in Section 1.1. A customer node will use APs as the end points for the request of - VNs as shown in Figure 6.
+ VNs as shown in Figure 7. ------------- ( ) - - +---+ X ( ) Z +---+ |CE1|---+----( )---+---|CE2| +---+ | ( ) | +---+ AP1 - - AP2 ( ) ------------- - Figure 6: APs definition customer view + Figure 7: APs definition customer view - Let's take as an example a scenario shown in Figure 6. CE1 is + Let's take as an example a scenario shown in Figure 7. CE1 is connected to the network via a 10Gb link and CE2 via a 40Gb link. Before the creation of any VN between AP1 and AP2 the customer view can be summarized as shown in Table 1: +----------+------------------------+ |End Point | Access Link Bandwidth | +-----+----------+----------+-------------+ |AP id| CE,port | MaxResBw | AvailableBw | +-----+----------+----------+-------------+ | AP1 |CE1,portX | 10Gb | 10Gb | +-----+----------+----------+-------------+ | AP2 |CE2,portZ | 40Gb | 40Gb | +-----+----------+----------+-------------+ Table 1: AP - customer view - On the other hand, what the provider sees is shown in Figure 7. + On the other hand, what the provider sees is shown in Figure 8. ------- ------- ( ) ( ) - - - - W (+---+ ) ( +---+) Y -+---( |PE1| Dom.X )----( Dom.Y |PE2| )---+- | (+---+ ) ( +---+) | AP1 - - - - AP2 ( ) ( ) ------- ------- - Figure 7: Provider view of the AP + Figure 8: Provider view of the AP Which results in a summarization as shown in Table 2. +----------+------------------------+ |End Point | Access Link Bandwidth | +-----+----------+----------+-------------+ |AP id| PE,port | MaxResBw | AvailableBw | +-----+----------+----------+-------------+ | AP1 |PE1,portW | 10Gb | 10Gb | +-----+----------+----------+-------------+ @@ -924,34 +996,34 @@ +---------+----------+----------+-------------+ Table 3: AP and VNAP - provider view after VN creation 5.1. Dual homing scenario Often there is a dual homing relationship between a CE and a pair of PEs. This case needs to be supported by the definition of VN, APs and VNAPs. 
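The AP/VNAP bandwidth accounting shown in Tables 1-3 can be sketched in a few lines of Python. This is an illustrative model only, not part of the ACTN specification; the class and the VNAP identifier format are invented for the example, loosely following the "VNAP1.9" naming used in Table 3.

```python
# Toy model (not from the draft) of the AP/VNAP bookkeeping in
# Tables 1-3: creating a VN member against an AP spawns a VNAP and
# reduces the AP's available bandwidth on the access link.

class AccessPoint:
    def __init__(self, ap_id, port, max_bw_gb):
        self.ap_id = ap_id
        self.port = port
        self.max_bw = max_bw_gb      # MaxResBw column
        self.available = max_bw_gb   # AvailableBw column
        self.vnaps = {}              # VNAP id -> reserved bandwidth (Gbps)

    def create_vnap(self, vn_id, bw_gb):
        # Refuse the VN member if the access link cannot carry it.
        if bw_gb > self.available:
            raise ValueError("insufficient bandwidth on %s" % self.ap_id)
        vnap_id = "VNAP%s.%d" % (self.ap_id.lstrip("AP"), vn_id)
        self.vnaps[vnap_id] = bw_gb
        self.available -= bw_gb
        return vnap_id

ap1 = AccessPoint("AP1", "PE1,portW", 10)
vnap = ap1.create_vnap(9, 1)    # VN 9 reserves 1 Gbps, as in Table 3
print(vnap, ap1.available)      # -> VNAP1.9 9
```

A real implementation would track the per-VNAP AvailableBw and the dual-homing exclusions of Table 4 as well; the sketch only shows the basic reservation arithmetic.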
Suppose CE1 connected to two different PEs in the operator domain via AP1 and AP2 and that the customer needs 5Gbps of - bandwidth between CE1 and CE2. This is shown in Figure 8. + bandwidth between CE1 and CE2. This is shown in Figure 9. ____________ AP1 ( ) AP3 -------(PE1) (PE3)------- W/ ( ) \X +---+/ ( ) \+---+ |CE1| ( ) |CE2| +---+\ ( ) /+---+ Y\ ( ) /Z -------(PE2) (PE4)------- AP2 (____________) - Figure 8: Dual homing scenario + Figure 9: Dual homing scenario In this case, the customer will request for a VN between AP1, AP2 and AP3 specifying a dual homing relationship between AP1 and AP2. As a consequence no traffic will flow between AP1 and AP2. The dual homing relationship would then be mapped against the VNAPs (since other independent VNs might have AP1 and AP2 as end points). The customer view would be shown in Table 4. +----------+------------------------+ @@ -964,97 +1036,100 @@ +---------+----------+----------+-------------+-----------+ |AP2 |CE1,portY | 40Gbps | 35Gbps | | | -VNAP2.9| | 5Gbps | N.A. | VNAP1.9 | +---------+----------+----------+-------------+-----------+ |AP3 |CE2,portX | 40Gbps | 35Gbps | | | -VNAP3.9| | 5Gbps | N.A. | NONE | +---------+----------+----------+-------------+-----------+ Table 4: Dual homing - customer view after VN creation -6. End Point Selection and Mobility +6. End Point Selection Based On Network Status - Virtual networks could be used as the infrastructure to connect a - number of sites belonging to a customer or to provide connectivity - between customer sites and Virtualized Network Functions (VNF) such - as virtualized firewalls, virtual Broadband Network Gateway (vBNG), - storage, or computational functions. + A further advanced application of ACTN is in the case of Data Center + selection, where the customer requires the Data Center selection to + be based on the network status; this is referred to as Multi- + Destination in [ACTN-REQ]. 
In terms of ACTN, a CNC could request a + connectivity service (virtual network) between a set of source APs + and destination APs and leave it up to the network (MDSC) to decide + which source and destination access points are to be used to set up + the connectivity service (virtual network). The candidate list of + source and destination APs is decided by a CNC (or an entity outside + of ACTN) based on certain factors which are outside the scope of + ACTN. -6.1. End Point Selection + Based on the AP selection as determined and returned by the network + (MDSC), the CNC (or an entity outside of ACTN) should further take + care of any subsequent actions such as orchestration or service + setup requirements. These further actions are outside the scope of + ACTN. - A VNF could be deployed in different places (e.g., data centers A, - B, or C in Figure 9), but the VNF provider (that is, the ACTN - customer) doesn't know which is the best site in which to install - the VNF from a network point of view (e.g., to optimize for low - latency). For example, it is possible to compute a path minimizing - the delay between AP1 and AP2, but the customer doesn't know if the - path with minimum delay is towards DC-A, DC-B, or DC-C. + Consider a case as shown in Figure 10, where three data centers are + available, but the customer requires the data center selection to be + based on the network status and the connectivity service setup + between AP1 (CE1) and one of the destination APs (AP2 (DC-A), + AP3 (DC-B), and AP4 (DC-C)). The MDSC (in coordination with PNCs) + would select the best destination AP based on the constraints, + optimization criteria, policies, etc., and set up the connectivity + service (virtual network).
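The multi-destination selection just described can be sketched as follows. This is a toy model, not from the draft: the single delay metric and the figures in it are invented, and a real MDSC would weigh constraints, optimization criteria, and policies rather than one scalar.

```python
# Hypothetical sketch of multi-destination AP selection: the CNC
# supplies the candidate destination APs, and the MDSC picks the one
# with the best network metric (here, lowest source->destination
# delay). AP names and delays are illustrative only.

def select_destination(src_ap, candidates, delay_ms):
    """Return the candidate AP with the lowest src->dst delay."""
    return min(candidates, key=lambda dst: delay_ms[(src_ap, dst)])

delay_ms = {
    ("AP1", "AP2"): 12.0,   # CE1 -> DC-A
    ("AP1", "AP3"): 7.5,    # CE1 -> DC-B
    ("AP1", "AP4"): 20.0,   # CE1 -> DC-C
}
best = select_destination("AP1", ["AP2", "AP3", "AP4"], delay_ms)
print(best)  # -> AP3 (DC-B offers the lowest delay in this example)
```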
               -------           -------
              (       )         (       )
             -         -       -         -
  +---+     (           )     (           )     +----+
  |CE1|---+----( Domain X )----( Domain Y )---+---|DC-A|
  +---+   | (           )     (           ) |   +----+
         AP1 -         -       -         - AP2
              (       )         (       )
               ---+---           ---+---
              AP3 |             AP4 |
               +----+            +----+
               |DC-B|            |DC-C|
               +----+            +----+

-              Figure 9: End point selection
-
-  In this case the VNF provider (that is, the ACTN customer) should
-  be allowed to ask for a VN between AP1 and a set of end points. The
-  list of end points is supplied by the VNF provider. When the end
-  point is identified the connectivity can be instantiated and a
-  notification can be sent to the VNF provider for the instantiation
-  of the VNF.

+       Figure 10: End point selection based on network status

-6.2. Pre-Planned End Point Migration
+6.1. Pre-Planned End Point Migration

-  A premium SLA for VNF service provisioning consists of offering of
-  a protected VNF instantiated on two or more sites and with a hot
-  stand-by protection mechanism. In this case the VN should be
-  provided so as to switch from one end point to another upon a
-  trigger from the VNF provider or from an automatic failure
-  detection mechanism. An example is provided in Figure 10 where the
-  request from the VNF provider is for connectivity with resiliency
-  between CE1 and a VNF with primary instantiation in DC-A and a
-  protection instance in DC-C.

+  Further, in the case of Data Center selection, the customer could
+  request that a backup DC be selected so that, in case of failure,
+  another DC site could provide hot stand-by protection. As shown in
+  Figure 10, DC-C is selected as a backup for DC-A. Thus, the VN
+  should be set up by the MDSC to include primary connectivity
+  between AP1 (CE1) and AP2 (DC-A) as well as protection
+  connectivity between AP1 (CE1) and AP4 (DC-C).
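   The primary-plus-protection service described above can be thought
   of as a single VN request carrying two connectivity members. The
   following minimal Python sketch uses hypothetical field names
   (ACTN does not define this encoding) purely to make the structure
   of such a request concrete.

```python
# Illustrative sketch (not part of ACTN): a VN request with a primary
# member and a hot-standby protection member, as the MDSC would be
# asked to set up. Field names are assumptions for illustration.

def build_protected_vn(source_ap, primary_ap, backup_ap, bandwidth_gbps):
    """Return a VN request carrying both the working connectivity and
    the protection connectivity."""
    return {
        "members": [
            {"src": source_ap, "dst": primary_ap, "role": "primary",
             "bandwidth_gbps": bandwidth_gbps},
            {"src": source_ap, "dst": backup_ap, "role": "hot-standby",
             "bandwidth_gbps": bandwidth_gbps},
        ],
    }

# CE1 (AP1) with primary DC-A (AP2) and hot-standby DC-C (AP4).
vn = build_protected_vn("AP1", "AP2", "AP4", 5)
```

   Carrying both members in one request lets the MDSC place the two
   connections with awareness of each other (for example, on disjoint
   paths), which a pair of independent requests would not guarantee.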
               -------           -------
              (       )         (       )
             -         - __    -         -
  +---+     (           )     (           )     +----+
  |CE1|---+----( Domain X )----( Domain Y )---+---|DC-A|
  +---+   | (           )     (           ) |   +----+
         AP1 -         -       -         - AP2     |
              (       )         (       )          |
               ---+---           ---+---           |
              AP3 |             AP4 |         HOT STANDBY
                                 +----+            |
                                 |DC-C|<-------------
                                 +----+

-            Figure 10: Preplanned endpoint migration
+            Figure 10: Pre-planned end point migration

-6.3. On the Fly End Point Migration
+6.2. On the Fly End Point Migration

-  On the fly end point migration concept is similar to the end point
-  selection one. The idea is to give the provider not only the list
-  of sites where the VNF can be installed, but also a mechanism to
-  notify changes in the network that have impacts on the SLA. After
-  an handshake with the customer controller/applications, the
-  bandwidth in network would be moved accordingly with the moving of
-  the VNFs.

+  Compared to pre-planned end point migration, on the fly end point
+  selection is dynamic in that the migration is not pre-planned but
+  decided based on network conditions. Under this scenario, the MDSC
+  would monitor the network (based on the VN SLA) and notify the CNC
+  in cases where some other destination AP would be a better choice
+  based on the network parameters. The CNC should then instruct the
+  MDSC to update the VN with the new AP, if required.

7. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources
   and provide a set of mechanisms to allow clients to request
   virtual connectivity across server network resources. Because ACTN
   will support multiple clients, each with its own view of and
   control of the server network, the network operator will need to
   partition (or "slice") their network resources and manage those
   resources accordingly.

@@ -1168,26 +1243,23 @@

   request and control of resources, confidentiality of the
   information, and availability of function should all be critical
   security considerations when deploying and operating ACTN
   platforms.
   Several distributed ACTN functional components are required, and
   as a rule implementations should consider encrypting the data that
   flows between components, especially when they are implemented at
   remote nodes, regardless of whether these are external or internal
   network interfaces.

-  The ACTN security discussion is further split into three specific
+  The ACTN security discussion is further split into two specific
   categories described in the following sub-sections:

-  . Interface between the Application and Customer Network
-    Controller (CNC)
-
   . Interface between the Customer Network Controller and Multi
     Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

   . Interface between the Multi Domain Service Coordinator and
     Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

   From a security and reliability perspective, ACTN may encounter
   many risks such as malicious attacks and rogue elements attempting
   to connect to various ACTN components. Furthermore, some ACTN
   components represent a single point of failure and threat vector,

@@ -1195,51 +1267,21 @@

   communication between different ACTN components. The conclusion is
   that all protocols used to realize the ACTN framework should have
   rich security features, and customer, application and network data
   should be stored in encrypted data stores. Additional security
   risks may still exist. Therefore, discussion and applicability of
   specific security functions and protocols will be better described
   in documents that are use case and environment specific.

-8.1. Interface between the Application and Customer Network
-     Controller (CNC)
-
-  This is the external interface between the application and CNC. The
-  application request for virtual network service connectivity may
-  also contain data about the application, requested network
-  connectivity and the service that is eventually delivered to the
-  customer. It is likely to use external protocols and must be
-  appropriately secured using session encryption.
-
-  As highlighted in the policy section (see Section 7), it may be
-  necessary to enable different policies based on identity, and to
-  manage the application requests of virtual network services. Since
-  access will be largely be through external protocols, and
-  potentially across the public Internet, AAA-based controls should
-  also be used.
-
-  Several additional challenges face the CNC, as the Application to
-  CNC interface will be used by multiple applications. These include:
-
-  . A need to verify the credibility of customer applications.
-
-  . Malicious applications may tamper with or perform unauthorized
-    operations, such as obtaining sensitive information, obtaining
-    higher rights, or request changes to existing virtual network
-    services.
-
-  . The ability to recognize and respond to spoofing attacks or
-    buffer overflow attacks will also need to be considered.
-
-8.2. Interface between the Customer Network Controller and Multi Domain
+8.1. Interface between the Customer Network Controller and Multi Domain
     Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

   The role of the MDSC is to detach the network and service control
   from the underlying technology in order to help the customer
   express the network as desired by business needs. It should be
   noted that data stored by the MDSC will reveal details of the
   virtual network services, and which CNC and application is
   consuming the resource. The data stored must therefore be
   considered as a candidate for encryption.

   CNC access rights to an MDSC must be managed. MDSC resources must be
   Use of AAA-based mechanisms would also provide role-based
   authorization methods, so that only authorized CNCs may access the
   different functions of the MDSC.

-8.3. Interface between the Multi Domain Service Coordinator and
+8.2. Interface between the Multi Domain Service Coordinator and
     Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

   The function of the Physical Network Controller (PNC) is to
   configure network elements, provide performance and monitoring
   functions of the physical elements, and export physical topology
   (full, partial, or abstracted) to the MDSC.

   Where the MDSC must interact with multiple (distributed) PNCs, a
   PKI-based mechanism is suggested, such as building a TLS or HTTPS
   connection between the MDSC and PNCs, to ensure trust between the

@@ -1301,35 +1343,38 @@

              Networking: A Perspective from within a Service
              Provider Environment", RFC 7149, March 2014.

   [RFC7926]  Farrel, A. (Ed.), "Problem Statement and Architecture
              for Information Exchange between Interconnected
              Traffic-Engineered Networks", RFC 7926, July 2016.

   [GMPLS]    Mannie, E., et al., "Generalized Multi-Protocol Label
              Switching (GMPLS) Architecture", RFC 3945, October
              2004.

-  [ONF-ARCH] Open Networking Foundation, "OpenFlow Switch
-             Specification Version 1.4.0 (Wire Protocol 0x05)",
-             October 2013.
+  [ONF-ARCH] Open Networking Foundation, "SDN architecture", Issue
+             1, TR-502, June 2014.

   [RFC7491]  King, D., and Farrel, A., "A PCE-based Architecture for
              Application-based Network Operations", RFC 7491, March
              2015.

+  [Transport NBI] Busi, I., et al., "Transport North Bound Interface
+             Use Cases", draft-tnbidt-ccamp-transport-nbi-use-cases,
+             work in progress.

10. Contributors

   Adrian Farrel
   Old Dog Consulting
   Email: adrian@olddog.co.uk

-  Italo Bush
+  Italo Busi
   Huawei
   Email: Italo.Busi@huawei.com

   Khuzema Pithewan
   Infinera
   Email: kpithewan@infinera.com

Authors' Addresses

   Daniele Ceccarelli (Editor)