TEAS Working Group                              Daniele Ceccarelli (Ed)
Internet Draft                                                 Ericsson
Intended status: Informational                           Young Lee (Ed)
Expires: June 2017                                               Huawei

                                                      December 22, 2016

  Framework for Abstraction and Control of Traffic Engineered Networks




Abstract

   Traffic Engineered networks have a variety of mechanisms to
   facilitate the separation of the data plane and control plane.
   They also have a range of management and provisioning protocols to
   configure and activate network resources.  These mechanisms
   represent key technologies for enabling flexible and dynamic
   networking.
   Abstraction of network resources is a technique that can be applied
   to a single network domain or across multiple domains to create a
   single virtualized network that is under the control of a network
   operator or the customer of the operator that actually owns
   the network resources.

   This document provides a framework for Abstraction and Control of
   Traffic Engineered Networks (ACTN).

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on June 22, 2017.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.



Table of Contents

   1. Introduction...................................................3
      1.1. Terminology...............................................6
   2. Business Model of ACTN.........................................9
      2.1. Customers.................................................9
      2.2. Service Providers........................................10
      2.3. Network Providers........................................12
   3. ACTN Architecture.............................................12
      3.1. Customer Network Controller..............................14
      3.2. Multi Domain Service Coordinator.........................15
      3.3. Physical Network Controller..............................16
      3.4. ACTN Interfaces..........................................17
   4. VN Creation Process...........................................19
   5. Access Points and Virtual Network Access Points...............20
      5.1. Dual homing scenario.....................................22
   6. End Point Selection and Mobility..............................23
      6.1. End Point Selection......................................23
      6.2. Pre-Planned End Point Migration..........................24
      6.3. On the Fly End Point Migration...........................25
   7. Manageability Considerations..................................25
      7.1. Policy...................................................26
      7.2. Policy applied to the Customer Network Controller........26
      7.3. Policy applied to the Multi Domain Service Coordinator...27
      7.4. Policy applied to the Physical Network Controller........27
   8. Security Considerations.......................................28
      8.1. Interface between the Application and Customer Network
      Controller (CNC)..............................................29
      8.2. Interface between the Customer Network Controller and Multi
      Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)...29
      8.3. Interface between the Multi Domain Service Coordinator and
      Physical Network Controller (PNC), MDSC-PNC Interface (MPI)...30
   9. References....................................................30
      9.1. Informative References...................................30
   10. Contributors.................................................31
   Authors' Addresses...............................................32

1. Introduction

   Traffic Engineered networks have a variety of mechanisms to
   facilitate separation of data plane and control plane including
   distributed signaling for path setup and protection, centralized
   path computation for planning and traffic engineering, and a range
   of management and provisioning protocols to configure and activate
   network resources. These mechanisms represent key technologies for
   enabling flexible and dynamic networking.

   The term Traffic Engineered network is used in this document to
   refer to a network that uses any connection-oriented technology
   under the control of a distributed or centralized control plane to
   support dynamic provisioning of end-to-end connectivity.  Some
   examples of networks that are in scope of this definition are
   optical networks, MPLS Transport Profile (MPLS-TP) networks
   [RFC5654], and MPLS Traffic Engineering (MPLS-TE) networks.

   One of the main drivers for Software Defined Networking (SDN)
   [RFC7149] is a decoupling of the network control plane from the
   data plane.  This separation of the control plane from the data
   plane has been already achieved with the development of MPLS/GMPLS
   [GMPLS] and the Path Computation Element (PCE) [RFC4655] for
   TE-based networks.  One of the advantages of SDN is its logically
   centralized control regime that allows a global view of the
   underlying networks.  Centralized control in SDN helps improve
   network resource utilization compared with distributed network
   control.  For TE-based networks, PCE is essentially equivalent to a
   logically centralized path computation function.


   Three key aspects that need to be solved by SDN are:

     . Separation of service requests from service delivery so that
        the orchestration of a network is transparent from the point
        of view of the customer but remains responsive to the
        customer's services and business needs.

     . Network abstraction: As described in [RFC7926], abstraction is
        the process of applying policy to a set of information about a
        TE network to produce selective information that represents the
        potential ability to connect across the domain.  The process of
        abstraction presents the connectivity graph in a way that is
        independent of the underlying network technologies,
        capabilities, and topology so that it can be used to plan and
        deliver network services in a uniform way.

     . Coordination of resources across multiple domains and multiple
        layers to provide end-to-end services regardless of whether the
        domains use SDN or not.

   As networks evolve, the need to provide separated service
   request/orchestration and resource abstraction has emerged as a key
   requirement for operators.  In order to support multiple clients
   each with its own view of and control of the server network, a
   network operator needs to partition (or "slice") the network
   resources.  The resulting slices can be assigned to each client for
   guaranteed usage, which is a step further than shared use of common
   network resources.

   Furthermore, each network represented to a client can be built from
   abstractions of the underlying networks so that, for example, a
   link in the client's network is constructed from a path or
   collection of paths in the underlying network.

   We call the set of management and control functions used to provide
   these features Abstraction and Control of Traffic Engineered
   Networks (ACTN).

   Particular attention needs to be paid to the multi-domain case.
   ACTN can facilitate virtual network operation via the creation of a
   single virtualized network or a seamless service.  This supports
   operators in viewing and controlling different domains (at any
   dimension: applied technology, administrative zones, or
   vendor-specific technology islands) as a single virtualized
   network.

   Network virtualization refers to allowing the customers of network
   operators (see Section 2.1) to utilize a certain amount of network
   resources as if they own them and thus control their allocated
   resources with higher layer or application processes that enable
   the resources to be used in the most optimal way.  More flexible,
   dynamic customer control capabilities are added to the traditional
   VPN along with a customer-specific virtual network view.  Customers
   control a view of virtual network resources specifically allocated
   to each one of them.  This view is called an abstracted virtual
   network topology.  Such a view may be specific to a service, the
   set of consumed resources, or to a particular customer.

   Network abstraction refers to presenting a customer with a view of
   the operator's network in such a way that the links and nodes in
   that view constitute an aggregation or abstraction of the real
   resources in the operator's network in a way that is independent of
   the underlying network technologies, capabilities, and topology.
   The customer operates an abstract network as if it was their own
   network, but the operational commands are mapped onto the underlying
   network through orchestration.
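   As a purely illustrative sketch (not part of the ACTN specification,
   and with hypothetical link names throughout), the orchestration
   mapping described above can be pictured as translating a customer's
   command on an abstract link into commands on the underlying path
   from which that link was built:

   ```python
   # Hypothetical orchestration state: each abstract link in the
   # customer's view is realized by a path (a list of links) in the
   # operator's underlying network.
   abstract_to_underlay = {
       "abstract-link-1": ["linkA-B", "linkB-C"],
       "abstract-link-2": ["linkA-D", "linkD-C"],
   }

   def provision(abstract_link, bandwidth):
       """Translate a customer command on an abstract link into per-hop
       reservation commands on the underlying network."""
       hops = abstract_to_underlay[abstract_link]
       return [{"link": hop, "reserve": bandwidth} for hop in hops]

   # A single customer command fans out to every underlying hop.
   print(provision("abstract-link-1", 10))
   # [{'link': 'linkA-B', 'reserve': 10}, {'link': 'linkB-C', 'reserve': 10}]
   ```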

   The customer controller for a virtual or abstract network is
   envisioned to support many distinct applications.  This means that
   there may be a further level of virtualization that provides a view
   of resources in the customer's virtual network for use by an
   individual application.

   The ACTN framework described in this document facilitates:


     . Abstraction of the underlying network resources to higher-layer
        applications and customers [RFC7926].

     . Virtualization of particular underlying resources, whose
        selection criterion is the allocation of those resources to a
        particular customer, application or service [ONF-ARCH].

     . Slicing of infrastructure to meet specific customers' service
        requirements.


     . Creation of a virtualized environment allowing operators to
        view and control multi-domain networks as a single virtualized
        network.

     . The possibility of providing a customer with a virtualized
        network or services (totally hiding the network).


     . A virtualization/mapping network function that adapts the
        customer's requests for control of the virtual resources that
        have been allocated to the customer to control commands
        applied to the underlying network resources.  Such a function
        performs the necessary mapping, translation, isolation and
        security/policy enforcement, etc.  This function is often
        referred to as orchestration.


     . The presentation to customers of the networks as a virtualized
        topology via open and programmable interfaces.  This allows
        for the recursion of controllers in a customer-provider
        relationship.

1.1. Terminology

   The following terms are used in this document.  Some of them are
   newly defined; others reference existing definitions:
     . Node: A node is a vertex on the graph representation of a TE
        topology.  In a physical network, a node corresponds to a
        network element (NE).  In a sliced network, a node is some
        subset of the capabilities of a physical network element.  In
        an abstract network, a node (sometimes called an abstract
        node) is a representation as a single vertex in the topology
        of the abstract network of one or more nodes and their
        connecting links from the physical network.  The concept of a
        node represents the ability to connect from any access to the
        node (a link end) to any other access to that node, although
        "limited cross-connect capabilities" may also be defined to
        restrict this functionality.  Just as network slicing and
        network abstraction may be applied recursively, so a node in a
        topology may be created by applying slicing or abstraction on
        the nodes in the underlying topology.

     . Link: A link is an edge on the graph representation of a TE
        topology.  Two nodes connected by a link are said to be
        "adjacent" in the TE topology.  In a physical network, a link
        corresponds to a physical connection.  In a sliced topology, a
        link is some subset of the capabilities of a physical
        connection.  In an abstract network, a link (sometimes called
        an abstract link) is a representation as an edge in the
        topology of the abstract network of one or more links and the
        nodes they connect from the physical network.  Abstract links
        may be realized by Label Switched Paths (LSPs) across the
        physical network that may be pre-established or could be only
        potentially achievable.  Just as network slicing and network
        abstraction may be applied recursively, so a link in a
        topology may be created by applying slicing or abstraction on
        the links in the underlying topology.  While most links are
        point-to-point, connecting just two nodes, the concept of a
        multi-access link exists where more than two nodes are
        collectively adjacent and data sent on the link by one node
        will be equally delivered to all other nodes connected by the
        link.
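   The derivation of an abstract link from an underlying topology can
   be sketched as follows.  This is an illustrative example only (not
   part of the ACTN specification); the topology, node names, and the
   use of bottleneck bandwidth as the qualifying parameter are all
   hypothetical:

   ```python
   # Underlying domain topology: (node, node) -> available bandwidth (Gb/s).
   underlay = {
       ("A", "B"): 100, ("B", "C"): 40, ("A", "D"): 100, ("D", "C"): 100,
   }

   def neighbors(node):
       """Yield (neighbor, bandwidth) pairs for a node."""
       for (u, v), bw in underlay.items():
           if u == node:
               yield v, bw
           elif v == node:
               yield u, bw

   def widest_path_bw(src, dst, visited=frozenset()):
       """Best bottleneck bandwidth from src to dst (simple DFS)."""
       if src == dst:
           return float("inf")
       best = 0
       for nxt, bw in neighbors(src):
           if nxt in visited:
               continue
           sub = widest_path_bw(nxt, dst, visited | {src})
           best = max(best, min(bw, sub))
       return best

   # An abstract link between border nodes A and C advertises only the
   # potential to connect, here qualified with the achievable bandwidth.
   abstract_link = {"ends": ("A", "C"), "bandwidth": widest_path_bw("A", "C")}
   print(abstract_link)  # A-D-C supports 100 Gb/s, A-B-C only 40 Gb/s
   ```

   Because the abstract link records only the potential to connect,
   the internal structure of the domain (the choice between A-B-C and
   A-D-C) stays hidden from the higher layer.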

     . PNC: A Physical Network Controller is a domain controller that
        is responsible for controlling devices or NEs under its direct
        control.

     . PNC domain: A PNC domain includes all the resources under the
        control of a single PNC. It can be composed of different
        routing domains and administrative domains, and the resources
        may come from different layers. The interconnection between PNC
        domains can be a link or a node.

                     _______   Border Link    _______
                    _(       )================(       )_
                  _(           )_          _(           )_
                 (               )  ----  (               )
                 (      PNC      ) (    ) (      PNC      )
                 (    Domain X   )  ----  (    Domain Y   )
                 (_             _)        (_             _)
                   (_         _)   Border   (_         _)
                     (_______)      Node      (_______)

                      Figure 1: PNC Domain Borders

     . A Virtual Network (VN) is a customer view (typically a network
        slice) of the TE network.  It is presented by the provider as
        a set of physical and/or abstracted resources.  Depending on
        the agreement between client and provider, various VN
        operations and VN views are possible as follows:

          o VN Creation - VN could be pre-configured and created via
             offline negotiation between customer and provider.  In
             other cases, the VN could also be created dynamically
             based on a request from the customer with given SLA
             attributes which satisfy the customer's objectives.


          o Dynamic Operations - The VN could be further modified or
             deleted based on a customer request.  The customer can
             further act upon the virtual network resources to perform
             end-to-end tunnel management (set-up/release/modify).
             These changes will result in subsequent LSP management at
             the operator's level.


          o VN View:

               a. The VN can be seen as a set of end-to-end tunnels
                  from a customer point of view, where each tunnel is
                  referred to as a VN member.  Each VN member can then
                  be formed by recursive slicing or abstraction of
                  paths in underlying networks.  Such end-to-end
                  tunnels may comprise customer end points, access
                  links, intra-domain paths, and inter-domain links.
                  In this view, the VN is thus a set of VN members.

               b. The VN can also be seen as a topology comprising
                  physical, sliced, and abstract nodes and links.  The
                  nodes in this case include physical customer end
                  points, border nodes, and internal nodes as well as
                  abstracted nodes.  Similarly, the links include
                  access links, inter-domain links, and intra-domain
                  links as well as abstract links.  The abstract nodes
                  and links in this view can be pre-negotiated or
                  created dynamically.
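   The two VN views can be sketched side by side.  This is an
   illustrative example only (not part of the ACTN specification);
   all identifiers are hypothetical:

   ```python
   # View (a): the VN as a set of VN members (end-to-end tunnels).
   vn_members = [
       {"id": "vn-member-1", "src": "CE1", "dst": "CE2",
        "path": ["access-link-1", "intra-domain-path-X",
                 "inter-domain-link-XY", "intra-domain-path-Y",
                 "access-link-2"]},
   ]

   # View (b): the same VN as a topology of abstract nodes and links.
   vn_topology = {
       "nodes": ["CE1", "abstract-node-X", "abstract-node-Y", "CE2"],
       "links": [("CE1", "abstract-node-X"),
                 ("abstract-node-X", "abstract-node-Y"),
                 ("abstract-node-Y", "CE2")],
   }

   def member_endpoints(members):
       """Both views must agree on the customer end points."""
       return {(m["src"], m["dst"]) for m in members}

   print(member_endpoints(vn_members))  # {('CE1', 'CE2')}
   ```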

     . Abstraction: The process of applying policy to the available
        TE information within a domain, to produce selective
        information that represents the potential ability to connect
        across the domain.  Thus, abstraction does not necessarily
        offer all possible connectivity options, but it presents a
        general view of potential connectivity according to the
        policies that determine how the domain's administrator wants
        to allow the domain resources to be used [RFC7926].

     . Abstract Link: The term "abstract link" is defined in
        [RFC7926].  An abstract link represents the potential to
        connect between a pair of nodes.


     . Abstract Topology: The topology of abstract nodes and abstract
        links presented through the process of abstraction by a lower
        layer network for use by a higher layer network.

     . Access link: A link between a customer node and a provider
        node.

     . Inter-domain link: A link between domains managed by different
        PNCs.  The MDSC is in charge of managing inter-domain links.

     . Border node: A node whose interfaces belong to different
        domains.  It may be managed by different PNCs or by the MDSC.


     . Access Point (AP): An access point is defined on an access
        link.  It is a logical identifier shared between the customer
        and the provider, used to map the end points of the border
        node in both the customer and the provider network.  The AP
        can be used by the customer when requesting a VN service to
        the provider.  A number of parameters (e.g., available
        bandwidth) need to be associated with the AP to qualify it.


     . VN Access Point (VNAP): A VNAP is defined as the binding
        between an AP and a given VN.  It is used to identify the
        portion of the AP (and hence of the access link and/or
        inter-domain link) dedicated to a given VN.
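   The AP/VNAP relationship can be sketched as a simple bandwidth
   partition of the access link.  This is an illustrative example only
   (not part of the ACTN specification); identifiers and capacities
   are hypothetical:

   ```python
   # An AP on an access link, qualified by its available bandwidth (Gb/s).
   ap = {"id": "AP1", "available_bandwidth": 100}

   vnaps = []  # bindings between AP1 and individual VNs

   def bind_vnap(ap, vn_id, bandwidth):
       """Dedicate a portion of the AP's access link to a given VN."""
       used = sum(v["bandwidth"] for v in vnaps if v["ap"] == ap["id"])
       if used + bandwidth > ap["available_bandwidth"]:
           raise ValueError("access link capacity exceeded at " + ap["id"])
       vnap = {"ap": ap["id"], "vn": vn_id, "bandwidth": bandwidth}
       vnaps.append(vnap)
       return vnap

   bind_vnap(ap, "VN1", 60)
   bind_vnap(ap, "VN2", 30)
   # bind_vnap(ap, "VN3", 20) would raise: only 10 Gb/s remain on AP1.
   ```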

2. Business Model of ACTN

   The Virtual Private Network (VPN) [RFC4026] and Overlay Network
   (ON) models [RFC4208] are built on the premise that a single
   network provider provides all virtual private or overlay networks
   to its customers.
   These models are simple to operate but have some disadvantages in
   accommodating the increasing need for flexible and dynamic network
   virtualization capabilities.

   The ACTN model is built upon entities that reflect the current
   landscape of network virtualization environments.

   There are three key entities in the ACTN model:

     - Customers
     - Service Providers
     - Network Providers

   These are described in the following sections.

    2.1. Customers

   Within the ACTN framework, different types of customers may be taken
   into account depending on the type of their resource needs, and on
   their number and type of access.  For example, it is possible to
   group them into two main categories:

   Basic Customer: Basic customers include fixed residential users,
   mobile users and small enterprises.  Usually, the number of basic
   customers for a service provider is high: they require small
   amounts of resources and are characterized by steady requests
   (relatively time invariant).  A typical request for a basic
   customer is for a bundle of voice services and internet access.
   Moreover, basic customers do not modify their services themselves:
   if a service change is needed, it is performed by the provider as a
   proxy, and the services generally have very few dedicated resources
   (such as for subscriber drop), with everything else shared on the
   basis of some Service Level Agreement (SLA), which is usually
   best-effort.

   Advanced Customer: Advanced customers typically include
   enterprises, governments and utilities.  Such customers can ask for
   both point-to-point and multipoint connectivity with resource
   demands varying significantly in time and from customer to
   customer.  This is one of the reasons why a bundled service
   offering is not enough and it is desirable to provide each advanced
   customer with a customized virtual network service.

   Advanced customers may own dedicated virtual resources, or share
   resources.  They may also have the ability to modify their service
   parameters within the scope of their virtualized environments.

   As customers are geographically spread over multiple network
   provider domains, they have to interface to multiple providers and
   may have to support multiple virtual network services with different
   underlying objectives set by the network providers. To enable these
   customers to support flexible and dynamic applications they need to
   control their allocated virtual network resources in a dynamic
   fashion, and that means that they need a view of the topology that
   spans all of the network providers.

   ACTN's primary focus is Advanced Customers.

   Customers of a given service provider can in turn offer a service to
   other customers in a recursive way.  An example of recursiveness
   with two service providers is shown below.

     - Customer (of service B)
     - Customer (of service A) & Service Provider (of service B)
     - Service Provider (of service A)
     - Network Provider

   +------------------------------------------------------------+   ---
   |                                                            |    ^
   |                                     Customer (of service B)|    .
   | +--------------------------------------------------------+ |    B
   | |                                                        | |--- .
   | |Customer (of service A) & Service Provider(of service B)| | ^  .
   | | +---------------------------------------------------+  | | .  .
   | | |                                                   |  | | .  .
   | | |                    Service Provider (of service A)|  | | A  .
   | | |+------------------------------------------+       |  | | .  .
   | | ||                                          |       |  | | .  .
   | | ||                          Network provider|       |  | | v  v
   | | |+------------------------------------------+       |  | |------
   | | +---------------------------------------------------+  | |
   | +--------------------------------------------------------+ |
   +------------------------------------------------------------+

                     Figure 2: Service Recursiveness

    2.2. Service Providers

   Service providers are the providers of virtual network services to
   their customers.  Service providers may or may not own physical
   network resources (i.e., may or may not be network providers as
   described in Section 2.3).  When a service provider is the same as
   the network provider, this is similar to existing VPN models
   applied to a single provider.  This approach works well when the
   customer maintains a single interface with a single provider.  When
   customer locations span multiple independent network provider
   domains, it becomes hard to facilitate the creation of end-to-end
   virtual network services with this model.

   A more interesting case arises when network providers only provide
   infrastructure, while distinct service providers interface directly
   to the customers.  In this case, service providers are themselves
   customers of the network infrastructure providers.  One service
   provider may need to keep multiple independent network providers as
   its end-users span geographically across multiple network provider
   domains.

   In Figure 3, Service Provider A uses resources from Network
   Provider A and Network Provider B to offer a virtualized network to
   its customer.

   Customer            X -----------------------------------X

   Service Provider A  X -----------------------------------X

   Network Provider B                     X-----------------X

   Network Provider A  X------------------X

   Figure 3: A Service Provider as Customer of Two Network Providers

   The ACTN network model is predicated upon this three-tier model and
   is summarized in Figure 4:

                        +----------------------+
                        |       customer       |
                        +----------------------+
                                  |   /\  Service/Customer specific
                                  |   ||  Abstract Topology
                                  |   ||
                        +----------------------+  E2E abstract
                        |  Service Provider    |  topology creation
                        +----------------------+
                        /         |            \
                       /          |             \  Network Topology
                      /           |              \ (raw or abstract)
                     /            |               \
    +------------------+   +------------------+   +------------------+
    |Network Provider 1|   |Network Provider 2|   |Network Provider 3|
    +------------------+   +------------------+   +------------------+

                        Figure 2: Three tier model

   There are multiple types of service providers:

     . Data Center providers can be viewed as a service provider type
        as they own and operate data center resources for various WAN
        customers, and they can lease physical network resources from
        network providers.
     . Internet Service Providers (ISP) are service providers of
        internet services to their customers while leasing physical
        network resources from network providers.
     . Mobile Virtual Network Operators (MVNO) provide mobile services
        to their end-users without owning the physical network
        resources.

2.3. Network Providers

   Network Providers are the infrastructure providers that own the
   physical network resources and provide network resources to their
   customers. The layered model described in this draft separates the
   concerns of network providers and customers, with service providers
   acting as aggregators of customer requests.

3. ACTN Architecture

   This section provides a high-level control and interface model of
   ACTN, showing the interfaces and the flow of control between
   components.

   The ACTN architecture is aligned with the ONF SDN architecture
   [ONF-ARCH] and presents a 3-tier reference model. It allows for
   hierarchy and recursiveness not only of SDN controllers but also of
   traditionally controlled domains that use a control plane. It
   defines three types of controllers depending on the functionalities
   they implement. The main functionalities that are identified are:

     . Multi-domain coordination function: This function oversees the
        specific aspects of the different domains and builds a single
        abstracted end-to-end network topology in order to coordinate
        end-to-end path computation and path/service provisioning.
        Domain sequence path calculation/determination is also a part
        of this function.

     . Virtualization/Abstraction function: This function provides an
        abstracted view of the underlying network resources for use by
        the customer - a customer may be the client or a higher level
        controller entity. This function includes network path
        computation based on customer service connectivity request
        constraints, path computation based on the global network-wide
        abstracted topology, and the creation of an abstracted view of
        network slices allocated to each customer. These operations
        depend on customer-specific network objective functions and
        customer traffic profiles.

     . Customer mapping/translation function: This function maps
        customer requests/commands into network provisioning requests
        that can be sent to the Physical Network Controller (PNC)
        according to business policies provisioned statically or
        dynamically at the OSS/NMS. Specifically, it provides mapping
        and translation of a customer's service request into a set of
        parameters that are specific to a network type and technology
        such that the network configuration process is made possible.

     . Virtual service coordination function: This function translates
        customer service-related information into virtual network
        service operations in order to seamlessly operate virtual
        networks while meeting a customer's service requirements. In
        the context of ACTN, service/virtual service coordination
        includes a number of service orchestration functions such as
        multi-destination load balancing and guarantees of service
        quality, bandwidth, and throughput. It also includes
        notifications for service fault and performance degradation
        and so forth.
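   The four functionalities above can be pictured as methods of a
   single coordinating entity. The following Python sketch is purely
   illustrative - the class, method names, and data structures are
   invented here and are not defined by ACTN:

```python
class MultiDomainCoordinator:
    """Illustrative only: the four ACTN functionalities as methods."""

    def __init__(self, domain_topologies):
        # One (raw or abstract) topology per domain, keyed by domain id.
        self.domain_topologies = domain_topologies

    def multi_domain_coordination(self):
        # Build a single abstracted end-to-end topology by merging the
        # per-domain views (here, a trivial union of their links).
        e2e_topology = []
        for links in self.domain_topologies.values():
            e2e_topology.extend(links)
        return e2e_topology

    def virtualization_abstraction(self, objective):
        # Expose only the slice of the end-to-end topology that matches
        # a customer-specific objective function (an arbitrary filter).
        return [link for link in self.multi_domain_coordination()
                if objective(link)]

    def customer_mapping(self, service_request):
        # Translate a customer request into one provisioning request
        # per domain, to be sent to the corresponding PNC.
        return {domain: {"action": "provision", "request": service_request}
                for domain in self.domain_topologies}

    def virtual_service_coordination(self, event):
        # Map a service-level event to a virtual network operation.
        return "notify-customer" if event == "degradation" else "no-op"
```

   The split between the first two methods (network
   control/orchestration) and the last two (service
   control/orchestration) mirrors the grouping described in Section
   3.2.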

   The virtual services that are coordinated under ACTN can be split
   into two categories:

     . Service-aware Connectivity Services: This category includes all
        the network service operations used to provide connectivity
        between customer end-points while meeting policies and service
        related constraints. The data model for this category would
        include topology entities such as virtual nodes, virtual links,
        adaptation and termination points, and service-related entities
        such as policies and service-related constraints.

     . Network Function Virtualization (NFV) Services: These kinds of
        service are usually set up in NFV (e.g., cloud) providers and
        require connectivity between a customer site and the NFV
        provider site (e.g., a data center). These NFV services may
        include a security function like a firewall, a traffic
        optimizer, and the provisioning of storage or computation
        capacity. In these cases the customer does not care whether the
        service is implemented in one data center or another. This
        allows the network provider to divert customer requests to the
        most suitable data center. This is also known as the "end
        points mobility" case (see Section 4.2.3).

   The types of controller defined in the ACTN architecture are shown
   in Figure 3 below and are as follows:

     . CNC - Customer Network Controller
     . MDSC - Multi Domain Service Coordinator
     . PNC - Physical Network Controller

   Figure 3 also shows the following interfaces:

     . CMI - CNC-MDSC Interface
     . MPI - MDSC-PNC Interface

   VPN customer           Mobile Customer         NW service Customer
       |                         |                           |
   +-------+                 +-------+                   +-------+
   | CNC-A |                 | CNC-B |                   | CNC-C |
   +-------+                 +-------+                   +-------+
        \                        |                           /
          -----------            |CMI I/F      --------------
                     \           |            /
                      |         MDSC          |
                      /          |            \
         -------------           |MPI I/F      -------------
        /                        |                          \
   +-------+                 +-------+                   +-------+
   |  PNC  |                 |  PNC  |                   |  PNC  |
   +-------+                 +-------+                   +-------+
        | GMPLS             /      |                      /    \
        | trigger          /       |                     /      \
       --------         ----      +-----+            +-----+     \
      (        )       (    )     | PNC |            | PCE |      \
      -        -      ( Phys )    +-----+            +-----+    -----
     (  GMPLS   )      (Netw)        |                /        (     )
    (  Physical  )      ----         |               /        ( Phys. )
     (  Network )                 -----        -----           ( Net )
      -        -                 (     )      (     )           -----
      (        )                ( Phys. )    ( Phys  )
       --------                  ( Net )      ( Net )
                                  -----        -----

                     Figure 3: ACTN Control Hierarchy

3.1. Customer Network Controller

   A Virtual Network Service is instantiated by the Customer Network
   Controller via the CNC-MDSC Interface (CMI). As the Customer Network
   Controller directly interfaces to the applications, it understands
   multiple application requirements and their service needs. It is
   assumed that the Customer Network Controller and the MDSC have a
   common knowledge of the end-point interfaces based on their business
   negotiations prior to service instantiation. End-point interfaces
   refer to customer-network physical interfaces that connect customer
   premise equipment to network provider equipment.

   In addition to abstract networks, ACTN can also provide the customer
   with services. An example of such a service is connectivity between
   one of the customer's end points and a given set of resources in a
   data center of the service provider.

3.2. Multi Domain Service Coordinator

   The Multi Domain Service Coordinator (MDSC) sits between the CNC
   that issues connectivity requests and the Physical Network
   Controllers (PNCs) that manage the physical network resources. The
   MDSC can be collocated with the PNC, especially in those cases where
   the service provider and the network provider are the same entity.

   The internal system architecture and building blocks of the MDSC are
   out of the scope of ACTN. Some examples can be found in the
   Application Based Network Operations (ABNO) architecture [RFC7491]
   and the ONF SDN architecture [ONF-ARCH].

   The MDSC is the only building block of the architecture that is able
   to implement all four of the main ACTN functions, i.e., multi domain
   coordination, virtualization/abstraction, customer
   mapping/translation, and virtual service coordination. The first two
   functions, namely the multi domain coordination and
   virtualization/abstraction functions, are referred to as network
   control/orchestration functions, while the last two functions,
   namely the customer mapping/translation and virtual service
   coordination functions, are referred to as service
   control/orchestration functions.

   The key point of the MDSC (and of the whole ACTN framework) is
   detaching the network and service control from the underlying
   technology and helping the customer express the network as desired
   by business needs. The MDSC envelopes the instantiation of the right
   technology and network control to meet business criteria. In essence
   it controls and manages the primitives to achieve functionalities as
   desired by the CNC.

   A hierarchy of MDSCs can be foreseen for scalability and
   administrative choices, as shown in Figure 4.

   +-------+                 +-------+                 +-------+
   | CNC-A |                 | CNC-B |                 | CNC-C |
   +-------+                 +-------+                 +-------+
         \                       |                        /
          ----------             |             ----------
                     \           |            /
                      |         MDSC          |
                      /          |            \
            ----------           |             -----------
           /                     |                        \
   +----------+              +----------+             +--------+
   |   MDSC   |              |   MDSC   |             |  MDSC  |
   +----------+              +----------+             +--------+
        |                    /     |                     /    \
        |                   /      |                    /      \
     +-----+           +-----+  +-----+            +-----+  +-----+
     | PNC |           | PNC |  | PNC |            | PNC |  | PNC |
     +-----+           +-----+  +-----+            +-----+  +-----+

                    Figure 4: Controller recursiveness

   A key requirement for allowing recursion of MDSCs is that a single
   interface needs to be defined both for the northbound and the
   southbound directions of the MDSC.

   In order to allow for multi-domain coordination, a 1:N relationship
   must be allowed between MDSCs and between MDSCs and PNCs (i.e., 1
   parent MDSC and N child MDSCs, or 1 MDSC and N PNCs). In the case
   where there is a hierarchy of MDSCs, the interface above the top
   MDSC (i.e., the CMI) and the interface below the bottom MDSCs (i.e.,
   the SBI) remain the same. The recursion of MDSCs may take place in
   the middle layers of this hierarchy.

   In addition to that, it could also be possible to have an M:1
   relationship between MDSCs and a PNC to allow for network resource
   partitioning/sharing among different customers not necessarily
   connected to the same MDSC (e.g., different service providers).

3.3. Physical Network Controller

   The Physical Network Controller (PNC) is in charge of configuring
   the network elements, monitoring the physical topology of the
   network, and passing information about the topology (either raw or
   abstracted) to the MDSC.

   The internal architecture of the PNC, its building blocks, and the
   way it controls its domain are out of the scope of ACTN. Some
   examples can be found in the Application Based Network Operations
   (ABNO) architecture [RFC7491] and the ONF SDN architecture
   [ONF-ARCH].

   The PNC, in addition to being in charge of controlling the physical
   network, is able to implement two of the four main ACTN functions:
   the multi domain coordination and virtualization/abstraction
   functions.

   A hierarchy of PNCs can be foreseen for scalability and
   administrative choices.

3.4. ACTN Interfaces

   To allow virtualization and multi domain coordination, the network
   has to provide open, programmable interfaces, through which customer
   applications can create, replace and modify virtual network
   resources and services in an interactive, flexible and dynamic
   fashion while having no impact on other customers. Direct customer
   control of transport network elements and virtualized services is
   not perceived as a viable proposition for transport network
   providers due to security and policy concerns among other reasons.
   In addition, as discussed in Section 3.3, the network control plane
   for transport networks has been separated from the data plane and as
   such it is not viable for the customer to directly interface with
   transport network elements.

   Figure 5 depicts a high-level control and interface architecture for
   ACTN. A number of key ACTN interfaces exist for deployment and
   operation of ACTN-based networks. These are highlighted in Figure 5
   (ACTN Interfaces).

                -------------
               | Application |
                -------------
                     | I/F A                  --------
                     v                       (        )
                --------------              -          -
               | Customer     |            (  Customer  )
               |  Network     |---------->(    Network   )
               |   Controller |            (            )
                --------------              -          -
                     ^                       (        )
                     | I/F B                  --------
                     v
                --------------
               | MultiDomain  |
               |  Service     |
               |   Coordinator|
                --------------
                     ^                        --------
                     | I/F C                 (        )
                     v                      -          -
                --------------             (  Physical  )
               | Physical     |           (    Network   )
               |  Network     |<--------->(              )
               |   Controller |   I/F D    (            )
                --------------              -          -
                                             (        )
                                              --------

                         Figure 5: ACTN Interfaces

   The interfaces and functions are described below:

     . Interface A: A north-bound interface (NBI) that communicates
        the service request or application demand. A request includes
        specific service properties, including service type, topology,
        bandwidth, and constraint information.

     . Interface B: The CNC-MDSC Interface (CMI) is an interface
        between a CNC and an MDSC. It is used to request the creation
        of network resources, topology or services for the
        applications. Note that all service related information
        conveyed via Interface A (i.e., specific service properties,
        including service type, topology, bandwidth, and constraint
        information) needs to be transparently carried over this
        interface. The MDSC may also report potential network topology
        availability if queried for current capability by the CNC.

     . Interface C: The MDSC-PNC Interface (MPI) is an interface
        between an MDSC and a PNC. It communicates requests for new
        connectivity or for bandwidth changes in the physical network.
        In multi-domain environments, the MDSC needs to establish
        multiple MPIs, one for each PNC, as there is one PNC
        responsible for the control of each domain.

     . Interface D: The provisioning interface for creating forwarding
        state in the physical network, requested via the Physical
        Network Controller.

   The interfaces within the ACTN scope are B and C.
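   As an illustration of how Interfaces B and C relate, the sketch
   below shows a service request carried transparently over the CMI
   and fanned out over one MPI instance per PNC. The dictionaries and
   function names are invented for illustration and do not represent
   any encoding defined by ACTN:

```python
def cnc_to_mdsc(service_request):
    # Interface B (CMI): the CNC forwards the service properties
    # (type, topology, bandwidth, constraints) unchanged to the MDSC.
    return dict(service_request)

def mdsc_to_pncs(cmi_request, pnc_domains):
    # Interface C (MPI): one request per PNC, since there is one PNC
    # responsible for the control of each domain.
    return {domain: {"connectivity": cmi_request} for domain in pnc_domains}

request = {"service_type": "p2p", "bandwidth": "1Gbps",
           "constraints": ["low-latency"]}
cmi_request = cnc_to_mdsc(request)
mpi_requests = mdsc_to_pncs(cmi_request, ["domain-1", "domain-2"])
# mpi_requests holds one provisioning request per domain, each still
# carrying the original service properties from Interface A.
```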

4. VN Creation Process

   The provider can present different levels of network abstraction to
   the customer, spanning from one extreme (say "black") where nothing
   except the Access Points (APs) is shown, to the other extreme (say
   "white") where a slice of the actual network topology is shown to
   the customer. There are shades of "gray" in between, where a number
   of abstract links and nodes can be shown.


   VN creation is composed of two phases: negotiation and
   implementation.

   Negotiation: In the case of gray/white topology abstraction, there
   is an initial phase in which the customer agrees with the provider
   on the type of topology to be shown (e.g., 10 virtual links and 5
   virtual nodes) with a given interconnectivity. This is assumed to be
   preconfigured by the operator off-line. What is done on-line is the
   capability to modify/delete something (e.g., a virtual link). In the
   case of "black" abstraction this negotiation phase does not happen
   because there is nothing to negotiate: the customer can only see the
   APs of the network.

   Implementation: In the case of black topology abstraction, the
   customer can ask for connectivity with given constraints/SLA
   between the APs, and LSPs/tunnels are created by the provider to
   satisfy the request. What the customer sees is only that its CEs are
   connected with a given SLA. In the case of gray/white topology the
   customer creates its own LSPs according to the topology that was
   presented to it.
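   The two phases above can be summarized in the following sketch. The
   message contents and function names are invented for illustration;
   ACTN does not define these structures:

```python
def negotiate(abstraction, requested_topology=None):
    # "Black": nothing to negotiate; the customer only sees the APs.
    if abstraction == "black":
        return {"visible": "APs only"}
    # Gray/white: customer and provider agree off-line on the abstract
    # topology to be shown (e.g., number of virtual links and nodes).
    return {"visible": requested_topology}

def implement(abstraction, request):
    if abstraction == "black":
        # The provider creates LSPs/tunnels to satisfy the requested
        # constraints/SLA between APs; the customer only sees that its
        # CEs are connected with that SLA.
        return {"provider_action": "create LSPs", "sla": request["sla"]}
    # Gray/white: the customer creates its own LSPs over the topology
    # agreed in the negotiation phase.
    return {"customer_action": "create LSPs over abstract topology"}
```

   For example, a gray negotiation might agree on 10 virtual links and
   5 virtual nodes, after which the customer itself drives LSP
   creation over that abstract view.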

5. Access Points and Virtual Network Access Points

   In order not to share unwanted topological information between the
   customer domain and the provider domain, a new entity is defined
   which is referred to as the Access Point (AP); see the definition
   of AP in Section 1.1.

   A customer node will use APs as the end points for the request of
   VNs as shown in Figure 6.

                        (             )
                       -               -
        +---+ X       (                 )      Z +---+
        |CE1|---+----(                   )---+---|CE2|
        +---+   |     (                 )    |   +---+
               AP1     -               -    AP2
                        (             )

                  Figure 6: APs definition, customer view

   Let's take as an example the scenario shown in Figure 6, in which
   CE1 is connected to the network via a 10Gb link and CE2 via a 40Gb
   link. Before the creation of any VN between AP1 and AP2, the
   customer view can be summarized as shown in Table 1:

          +-----+----------+------------------------+
          |     |End Point | Access Link Bandwidth  |
          |AP id| CE,port  | MaxResBw | AvailableBw |
          +-----+----------+----------+-------------+
          | AP1 |CE1,portX |   10Gb   |    10Gb     |
          | AP2 |CE2,portZ |   40Gb   |    40Gb     |
          +-----+----------+----------+-------------+

                  Table 1: AP - customer view

   On the other hand, what the provider sees is shown in Figure 7.

                        -------            -------
                       (       )          (       )
                      -         -        -         -
                 W  (+---+       )      (       +---+)  Y
              -+---( |PE1| Dom.X  )----(  Dom.Y |PE2| )---+-
               |    (+---+       )      (       +---+)    |
               AP1    -         -        -         -     AP2
                       (       )          (       )
                        -------            -------

                     Figure 7: Provider view of the AP

   This results, in the example above, in the summarization shown in
   Table 2.

          +-----+----------+------------------------+
          |     |End Point | Access Link Bandwidth  |
          |AP id| PE,port  | MaxResBw | AvailableBw |
          +-----+----------+----------+-------------+
          | AP1 |PE1,portW |   10Gb   |    10Gb     |
          | AP2 |PE2,portY |   40Gb   |    40Gb     |
          +-----+----------+----------+-------------+

                  Table 2: AP - provider view

   A Virtual Network Access Point (VNAP) needs to be defined as the
   binding between an AP and a VN. It is used to allow different VNs
   to start from the same AP. It also allows for traffic engineering
   on the access and/or inter-domain links (e.g., keeping track of
   bandwidth allocation). A different VNAP is created on an AP for
   each VN.

   In the simple scenario depicted above, suppose we want to create two
   virtual networks: the first with VN identifier 9 between AP1 and
   AP2 with bandwidth of 1 Gbps, and the second with VN identifier 5,
   again between AP1 and AP2, with bandwidth of 2 Gbps.

   The provider view would evolve as shown in Table 3.

      +---------+----------+------------------------+
      |         |End Point |  Access Link/VNAP Bw   |
      |AP/VNAPid| PE,port  | MaxResBw | AvailableBw |
      +---------+----------+----------+-------------+
      |AP1      |PE1,portW |  10Gbps  |    7Gbps    |
      | -VNAP1.9|          |   1Gbps  |     N.A.    |
      | -VNAP1.5|          |   2Gbps  |     N.A.    |
      |AP2      |PE2,portY |  40Gbps  |   37Gbps    |
      | -VNAP2.9|          |   1Gbps  |     N.A.    |
      | -VNAP2.5|          |   2Gbps  |     N.A.    |
      +---------+----------+----------+-------------+

       Table 3: AP and VNAP - provider view after VN creation
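   The bookkeeping behind Table 3 can be sketched as follows, assuming
   (as an illustration only, with invented class and attribute names)
   that each VNAP reservation is subtracted from the AP's available
   bandwidth:

```python
class AccessPoint:
    def __init__(self, ap_id, max_res_bw):
        self.ap_id = ap_id
        self.max_res_bw = max_res_bw   # in Gbps
        self.vnaps = {}                # VN id -> reserved bandwidth

    def create_vnap(self, vn_id, bw):
        # One VNAP is created on an AP for each VN.
        self.vnaps[vn_id] = bw

    @property
    def available_bw(self):
        # Available bandwidth is what remains after VNAP reservations.
        return self.max_res_bw - sum(self.vnaps.values())

ap1 = AccessPoint("AP1", 10)
ap2 = AccessPoint("AP2", 40)
for ap in (ap1, ap2):
    ap.create_vnap(9, 1)   # VN 9: 1 Gbps between AP1 and AP2
    ap.create_vnap(5, 2)   # VN 5: 2 Gbps between AP1 and AP2
# ap1.available_bw is now 7 and ap2.available_bw is 37, as in Table 3.
```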

5.1. Dual Homing Scenario

   Often there is a dual homing relationship between a CE and a pair
   of PEs. This case needs to be supported also by the definition of
   VNs, APs, and VNAPs. Suppose that CE1 is connected to two different
   PEs in the operator domain via AP1 and AP2, and that the customer
   needs 5 Gbps of bandwidth between CE1 and CE2. This is shown in
   Figure 8.

                        ______________
                  AP1  (              )  AP3
                 -------(PE1)    (PE3)-------
               W/      (                )    \X
          +---+/       (                )     \+---+
          |CE1|        (                )      |CE2|
          +---+\       (                )     /+---+
               Y\      (                )    /Z
                 -------(PE2)    (PE4)-------
                  AP2  (______________)  AP4

                  Figure 8: Dual homing scenario

   In this case, the customer will request a VN between AP1, AP2 and
   AP3, specifying a dual homing relationship between AP1 and AP2. As
   a consequence, no traffic will flow between AP1 and AP2. The dual
   homing relationship would then be mapped against the VNAPs (since
   other independent VNs might have AP1 and AP2 as end points).

   The customer view would be as shown in Table 4.

    +---------+----------+------------------------+-----------+
    |         |End Point |  Access Link/VNAP Bw   |           |
    |AP/VNAPid| CE,port  | MaxResBw | AvailableBw |Dual Homing|
    +---------+----------+----------+-------------+-----------+
    |AP1      |CE1,portW |  10Gbps  |    5Gbps    |           |
    | -VNAP1.9|          |   5Gbps  |     N.A.    |  VNAP2.9  |
    |AP2      |CE1,portY |  40Gbps  |   35Gbps    |           |
    | -VNAP2.9|          |   5Gbps  |     N.A.    |  VNAP1.9  |
    |AP3      |CE2,portX |  40Gbps  |   35Gbps    |           |
    | -VNAP3.9|          |   5Gbps  |     N.A.    |   NONE    |
    +---------+----------+----------+-------------+-----------+

       Table 4: Dual homing - customer view after VN creation

6. End Point Selection and Mobility

   Virtual networks could be used as the infrastructure to connect a
   number of sites belonging to a customer or to provide connectivity
   between customer sites and Virtualized Network Functions (VNFs)
   such as virtualized firewalls, virtual Broadband Network Gateways
   (vBNG), storage, or computational functions.

6.1. End Point Selection

   A VNF could be deployed in different places (e.g., data center A, B,
   or C in Figure 9), but the VNF provider (that is, the ACTN customer)
   doesn't know which is the best site in which to install the VNF
   from a network point of view (e.g., to optimize for low latency).
   For example, it is possible to compute a path minimizing the delay
   between AP1 and AP2, but the customer doesn't know a priori whether
   the path with minimum delay is towards DC-A, DC-B, or DC-C.

                    -------            -------
                   (       )          (       )
                  -         -        -         -
   +---+         (           )      (           )        +----+
   |CE1|---+----(  Domain X   )----(  Domain Y   )---+---|DC-A|
   +---+   |     (           )      (           )    |   +----+
           AP1    -         -        -         -    AP2
                   (       )          (       )
                    ---+---            ---+---
                   AP3 |              AP4 |
                    +----+              +----+
                    |DC-B|              |DC-C|
                    +----+              +----+

                       Figure 9: End point selection

   In this case the VNF provider (that is, the ACTN customer) should
   be allowed to ask for a VN between AP1 and a set of end points. The
   list of end points is supplied by the VNF provider. When the end
   point is identified, the connectivity can be instantiated and a
   notification can be sent to the VNF provider for the instantiation
   of the VNF.
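   The selection step can be sketched as follows. The delay values and
   function names are invented for illustration; in ACTN the actual
   path computation is performed by the provider:

```python
def select_end_point(source_ap, candidate_aps, path_delay):
    # Pick the candidate reachable with minimum delay from source_ap;
    # the delays would come from the provider's path computation.
    return min(candidate_aps, key=lambda ap: path_delay[(source_ap, ap)])

# Invented delays: in Figure 9, AP2 leads to DC-A, AP3 to DC-B,
# and AP4 to DC-C.
delays = {("AP1", "AP2"): 12, ("AP1", "AP3"): 7, ("AP1", "AP4"): 9}
best = select_end_point("AP1", ["AP2", "AP3", "AP4"], delays)
# best is "AP3", so the VNF would be instantiated in DC-B and the VNF
# provider notified.
```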

6.2. Pre-Planned End Point Migration

   A premium SLA for VNF service provisioning consists of the offering
   of a protected VNF instantiated on two or more sites with a hot
   stand-by protection mechanism. In this case the VN should be
   provided so as to switch from one end point to another upon a
   trigger from the VNF provider or from an automatic failure
   detection mechanism. An example is provided in Figure 10 where the
   request from the VNF provider is for connectivity with given
   constraint and resiliency between CE1 and a VNF with primary
   instantiation in DC-A and a protection instance in DC-C.

                    -------            -------
                   (       )          (       )
                  -         -        -         -
   +---+         (           )      (           )        +----+
   |CE1|---+----(  Domain X   )----(  Domain Y   )---+---|DC-A|
   +---+   |     (           )      (           )    |   +----+
           AP1    -         -        -         -    AP2    |
                   (       )          (       )            |
                    ---+---            ---+---             |
                   AP3 |              AP4 |          HOT STANDBY
                    +----+             +----+              |
                    |DC-B|             |DC-C|--------------
                    +----+             +----+

                 Figure 10: Pre-planned end point migration

6.3. On the Fly End Point Migration

   On the fly end point migration is very similar to end point
   selection. The idea is to give the provider not only the list of
   sites where the VNF can be installed, but also a mechanism to
   notify changes in the network that have impacts on the SLA. After a
   handshake with the customer controller/applications, the bandwidth
   in the network would be moved in accordance with the movement of
   the VNFs.

7. Manageability Considerations

   The objective of ACTN is to manage traffic engineered resources and
   provide a set of mechanisms to allow clients to request virtual
   connectivity across server network resources. ACTN will support
   multiple clients, each with its own view of and control over the
   server network; the network operator will need to partition (or
   "slice") their network resources and manage the resources
   accordingly.

   The ACTN platform will, itself, need to support the request,
   response, and reservations of client and network layer connectivity.
   It will also need to provide performance monitoring and control of
   traffic engineered resources. The management requirements may be
   categorized as follows:

     . Management of external ACTN protocols
     . Management of internal ACTN protocols
     . Management and monitoring of ACTN components
     . Configuration of policy to be applied across the ACTN system

7.1. Policy

   It is expected that policy will be an important aspect of ACTN
   control and management. Typically, policies are used via the
   components and interfaces, during deployment of the service, to
   ensure that the service is compliant with agreed policy factors
   (often described in Service Level Agreements - SLAs). These include,
   but are not limited to: connectivity, bandwidth, geographical
   transit, technology selection, security, resilience, and economic
   factors.
   Depending on the ACTN deployment architecture, some
   policies may have local or global significance. That is, certain
   policies may be ACTN component specific in scope, while others may
   have broader scope and interact with multiple ACTN components. Two
   examples are provided below:

     . A local policy might limit the number, type, size, and
       scheduling of virtual network services a customer may request
       via its CNC. This type of policy would be implemented locally on
       the MDSC.

     . A global policy might constrain certain customer types (or
       specific customer applications) to only use certain MDSCs, and
       be restricted to physical network types managed by the PNCs. A
       global policy agent would govern these types of policies.
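   The two policy scopes above can be illustrated in code. This is a
   deliberately minimal sketch; the names LocalPolicy and
   GlobalPolicyAgent, and the rule shapes, are invented for this
   example.

   ```python
   class LocalPolicy:
       """MDSC-local policy: cap the number of VN services per customer."""
       def __init__(self, max_vns):
           self.max_vns = max_vns
           self.active = {}  # customer -> count of active VN services

       def admit(self, customer):
           if self.active.get(customer, 0) >= self.max_vns:
               return False
           self.active[customer] = self.active.get(customer, 0) + 1
           return True

   class GlobalPolicyAgent:
       """Broader-scope policy: which MDSCs a customer type may use."""
       def __init__(self, rules):
           self.rules = rules  # customer_type -> set of permitted MDSC ids

       def permitted(self, customer_type, mdsc_id):
           return mdsc_id in self.rules.get(customer_type, set())

   local = LocalPolicy(max_vns=2)
   agent = GlobalPolicyAgent({"enterprise": {"mdsc-1"}})

   # A request passes only if both scopes agree.
   ok1 = agent.permitted("enterprise", "mdsc-1") and local.admit("acme")
   ok2 = agent.permitted("enterprise", "mdsc-2")  # wrong MDSC -> denied
   ```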

   The objective of this section is to discuss the applicability of
   ACTN policy: requirements, components, interfaces, and examples.
   This section provides an analysis and does not mandate a specific
   method for enforcing policy, or the type of policy agent that would
   be responsible for propagating policies across the ACTN components.
   It does highlight examples of how policy may be applied in the
   context of ACTN, but it is expected that further discussion in an
   applicability or solution-specific document will be required.

7.2. Policy applied to the Customer Network Controller

   A virtual network service for a customer application will be
   requested from the CNC. It will reflect the application requirements
   and specific service policy needs, including bandwidth, traffic type
   and survivability. Furthermore, application access and the type of
   virtual network service requested by the CNC will need to adhere to
   specific access control policies.

7.3. Policy applied to the Multi Domain Service Coordinator

   A key objective of the MDSC is to help the customer express the
   application connectivity request via its CNC as a set of desired
   business needs; therefore, policy will play an important role.

   Once authorised, the virtual network service will be instantiated
   via the CNC-MDSC Interface (CMI). It will reflect the customer
   application and connectivity requirements, and specific service
   transport needs. The CNC and the MDSC components will have agreed
   connectivity end-points; use of these end-points should be defined
   as a policy expression when setting up or augmenting virtual network
   services. Ensuring that permissible end-points are defined for CNCs
   and applications will require the MDSC to maintain a registry of
   permissible connection points for CNCs and application types.
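   The registry described above might be modelled as follows. This is
   a sketch assuming a simple in-memory mapping; the identifiers and
   the EndpointRegistry class are hypothetical.

   ```python
   class EndpointRegistry:
       """MDSC-side registry of permissible connection points."""
       def __init__(self):
           self._allowed = {}  # (cnc_id, app_type) -> set of end-points

       def register(self, cnc_id, app_type, endpoints):
           self._allowed[(cnc_id, app_type)] = set(endpoints)

       def check_request(self, cnc_id, app_type, requested):
           """Return True only if every requested end-point is permitted."""
           allowed = self._allowed.get((cnc_id, app_type), set())
           return set(requested) <= allowed

   registry = EndpointRegistry()
   registry.register("cnc-1", "dc-interconnect", ["AP1", "AP2"])

   ok = registry.check_request("cnc-1", "dc-interconnect", ["AP1", "AP2"])
   bad = registry.check_request("cnc-1", "dc-interconnect", ["AP1", "AP3"])
   ```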

   It may also be necessary for the MDSC to resolve policy conflicts,
   or at least flag any issues to the administrator of the MDSC itself.
   Conflicts may occur when virtual network service optimisation
   criteria are in competition. For example, to meet objectives for
   service reachability a request may require an interconnection point
   between multiple physical networks; however, this might break a
   confidentiality policy requirement of a specific type of end-to-end
   service. This type of situation may be resolved using hard and soft
   policy constraints.
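   One way to apply hard and soft constraints to such a conflict is
   sketched below: candidates violating any hard constraint are
   discarded, and the survivors are ranked by how many soft constraints
   they satisfy. The constraint model and candidate fields are
   assumptions of this example, not defined by ACTN.

   ```python
   def resolve(candidates, hard, soft):
       """Keep candidates meeting all hard constraints, then rank by the
       number of soft constraints satisfied (higher is better)."""
       feasible = [c for c in candidates if all(h(c) for h in hard)]
       return sorted(feasible,
                     key=lambda c: sum(s(c) for s in soft),
                     reverse=True)

   # Candidate interconnection options for a virtual network service.
   candidates = [
       {"id": "via-X-Y", "confidential": True,  "hops": 3},
       {"id": "direct",  "confidential": False, "hops": 1},
   ]
   hard = [lambda c: c["confidential"]]   # confidentiality is mandatory
   soft = [lambda c: c["hops"] <= 2]      # short paths are merely preferred

   ranked = resolve(candidates, hard, soft)
   ```

   Here the shorter "direct" option is discarded outright because it
   breaks the hard confidentiality constraint, even though it scores
   better on the soft hop-count preference.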

7.4. Policy applied to the Physical Network Controller

   The PNC is responsible for configuring the network elements,
   monitoring physical network resources, and exposing connectivity
   (direct or abstracted) to the MDSC. It is therefore expected that
   policy will dictate what connectivity information will be exported
   between the PNC, via the MDSC-PNC Interface (MPI), and MDSC.

   Policy interactions may arise when a PNC determines that it cannot
   compute a requested path from the MDSC, or notices that (per a
   locally configured policy) the network is low on resources (for
   example, the capacity on key links become exhausted).  In either
   case, the PNC will be required to notify the MDSC, which may (again
   per policy) act to construct a virtual network service across
   another physical network topology.
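   The notification-and-retry behaviour above can be sketched as
   follows. All class names, message fields, and the preference-order
   logic are illustrative assumptions of this example.

   ```python
   class Pnc:
       """Toy PNC that reports when a path cannot be provided."""
       def __init__(self, name, free_capacity):
           self.name = name
           self.free_capacity = free_capacity

       def compute_path(self, demand):
           if demand > self.free_capacity:
               # Per local policy, notify the MDSC rather than fail silently.
               return {"status": "PATH_UNAVAILABLE", "pnc": self.name}
           return {"status": "OK", "pnc": self.name}

   def mdsc_setup(demand, pncs):
       """MDSC side: try each PNC in (policy-defined) preference order."""
       for pnc in pncs:
           reply = pnc.compute_path(demand)
           if reply["status"] == "OK":
               return reply
       return {"status": "FAILED"}

   # Domain X is low on resources, so the MDSC falls back to domain Y.
   result = mdsc_setup(demand=10,
                       pncs=[Pnc("domain-X", free_capacity=5),
                             Pnc("domain-Y", free_capacity=40)])
   ```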

   Furthermore, additional forms of policy-based resource management
   will be required to provide virtual network service performance,
   security and resilience guarantees. This will likely be implemented
   via a local policy agent and subsequent protocol methods.

8. Security Considerations

   The ACTN framework described in this document defines key components
   and interfaces for managed traffic engineered networks. Securing the
   request and control of resources, confidentiality of the information,
   and availability of function, should all be critical security
   considerations when deploying and operating ACTN platforms.

   Several distributed ACTN functional components are required, and as
   a rule implementations should consider encrypting data that flows
   between components, especially when they are implemented at remote
   nodes, regardless of whether these are external or internal network
   interfaces.

   The ACTN security discussion is further split into three specific
   categories described in the following sub-sections:

     . Interface between the Application and Customer Network
       Controller (CNC)

     . Interface between the Customer Network Controller and Multi
       Domain Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

     . Interface between the Multi Domain Service Coordinator and
       Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

   From a security and reliability perspective, ACTN may encounter many
   risks such as malicious attack and rogue elements attempting to
   connect to various ACTN components. Furthermore, some ACTN
   components represent a single point of failure and threat vector,
   and must also manage policy conflicts, and eavesdropping of
   communication between different ACTN components.

   The conclusion is that all protocols used to realize the ACTN
   framework should have rich security features, and customer,
   application and network data should be stored in encrypted data
   stores. Additional security risks may still exist. Therefore,
   discussion and applicability of specific security functions and
   protocols will be better described in documents that are use case
   and environment specific.

8.1. Interface between the Application and Customer Network Controller

   This is the external interface between the application and CNC. The
   application request for virtual network service connectivity may
   also contain data about the application, requested network
   connectivity and the service that is eventually delivered to the
   customer. It is likely to use external protocols and must be
   appropriately secured using session encryption.

   As highlighted in the policy section (see Section 7), it may be
   necessary to enable different policies based on identity, and to
   manage the application requests of virtual network services. Since
   access will largely be through external protocols, and
   potentially across the public Internet, AAA-based controls should
   also be used.

   Several additional challenges face the CNC, as the Application to
   CNC interface will be used by multiple applications. These include:

     . A need to verify the credibility of customer applications.

     . Malicious applications may tamper with or perform unauthorized
       operations, such as obtaining sensitive information, obtaining
       higher rights, or requesting changes to existing virtual
       network services.

     . The ability to recognize and respond to spoofing attacks or
       buffer overflow attacks will also need to be considered.

8.2. Interface between the Customer Network Controller and Multi Domain
   Service Coordinator (MDSC), CNC-MDSC Interface (CMI)

   The role of the MDSC is to detach the network and service control
   from underlying technology to help the customer express the network
   as desired by business needs. It should be noted that data stored by
   the MDSC will reveal details of the virtual network services, and
   which CNC and application is consuming the resource. The data stored
   must therefore be considered as a candidate for encryption.

   CNC access rights to an MDSC must be managed. MDSC resources must be
   properly allocated, and methods to prevent policy conflicts,
   resource wastage and denial of service attacks on the MDSC by rogue
   CNCs, should also be considered.

   A CNC-MDSC protocol interface will likely be an external protocol
   interface. Again, suitable authentication and authorization of each
   CNC connecting to the MDSC will be required, especially, as these
   are likely to be implemented by different organisations and on
   separate functional nodes. Use of the AAA-based mechanisms would
   also provide role-based authorization methods, so that only
   authorized CNCs may access the different functions of the MDSC.
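   Such role-based authorization might be expressed as below. The role
   names and operations are assumptions of this sketch, not defined by
   any AAA standard or by ACTN.

   ```python
   # Hypothetical role -> permitted-operations mapping held by the MDSC.
   ROLE_PERMISSIONS = {
       "vn-operator": {"create-vn", "modify-vn", "read-topology"},
       "vn-viewer":   {"read-topology"},
   }

   def authorize(cnc_roles, operation):
       """Allow the operation if any of the CNC's roles grants it."""
       return any(operation in ROLE_PERMISSIONS.get(r, set())
                  for r in cnc_roles)

   allowed = authorize(["vn-operator"], "create-vn")
   denied = authorize(["vn-viewer"], "create-vn")
   ```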

8.3. Interface between the Multi Domain Service Coordinator and
   Physical Network Controller (PNC), MDSC-PNC Interface (MPI)

   The function of the Physical Network Controller (PNC) is to
   configure network elements, provide performance and monitoring
   functions of the physical elements, and export physical topology
   (full, partial, or abstracted) to the MDSC.

   Where the MDSC must interact with multiple (distributed) PNCs, a
   PKI-based mechanism is suggested, such as building a TLS or HTTPS
   connection between the MDSC and PNCs, to ensure trust between the
   physical network layer control components and the MDSC.

   Which MDSC the PNC exports topology information to, and the level of
   detail (full or abstracted), should also be authenticated, and
   specific access restrictions and topology views should be
   configurable and/or policy-based.
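   A minimal sketch of such a mutually authenticated TLS setup, using
   Python's standard ssl module, is shown below. The certificate paths,
   host name, and port are placeholders; a real deployment would use
   the credentials issued by the operator's PKI.

   ```python
   import ssl

   def make_mdsc_tls_context(ca_cert=None, client_cert=None, client_key=None):
       """Build a client-side TLS context for an MDSC connecting to a PNC."""
       ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
       # Verify the PNC's certificate against the operator's CA.
       ctx.verify_mode = ssl.CERT_REQUIRED
       ctx.check_hostname = True
       if ca_cert:
           ctx.load_verify_locations(cafile=ca_cert)
       # Present the MDSC's own certificate (mutual authentication).
       if client_cert:
           ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
       return ctx

   ctx = make_mdsc_tls_context()

   # Usage with real credentials and a reachable PNC (placeholder values):
   # ctx = make_mdsc_tls_context("ca.pem", "mdsc.pem", "mdsc.key")
   # with socket.create_connection(("pnc.example.net", 4189)) as sock:
   #     with ctx.wrap_socket(sock, server_hostname="pnc.example.net") as tls:
   #         ...  # exchange MPI messages over the trusted channel
   ```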

9. References


9.1. Informative References

   [RFC2702] Awduche, D., et. al., "Requirements for Traffic
             Engineering Over MPLS", RFC 2702, September 1999.

   [RFC4026] L. Andersson, T. Madsen, "Provider Provisioned Virtual
             Private Network (VPN) Terminology", RFC 4026, March 2005.

   [RFC4208] G. Swallow, J. Drake, H.Ishimatsu, Y. Rekhter,
             "Generalized Multiprotocol Label Switching (GMPLS) User-
             Network Interface (UNI): Resource ReserVation Protocol-
             Traffic Engineering (RSVP-TE) Support for the Overlay
             Model", RFC 4208, October 2005.

   [RFC4655] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path
             Computation Element (PCE)-Based Architecture", IETF RFC
             4655, August 2006.

   [RFC5654] Niven-Jenkins, B. (Ed.), D. Brungard (Ed.), and M. Betts
             (Ed.), "Requirements of an MPLS Transport Profile", RFC
             5654, September 2009.

   [RFC7149] Boucadair, M. and Jacquenet, C., "Software-Defined
             Networking: A Perspective from within a Service Provider
             Environment", RFC 7149, March 2014.

   [RFC7926] A. Farrel (Ed.), "Problem Statement and Architecture for
             Information Exchange between Interconnected Traffic-
             Engineered Networks", RFC 7926, July 2016.

   [PCE-S]   Crabbe, E, et. al., "PCEP extension for stateful
             PCE",draft-ietf-pce-stateful-pce, work in progress.

   [GMPLS]   Manning, E., et al., "Generalized Multi-Protocol Label
             Switching (GMPLS) Architecture", RFC 3945, October 2004.

   [NFV-AF]  "Network Functions Virtualization (NFV); Architectural
             Framework", ETSI GS NFV 002 v1.1.1, October 2013.

   [ACTN-PS] Y. Lee, D. King, M. Boucadair, R. Jing, L. Contreras
             Murillo, "Problem Statement for Abstraction and Control of
             Transport Networks", draft-leeking-actn-problem-statement,
             work in progress.


   [ONF-ARCH] Open Networking Foundation, "OpenFlow Switch
             Specification Version 1.4.0 (Wire Protocol 0x05)", October
             2013.
   [TE-INFO] A. Farrel, Editor, "Problem Statement and Architecture for
             Information Exchange Between Interconnected Traffic
             Engineered Networks", draft-ietf-teas-interconnected-te-
             info-exchange, work in progress.


   [RFC7491] King, D., and Farrel, A., "A PCE-based Architecture for
             Application-based Network Operations", RFC 7491, March
             2015.

   [ACTN-Info] Y. Lee, S. Belotti, D. Dhody, "Information Model for
             Abstraction and Control of Transport Networks", draft-
             leebelotti-teas-actn-info, work in progress.

   [Cheng] W. Cheng, et. al., "ACTN Use-cases for Packet Transport
             Networks in Mobile Backhaul Networks", draft-cheng-actn-
             ptn-requirements, work in progress.

   [Dhody] D. Dhody, et. al., "Packet Optical Integration (POI) Use
             Cases for Abstraction and Control of Transport Networks
             (ACTN)", draft-dhody-actn-poi-use-case, work in progress.

   [Fang] L. Fang, "ACTN Use Case for Multi-domain Data Center
             Interconnect", draft-fang-actn-multidomain-dci, work in
             progress.
   [Klee] K. Lee, H. Lee, R. Vilata, V. Lopez, "ACTN Use-case for On-
             demand E2E Connectivity Services in Multiple Vendor Domain
             Transport Networks", draft-klee-actn-connectivity-multi-
             vendor-domains, work in progress.

   [Kumaki] K. Kumaki, T. Miyasaka, "ACTN : Use case for Multi Tenant
             VNO ", draft-kumaki-actn-multitenant-vno, work in
             progress.
   [Lopez] D. Lopez (Ed), "ACTN Use-case for Virtual Network Operation
             for Multiple Domains in a Single Operator Network", draft-
             lopez-actn-vno-multidomains, work in progress.

   [Shin] J. Shin, R. Hwang, J. Lee, "ACTN Use-case for Mobile Virtual
             Network Operation for Multiple Domains in a Single
             Operator Network", draft-shin-actn-mvno-multi-domain, work
             in progress.

   [Xu] Y. Xu, et. al., "Use Cases and Requirements of Dynamic Service
             Control based on Performance Monitoring in ACTN
             Architecture", draft-xu-actn-perf-dynamic-service-control,
             work in progress.


10. Contributors

   Adrian Farrel
   Old Dog Consulting
   Email: adrian@olddog.co.uk

   Italo Busi
   Email: Italo.Busi@huawei.com

   Khuzema Pithewan
   Email: kpithewan@infinera.com

Authors' Addresses

   Daniele Ceccarelli (Editor)
   Ericsson
   Stockholm, Sweden
   Email: daniele.ceccarelli@ericsson.com

   Young Lee (Editor)
   Huawei Technologies
   5340 Legacy Drive
   Plano, TX 75023, USA
   Phone: (469)277-5838
   Email: leeyoung@huawei.com

   Luyuan Fang
   Email: luyuanf@gmail.com

   Diego Lopez
   Telefonica I+D
   Don Ramon de la Cruz, 82
   28006 Madrid, Spain
   Email: diego@tid.es

   Sergio Belotti
   Alcatel Lucent
   Via Trento, 30
   Vimercate, Italy
   Email: sergio.belotti@nokia.com

   Daniel King
   Lancaster University
   Email: d.king@lancaster.ac.uk

   Dhruv Dhody
   Huawei Technologies

   Gert Grammel
   Juniper Networks