Network Working Group                                          L. Dunbar
Internet-Draft                                                 Futurewei
Intended status: Informational                               B. Sarikaya
Expires: February 22, 2020                           Denpel Informatique
                                                           B. Khasnabish
                                                             Independent
                                                              T. Herbert
                                                                   Intel
                                                              S. Dikshit
                                                               Aruba-HPE
                                                         August 22, 2019

   Virtual Machine Mobility Solutions for L2 and L3 Overlay Networks
                         draft-ietf-nvo3-vmm-05

Abstract

   This document describes virtual machine mobility solutions commonly
   used in data centers built with an overlay network.  It describes
   the solutions and the impact of moving VMs (or applications) from
   one rack to another connected by the overlay network.

   For Layer 2, the solution is based on using an NVA (Network
   Virtualization Authority) - NVE (Network Virtualization Edge)
   protocol to update the ARP (Address Resolution Protocol) table or
   neighbor cache entries at the NVA, and on the source NVE tunneling
   in-flight packets to the destination NVE after the VM (virtual
   machine) moves from the Old NVE to the New NVE.  For Layer 3, the
   solution is based on address and connection migration after the
   move.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.  The list of current Internet-Drafts is at
   https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on February 22, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Requirements
   4. Overview of the VM Mobility Solutions
      4.1. VM Migration in Layer 2 Network
      4.2. Task Migration in Layer-3 Network
         4.2.1. Address and Connection Migration in Task Migration
   5. Handling Packets in Flight
   6. Moving Local State of VM
   7. Handling of Hot, Warm and Cold VM Mobility
   8. VM Operation
   9. Security Considerations
   10. IANA Considerations
   11. Acknowledgments
   12. Change Log
   13. References
      13.1. Normative References
      13.2. Informative References
   Authors' Addresses

1. Introduction

   Data center networks are increasingly used by telecom operators as
   well as by enterprises.  This document describes overlay-based data
   center network solutions supporting multitenancy and VM (Virtual
   Machine) mobility.  Many large DCs, especially Cloud DCs, host
   tasks (or workloads) for multiple tenants, which can be multiple
   departments of one organization or multiple organizations.  There
   is communication among tasks belonging to one tenant as well as
   communication among tasks belonging to different tenants or with
   external entities.

   Server virtualization, which is used in almost all of today's data
   centers, enables many VMs to run on a single physical computer or
   compute server sharing the processor/memory/storage.  Network
   connectivity among VMs is provided by the network virtualization
   edge (NVE) [RFC8014].  It is highly desirable [RFC7364] to allow
   VMs to be moved dynamically (live, hot, or cold move) from one
   server to another for dynamic load balancing or optimized work
   distribution.

   There are many challenges and requirements related to VM mobility
   in large data centers, including dynamic attaching/detaching of VMs
   to/from Virtual Network Edges (VNEs).  Retaining IP addresses after
   a move is a key requirement [RFC7364]; it is needed in order to
   maintain existing transport connections.  In traditional Layer-3
   based data networks, retaining IP addresses after a move is
   generally not recommended because frequent moves result in
   non-aggregated IP addresses (a.k.a. fragmented IP addresses), which
   introduce complexity in IP address management.

   In view of the many VM mobility schemes that exist today, there is
   a desire to document comprehensive VM mobility solutions that cover
   both IPv4 and IPv6.  Large data center networks can be organized as
   one large Layer-2 network geographically distributed in several
   buildings/cities, or as Layer-3 networks with a large number of
   host routes that cannot be aggregated because hosts frequently move
   from one location to another without changing their IP addresses.
   Connectivity across Layer 2 boundaries can be achieved by the NVE
   functioning as a Layer 3 gateway routing across bridging domains,
   such as in Warehouse Scale Computers (WSC).

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

   This document uses the terminology defined in [RFC7364].  In
   addition, we make the following definitions:

   VM:      Virtual Machine.

   Task:    A program instantiated or running on a virtual machine or
            container.  Tasks in virtual machines or containers can be
            migrated from one server to another.  We use task,
            workload and virtual machine interchangeably in this
            document.

   Hot VM Mobility:  A given VM could be moved from one server to
            another in running state.

   Warm VM Mobility:  In case of warm VM mobility, the VM states are
            mirrored to the secondary server (or domain) at predefined
            (configurable) regular intervals.  This reduces the
            overheads and complexity, but it may also lead to a
            situation where both servers do not contain the exact same
            data (state information).

   Cold VM Mobility:  A given VM could be moved from one server to
            another in stopped or suspended state.

   Old NVE:  Refers to the NVE where packets were forwarded to before
            migration.

   New NVE:  Refers to the NVE after migration.

   Packets in flight:  Refers to the packets received by the Old NVE,
            sent by correspondents that have an old ARP or neighbor
            cache entry, before VM or task migration.

   End user clients:  Users of VMs in diskless systems or systems not
            using configuration files.

   Cloud DC:  Third party data centers that host applications, tasks
            or workloads owned by different organizations or tenants.

3. Requirements

   This section states the requirements on data center network virtual
   machine mobility.

   Data center networks should support both IPv4 and IPv6 VM mobility.

   VM mobility should not require changing the VM's IP addresses after
   the move.

   There is "Hot Migration", with the transport service continuing,
   and there is "Cold Migration", with the transport service
   restarted, i.e. the task running on the Old NVE is stopped and
   moved to the New NVE before restart, as described in Section 4.2.

   VM mobility solutions/procedures should minimize triangular routing
   except for handling packets in flight.

   VM mobility solutions/procedures should not need to use tunneling
   except for handling packets in flight.

4. Overview of the VM Mobility Solutions

   Layer 2 and Layer 3 mobility solutions are described respectively
   in the following sections.

4.1. VM Migration in Layer 2 Network

   Being able to move VMs dynamically, from one server to another,
   makes dynamic load balancing or work distribution possible.
   Therefore, it is a highly desirable feature for large scale
   multi-tenant data centers.

   In a Layer-2 based data center, a VM moving to another server does
   not change its IP address.  But as this VM is now under a New NVE,
   previously communicating NVEs will continue to send their packets
   to the Old NVE.  Therefore, the Address Resolution Protocol (ARP)
   cache in IPv4 [RFC0826] or the neighbor cache in IPv6 [RFC4861] in
   the NVEs needs to be updated.  NVEs need to change their caches
   associating the VM's Layer-2 or Medium Access Control (MAC) address
   with the current NVE's IP address.  Such a change enables NVEs to
   encapsulate the outgoing MAC frames with the current target NVE
   address.  It may take some time to refresh the ARP/ND caches when a
   VM is moved to a New NVE.  During this period, a tunnel is needed
   so that the Old NVE can forward packets destined to the VM to the
   New NVE.

   In IPv4, immediately after the move, the VM should send a
   gratuitous ARP request message containing its IPv4 and MAC
   addresses from its New NVE.  This message's destination address is
   the broadcast address.  Both the Old and New NVEs should update the
   VM's ARP entry in the central directory at the NVA, to record the
   IPv4 address and MAC address of the moving VM along with the New
   NVE IPv4 address.  An NVE-to-NVA protocol is used for this purpose
   [RFC8014].
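
   As an illustration, the following Python sketch (not part of any
   protocol specification) shows how a VM's stack could construct such
   a gratuitous ARP request: EtherType 0x0806, broadcast destination,
   and the sender and target IPv4 fields both set to the VM's own
   address.  The MAC and IPv4 values are hypothetical placeholders.

      import socket
      import struct

      def gratuitous_arp(vm_mac: bytes, vm_ipv4: str) -> bytes:
          """Build a frame carrying a gratuitous ARP request."""
          broadcast = b"\xff" * 6
          ether_hdr = broadcast + vm_mac + struct.pack("!H", 0x0806)
          spa = tpa = socket.inet_aton(vm_ipv4)  # sender == target
          arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)  # request
          arp += vm_mac + spa + b"\x00" * 6 + tpa
          return ether_hdr + arp

      frame = gratuitous_arp(bytes.fromhex("02005e001001"),
                             "192.0.2.10")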

   Reverse ARP (RARP), which enables a host to discover its IPv4
   address when it boots from a local server [RFC0903], is not used by
   VMs because a VM already knows its IPv4 address.  Next, we describe
   a case where RARP is used.

   There are some vendor deployments (diskless systems or systems
   without configuration files) wherein VM users, i.e. end-user
   clients, ask for the same MAC address upon migration.  This can be
   achieved by the clients sending an RARP "request reverse" message
   which carries the old MAC address, looking for an IP address
   allocation.  The server, in this case the New NVE, needs to
   communicate with the NVA, just as in the gratuitous ARP case, to
   ensure that the same IPv4 address is assigned to the VM.  The NVA
   uses the MAC address as the key to search the ARP cache for the IP
   address, and informs the New NVE, which in turn sends the RARP
   "reply reverse" message.  This completes IP address assignment to
   the migrating VM.

   Other NVEs communicating with this VM could still have the old ARP
   entry.  If any VMs attached to those NVEs need to communicate with
   the VM attached to the New NVE, the old ARP entries might be used.
   Thus, the packets are delivered to the Old NVE.  The Old NVE MUST
   tunnel these in-flight packets to the New NVE.

   When an ARP entry for those VMs times out, their corresponding NVEs
   should access the NVA for an update.

     IPv6 operation is slightly different:

   In IPv6, immediately after the move, the VM sends an unsolicited
   neighbor advertisement message containing its IPv6 address and
   Layer-2 MAC address from its New NVE.  This message is sent to the
   IPv6 Solicited Node Multicast Address corresponding to the target
   address, which is the VM's IPv6 address.  The NVE receiving this
   message should send a request to update the VM's neighbor cache
   entry in the central directory of the NVA.  The NVA's neighbor
   cache entry should include the IPv6 address of the VM, the MAC
   address of the VM, and the NVE IPv6 address.  An NVE-to-NVA
   protocol is used for this purpose [RFC8014].
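
   The following Python sketch illustrates how the Solicited Node
   Multicast Address mentioned above is derived from the VM's IPv6
   address (the ff02::1:ff00:0/104 prefix combined with the low-order
   24 bits of the address); the example address is a hypothetical
   documentation value.

      import ipaddress

      SNMC_PREFIX = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

      def solicited_node_mcast(vm_ipv6: str) -> ipaddress.IPv6Address:
          low24 = int(ipaddress.IPv6Address(vm_ipv6)) & 0xFFFFFF
          return ipaddress.IPv6Address(SNMC_PREFIX | low24)

      # Example: 2001:db8::4a maps to ff02::1:ff00:4a
      print(solicited_node_mcast("2001:db8::4a"))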

   Other NVEs communicating with this VM might still use the old
   neighbor cache entry.  If any VM attached to those NVEs needs to
   communicate with the VM attached to the New NVE, it could use the
   old neighbor cache entry.  Thus, the packets are delivered to the
   Old NVE.  The Old NVE MUST tunnel these in-flight packets to the
   New NVE.

   When a neighbor cache entry for those VMs times out, their
   corresponding NVEs should access the NVA for an update.

4.2. Task Migration in Layer-3 Network

   Layer-2 based data center networks become quickly prohibitive
   because ARP/neighbor caches don't scale.  Scaling can be
   accomplished seamlessly in Layer-3 data center networks by giving
   each virtual network an IP subnet and a default route that points
   to its NVE.  This means no explosion of ARP/neighbor caches in VMs
   and NVEs (just one ARP/neighbor cache entry for the default route),
   and there is no need to carry an Ethernet header in the
   encapsulation [RFC7348], which saves at least 16 bytes.

   Even though the terms VM and Task are used interchangeably in this
   document, the term Task is used in the context of Layer-3 migration
   mainly to place a slight emphasis on the moving entity (Task) that
   is instantiated on a VM or a container.

   Traditional Layer-3 based data center networks require the IP
   address of the task to change after moving, because the prefixes of
   the IP address usually reflect locations.  It is necessary to have
   an IP based task migration solution that allows IP addresses to
   stay the same after moving to different locations.  The Identifier
   Locator Addressing or ILA [I-D.herbert-nvo3-ila] is one such
   solution.

   Because broadcast is not available in Layer-3 based networks,
   multicast of neighbor solicitations in IPv6 would need to be
   emulated.

   Cold task migration, which is a common practice in many data
   centers, involves the following steps, illustrated by the sketch
   after this list:

   - Stop running the task.
   - Package the runtime state of the task.
   - Send the runtime state of the task to the New NVE where the task
     is to run.
   - Instantiate the task's state on the new machine.
   - Start the task, continuing from the point at which it was
     stopped.
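
   The following Python sketch walks through these steps with a toy
   in-memory stand-in for the NVE-side agent; the class and method
   names are illustrative assumptions, not interfaces defined by this
   document.

      class Nve:
          """Toy stand-in for an NVE-side migration agent."""
          def __init__(self):
              self.tasks = {}              # task id -> state blob
          def stop_task(self, tid):
              self.tasks[tid]["running"] = False
          def package_state(self, tid):
              return self.tasks.pop(tid)   # runtime state of the task
          def receive_state(self, tid, state):
              self.tasks[tid] = state      # instantiate on new machine
          def start(self, tid):
              self.tasks[tid]["running"] = True

      def cold_migrate(tid, old_nve, new_nve):
          old_nve.stop_task(tid)                 # 1. stop the task
          state = old_nve.package_state(tid)     # 2. package its state
          new_nve.receive_state(tid, state)      # 3/4. send to New NVE
          new_nve.start(tid)                     # 5. resume the task

      old, new = Nve(), Nve()
      old.tasks["t1"] = {"running": True, "mem": b"..."}
      cold_migrate("t1", old, new)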

     Address migration and connection migration in moving tasks or VMs
     are addressed next.

4.2.1. Address and Connection Migration in Task Migration

   Address migration is achieved as follows, with a mapping-table
   sketch after this list:

   - Configure the IPv4/v6 address on the target Task.
   - Suspend use of the address on the old Task.  This includes
     handling established connections.  A state may be established to
     drop packets or to send ICMPv4 or ICMPv6 Destination Unreachable
     messages when packets to the migrated address are received.
   - Push the new mapping to the VMs.  Communicating VMs will learn of
     the new mapping via a control plane, either by participating in a
     protocol for mapping propagation or by getting the new mapping
     from a central database such as the Domain Name System (DNS).
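
   As an illustration of the mapping push in the last step, the
   following Python sketch models a simple identifier-to-locator
   mapping table of the kind used by ILA-like solutions; the table
   layout and function names are assumptions for illustration only,
   not structures defined by [I-D.herbert-nvo3-ila].

      mapping = {}                  # task identifier -> locator (NVE)

      def push_mapping(task_id: str, new_locator: str) -> None:
          """Control plane pushes the task's new locator to peers."""
          mapping[task_id] = new_locator

      def lookup(task_id: str) -> str:
          return mapping[task_id]   # consulted when encapsulating

      push_mapping("task-1", "2001:db8:100::1")   # after the move
      assert lookup("task-1") == "2001:db8:100::1"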

   Connection migration involves reestablishing the existing TCP
   connections of the task in the new place.

   The simplest course of action is to drop TCP connections across a
   migration.  Since migrations are relatively rare events, it is
   conceivable that TCP connections could be automatically closed in
   the network stack during a migration event.  If the applications
   running are known to handle this gracefully (i.e. reopen dropped
   connections), then this may be viable.

   A more involved approach to connection migration entails pausing
   the connection, packaging the connection state and sending it to
   the target, instantiating the connection state in the peer stack,
   and restarting the connection.  From the time the connection is
   paused to the time it is running again in the new stack, packets
   received for the connection could be silently dropped.  For some
   period of time, the old stack will need to keep a record of the
   migrated connection.  If it receives a packet, it can either
   silently drop the packet or forward it to the new location, as
   described in Section 5.
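
   The following Python sketch models the record kept by the old
   stack: each migrated flow is either silently dropped or forwarded
   to the new location.  The flow-tuple layout and helper names are
   illustrative assumptions.

      from typing import Dict, Optional, Tuple

      Flow = Tuple[str, int, str, int]   # src ip, src port,
                                         # dst ip, dst port

      migrated: Dict[Flow, Optional[str]] = {}  # flow -> new locator

      def handle_packet(flow: Flow, payload: bytes) -> None:
          new_loc = migrated.get(flow)
          if new_loc:
              tunnel_to(new_loc, payload)  # forward to new location
          # else: silently drop while the connection migrates

      def tunnel_to(locator: str, payload: bytes) -> None:
          pass                             # encapsulation stub

      migrated[("192.0.2.1", 5001, "192.0.2.10", 80)] = "2001:db8::2"
      handle_packet(("192.0.2.1", 5001, "192.0.2.10", 80), b"data")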

5. Handling Packets in Flight

   The Old NVE may receive packets from the VM's ongoing
   communications.  These packets should not be lost; they should be
   sent to the New NVE to be delivered to the VM.  The steps involved
   in handling packets in flight are as follows (a sketch of the Old
   NVE's forwarding logic appears after these steps):

   Preparation Step:  It takes some time, possibly a few seconds, for
      a VM to move from its Old NVE to a New NVE.  During this period,
      a tunnel needs to be established so that the Old NVE can forward
      packets to the New NVE.  The Old NVE gets the New NVE address
      from the NVA in the request to move the VM.  The Old NVE can
      store the New NVE address for the VM with a timer.  When the
      timer expires, the entry for the New NVE for the VM can be
      deleted.

   Tunnel Establishment - IPv6:  In-flight packets are tunneled to the
      New NVE using an encapsulation protocol such as VXLAN in IPv6.

   Tunnel Establishment - IPv4:  In-flight packets are tunneled to the
      New NVE using an encapsulation protocol such as VXLAN in IPv4.

   Tunneling Packets - IPv6:  IPv6 packets received for the migrating
      VM are encapsulated in an IPv6 header at the Old NVE.  The New
      NVE decapsulates the packet and sends the IPv6 packet to the
      migrating VM.

   Tunneling Packets - IPv4:  IPv4 packets received for the migrating
      VM are encapsulated in an IPv4 header at the Old NVE.  The New
      NVE decapsulates the packet and sends the IPv4 packet to the
      migrating VM.

   Stop Tunneling Packets:  The Old NVE stops tunneling when it stops
      receiving packets destined to the VM that has just moved to the
      New NVE.  The timer for storing the New NVE address for the VM
      should be long enough for all other NVEs that need to
      communicate with the VM to get their NVE-VM cache entries
      updated.
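
   The following Python sketch models the Old NVE's timer-based entry
   described in the Preparation and Stop Tunneling steps above; the
   timeout value and helper names are illustrative assumptions.

      import time

      TIMEOUT = 60.0                 # seconds; deployment-specific

      class OldNveForwarder:
          def __init__(self):
              self.moved = {}        # vm address -> (new nve, expiry)

          def vm_moved(self, vm, new_nve):
              """Called when the NVA requests the move of a VM."""
              self.moved[vm] = (new_nve, time.monotonic() + TIMEOUT)

          def forward(self, vm, packet):
              entry = self.moved.get(vm)
              if entry is None or time.monotonic() > entry[1]:
                  self.moved.pop(vm, None)  # timer expired: delete
                  return None               # no longer tunneled
              new_nve, _ = entry
              return encapsulate(packet, new_nve)  # e.g. VXLAN

      def encapsulate(packet: bytes, nve: str) -> bytes:
          return packet              # encapsulation stub

      fwd = OldNveForwarder()
      fwd.vm_moved("10.0.0.5", "203.0.113.7")
      tunneled = fwd.forward("10.0.0.5", b"ip packet bytes")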

6. Moving Local State of VM

   In addition to the VM mobility related signaling (VM Mobility
   Registration Request/Reply), the VM state needs to be transferred
   to the New NVE.  The state includes its memory and file system if
   the VM cannot access the memory and the file system after moving to
   the New NVE.  The Old NVE opens a TCP connection with the New NVE
   over which the VM's memory state is transferred.

   The file system or local storage is more complicated to transfer.
   The transfer should ensure consistency, i.e. the VM at the New NVE
   should find the same file system it had at the Old NVE.
   Pre-copying is a commonly used technique for transferring the file
   system.  First the whole disk image is transferred while the VM
   continues to run.  After the VM is moved, any changes in the file
   system are packaged together and sent to the New NVE, which applies
   these changes to the file system locally at the destination.
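
   The following self-contained Python sketch illustrates the
   pre-copy technique with a toy block-device model; the structure is
   an assumption for illustration, not an interface defined by this
   document.

      class Disk:
          def __init__(self, blocks):
              self.blocks = dict(blocks)   # block id -> bytes
              self.dirty = set()           # ids written since copy
          def write(self, bid, data):
              self.blocks[bid] = data
              self.dirty.add(bid)

      def pre_copy(src: Disk, dst: Disk) -> None:
          """Bulk copy while the VM continues to run."""
          for bid, data in src.blocks.items():
              dst.blocks[bid] = data
          src.dirty.clear()

      def finalize(src: Disk, dst: Disk) -> None:
          """After the move, ship only the changed blocks."""
          for bid in src.dirty:
              dst.blocks[bid] = src.blocks[bid]

      src, dst = Disk({0: b"base"}), Disk({})
      pre_copy(src, dst)
      src.write(0, b"changed while copying")   # VM still running
      finalize(src, dst)
      assert dst.blocks == src.blocks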

7. Handling of Hot, Warm and Cold VM Mobility

   Cold VM mobility (or migration) refers to the VM being completely
   shut down at the Old NVE before being restarted at the New NVE.
   Therefore, all transport services to the VM are restarted.

   Upon starting at the New NVE, the VM should send an ARP or Neighbor
   Discovery message.  Cold VM mobility also allows the Old NVE and
   all communicating NVEs to time out the ARP/neighbor cache entries
   of the VM.  It is necessary for the NVA to push the updated
   ARP/neighbor cache entry to the NVEs, or for the NVEs to pull the
   updated ARP/neighbor cache entry from the NVA.

   Cold VM mobility can be facilitated by a cold standby entity
   receiving scheduled backup information.  The cold standby entity
   can be a VM or another form factor, which is beyond the scope of
   this document.  The cold mobility option can be used for
   non-critical applications and services that can tolerate
   interrupted TCP connections.

   Warm VM mobility refers to the backup entities receiving backup
   information at more frequent intervals.  The duration of the
   interval determines the warmth of the option.  The larger the
   duration, the less warm (and hence the colder) the Warm VM mobility
   option becomes.  A sketch of such interval-based state mirroring
   follows.
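
   The following Python sketch illustrates interval-based state
   mirroring for the warm option; the interval value and state model
   are illustrative assumptions.

      import time

      MIRROR_INTERVAL = 30.0         # seconds; smaller means "warmer"

      def mirror_once(primary: dict, standby: dict) -> None:
          """Take one snapshot of the primary's state."""
          standby.clear()
          standby.update(primary)

      def mirror_loop(primary: dict, standby: dict,
                      rounds: int) -> None:
          """Mirror at regular, configurable intervals."""
          for i in range(rounds):
              mirror_once(primary, standby)
              if i < rounds - 1:
                  time.sleep(MIRROR_INTERVAL)

      primary_state = {"mem": b"...", "disk": b"..."}
      standby_state: dict = {}
      mirror_loop(primary_state, standby_state, rounds=1)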

   There is also a Hot Standby option in addition to Hot Mobility,
   where there are VMs in both the primary and secondary NVEs.  They
   have identical information and can provide services simultaneously,
   as in load-share mode of operation.  If the VMs in the primary NVE
   fail, there is no need to move the VMs to the secondary NVE because
   the VMs in the secondary NVE already contain identical information.
   The hot standby option is the most costly mechanism, and hence it
   is utilized only for mission-critical applications and services.
   In the hot standby option, regarding TCP connections, one option is
   to start with and maintain TCP connections to two different VMs at
   the same time.  The least loaded VM responds first and picks up
   providing the service, while the sender (origin) still continues to
   receive ACKs from the more heavily loaded (secondary) VM but
   chooses not to use the service of the secondary responding VM.  If
   the situation (the loading condition of the primary responding VM)
   changes, the secondary responding VM may start providing service to
   the sender (origin).

8. VM Operation

   VMs are not involved in any mobility signaling.  Once a VM moves to
   a New NVE, its IP address does not change and the VM should be able
   to continue to receive packets to its address(es).

   A VM needs to send a gratuitous ARP message or an unsolicited
   Neighbor Advertisement message upstream after each move.

   VM lifecycle management is a complicated task, which is beyond the
   scope of this document.  Not only does it involve monitoring server
   utilization and balancing the distribution of workload, it also
   needs to manage the VM migration from one server to another.

9. Security Considerations

   Security threats for the data and control plane for overlay
   networks are discussed in [RFC8014].  There are several issues in a
   multi-tenant environment that create problems.  In Layer-2 based
   overlay data center networks, a lack of security in VXLAN and
   corruption of the VNI can lead to delivery to the wrong tenant.
   Also, ARP in IPv4 and ND in IPv6 are not secure, especially if we
   accept gratuitous versions.  When these are done over a UDP
   encapsulation, like VXLAN, the problem is worse, since it is
   trivial for a non-trusted entity to spoof UDP packets.

   In Layer-3 based overlay data center networks, the problem of
   address spoofing may arise.  An NVE may have untrusted tasks
   attached.  This usually happens in cases like VMs (tasks) running
   third party applications.  This requires the use of stronger
   security mechanisms.

10. IANA Considerations

   This document makes no request to IANA.

11. Acknowledgments

   The authors are grateful to Bob Briscoe, David Black, Dave R.
   Worley, Qiang Zu, and Andrew Malis for helpful comments.

12. Change Log

   .  Submitted version -00 as a working group draft after adoption.

   .  Submitted version -01 with these changes: references are
      updated; added packets in flight definition to Section 2.

   .  Submitted version -02 with updated address.

   .  Submitted version -03 to fix the nits.

   .  Submitted version -04 in reference to the WG Last Call comments.

   .  Submitted version -05 to address IETF LC comments from the TSV
      area.

13. References

13.1. Normative References

   [RFC0826]  Plummer, D., "An Ethernet Address Resolution Protocol:
              Or Converting Network Protocol Addresses to 48.bit
              Ethernet Address for Transmission on Ethernet Hardware",
              STD 37, RFC 826, DOI 10.17487/RFC0826, November 1982,
              <https://www.rfc-editor.org/info/rfc826>.

   [RFC0903]  Finlayson, R., Mann, T., Mogul, J., and M. Theimer, "A
              Reverse Address Resolution Protocol", STD 38, RFC 903,
              DOI 10.17487/RFC0903, June 1984,
              <https://www.rfc-editor.org/info/rfc903>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2629]  Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
              DOI 10.17487/RFC2629, June 1999,
              <https://www.rfc-editor.org/info/rfc2629>.

   [RFC4861]  Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
              "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
              DOI 10.17487/RFC4861, September 2007,
              <https://www.rfc-editor.org/info/rfc4861>.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P.,
              Kreeger, L., Sridhar, T., Bursell, M., and C. Wright,
              "Virtual eXtensible Local Area Network (VXLAN): A
              Framework for Overlaying Virtualized Layer 2 Networks
              over Layer 3 Networks", RFC 7348, DOI 10.17487/RFC7348,
              August 2014, <https://www.rfc-editor.org/info/rfc7348>.

   [RFC7364]  Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
              Kreeger, L., and M. Napierala, "Problem Statement:
              Overlays for Network Virtualization", RFC 7364,
              DOI 10.17487/RFC7364, October 2014,
              <https://www.rfc-editor.org/info/rfc7364>.

   [RFC8014]  Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T.
              Narten, "An Architecture for Data-Center Network
              Virtualization over Layer 3 (NVO3)", RFC 8014,
              DOI 10.17487/RFC8014, December 2016,
              <https://www.rfc-editor.org/info/rfc8014>.

13.2. Informative References

   [I-D.herbert-nvo3-ila]
              Herbert, T. and P. Lapukhov, "Identifier-locator
              addressing for IPv6", draft-herbert-nvo3-ila-04 (work in
              progress), March 2017.

Authors' Addresses

   Linda Dunbar
   Futurewei
   Email: ldunbar@futurewei.com

   Behcet Sarikaya
   Denpel Informatique
   Email: sarikaya@ieee.org

   Bhumip Khasnabish
   Independent
   Email: vumip1@gmail.com

   Tom Herbert
   Intel
   Email: tom@herbertland.com

   Saumya Dikshit
   Aruba-HPE
   Bangalore, Karnataka, India
   Email: saumya.dikshit@hpe.com