Network Working Group                                          L. Dunbar
Internet Draft                                                 Futurewei
Intended status: Informational                               B. Sarikaya
Expires: December 15, 2020                           Denpel Informatique
                                                           B. Khasnabish
                                                             Independent
                                                              T. Herbert
                                                                   Intel
                                                              S. Dikshit
                                                               Aruba-HPE
                                                           June 15, 2020

    Virtual Machine Mobility Solutions for L2 and L3 Overlay Networks
                        draft-ietf-nvo3-vmm-15

Abstract

   This document describes virtual machine (VM) mobility solutions
   commonly used in data centers built with an overlay network. It
   describes the solutions and the impact of moving VMs, or
   applications, from one rack to another connected by the overlay
   network.

   For Layer 2, the solution is based on using an NVA (Network
   Virtualization Authority)-to-NVE (Network Virtualization Edge)
   protocol to update ARP (Address Resolution Protocol) tables or
   neighbor cache entries after a VM moves from an old NVE to a new
   NVE. For Layer 3, it is based on address and connection migration
   after the move.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   This document may not be modified, and derivative works of it may
   not be created, except to publish it as an RFC and to translate it
   into languages other than English.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on December 15, 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction...................................................3
   2. Conventions used in this document..............................4
   3. Requirements...................................................5
   4. Overview of the VM Mobility Solutions..........................6
      4.1. Inter-VN and External Communication.......................6
      4.2. VM Migration in a Layer 2 Network.........................6
      4.3. VM Migration in Layer-3 Network...........................8
      4.4. Address and Connection Management in VM Migration.........9
   5. Handling Packets in Flight....................................10
   6. Moving Local State of VM......................................10
   7. Handling of Hot, Warm and Cold VM Mobility....................11
   8. Other Options.................................................12
   9. VM Lifecycle Management.......................................13
   10. Security Considerations......................................13
   11. IANA Considerations..........................................14
   12. Acknowledgments..............................................14
   13. Change Log...................................................14
   14. References...................................................14
      14.1. Normative References....................................14
      14.2. Informative References..................................16

1. Introduction

   This document describes overlay-based data center network solutions
   in support of multitenancy and VM (Virtual Machine) mobility. Being
   able to move VMs dynamically from one server to another makes
   dynamic load balancing and work distribution possible. Therefore,
   dynamic VM mobility is highly desirable for large-scale multi-tenant
   DCs.

   This document is strictly within the DCVPN, as defined by the NVO3
   Framework [RFC7365]. The intent is to describe Layer 2 and Layer 3
   network behavior when VMs are moved from one NVE to another. This
   document assumes that the VM's move is initiated by the VM
   management system, i.e. a planned move. How and when to move VMs is
   out of the scope of this document. [RFC7666] already describes the
   MIB for VMs controlled by a hypervisor. The impact of VM mobility on
   higher layer protocols and applications is also outside its scope.

   Many large DCs (Data Centers), especially cloud DCs, host tasks (or
   workloads) for multiple tenants. A tenant can be an organization or
   a department of an organization. There are communications among
   tasks belonging to one tenant and communications among tasks
   belonging to different tenants or with external entities.

   Server virtualization, which is used in almost all of today's data
   centers, enables many VMs to run on a single physical computer or
   server sharing the processor/memory/storage. Network connectivity
   among VMs is provided by the network virtualization edge (NVE)
   [RFC8014]. It is highly desirable [RFC7364] to allow VMs to be moved
   dynamically (live, hot, or cold move) from one server to another for
   dynamic load balancing or optimized work distribution.

   There are many challenges and requirements related to VM mobility in
   large data centers, including dynamic attachment and detachment of
   VMs to/from NVEs. In addition, retaining IP addresses after a move
   is a key requirement [RFC7364]. Such a requirement is needed in
   order to maintain existing transport layer connections.

   In traditional Layer-3 based networks, retaining IP addresses after
   a move is generally not recommended because frequent moves will
   cause fragmented IP addresses, which introduces complexity in IP
   address management.

   In view of the many VM mobility schemes that exist today, there is a
   desire to document comprehensive VM mobility solutions that cover
   both IPv4 and IPv6.
   Large data center networks can be organized as one large Layer 2
   network geographically distributed across several buildings or
   cities, or as a Layer 3 network with a large number of host routes
   that cannot be aggregated because VMs frequently move from one
   location to another without changing their IP addresses. The
   connectivity across Layer 2 boundaries can be achieved by the NVE
   functioning as a Layer 3 gateway router across bridging domains.

2. Conventions used in this document

   This document uses the terminology defined in [RFC7364]. In
   addition, we make the following definitions:

   VM: Virtual Machine

   Task: A task is a program instantiated or running on a VM or a
   container. Tasks running in VMs or containers can be migrated from
   one server to another. We use task, workload and VM interchangeably
   in this document.

   Hot VM Mobility: A given VM could be moved from one server to
   another in a running state without terminating the VM.

   Warm VM Mobility: In case of warm VM mobility, the VM states are
   mirrored to the secondary server (or domain) at regular intervals.
   This reduces the overhead and complexity, but it may also lead to a
   situation in which both servers do not contain the exact same data
   (state information).

   Cold VM Mobility: A given VM could be moved from one server to
   another in a stopped or suspended state.

   Old NVE: refers to the NVE that packets were forwarded to before
   the migration.

   New NVE: refers to the NVE that the VM is attached to after the
   migration.

   Packets in flight: refers to the packets received by the old NVE,
   sent by correspondents that still hold the old ARP or neighbor
   cache entry from before the VM or task migration.

   Users of VMs in diskless systems or systems not using configuration
   files are called end user clients.

   Cloud DC: Third-party data centers that host applications, tasks or
   workloads owned by different organizations or tenants.

3. Requirements

   This section states requirements on data center network VM
   mobility.

   - The data center network should support both IPv4 and IPv6 VM
     mobility.

   - VM mobility should not require changing a VM's IP address(es)
     after the move.

   - "Hot Migration" requires the transport service to continue across
     the move, while in "Cold Migration" the transport service is
     restarted, i.e. the task is stopped on the old NVE, moved to the
     new NVE, and then restarted. Not all DCs support "Hot Migration".
     DCs that only support "Cold Migration" should make their
     customers aware of the potential service interruption during a
     Cold Migration.

   - VM mobility solutions/procedures should minimize triangular
     routing except for handling packets in flight.

   - VM mobility solutions/procedures should not need to use tunneling
     except for handling packets in flight.

4. Overview of the VM Mobility Solutions

4.1. Inter-VN and External Communication

   Inter-VN (Virtual Network) communication refers to communication
   among tenants (or hosts) belonging to different VNs. Those tenants
   can be attached to NVEs co-located in the same data center or in
   different data centers. When a VM communicates with an external
   entity, the VM is effectively communicating with a peer in a
   different network or a globally reachable host.
   This document assumes that inter-VN communication and communication
   with external entities go through NVO3 Gateway functionality as
   described in Section 5.3 of [RFC8014]. NVO3 Gateways relay traffic
   onto and off of a virtual network, enabling communication both
   across different VNs and with external entities. NVO3 Gateway
   functionality enforces appropriate policies to control communication
   among VNs and with external entities (e.g., hosts).

   Moving a VM to a new NVE may move the VM away from the NVO3
   Gateway(s) used by the VM's traffic; e.g., some traffic may be
   better handled by an NVO3 Gateway that is closer to the new NVE than
   the NVO3 Gateway that was used before the VM move. If NVO3 Gateway
   changes are not possible for some reason, then the VM's traffic can
   continue to use the prior NVO3 Gateway(s), which may have some
   drawbacks, e.g., longer network paths.

4.2. VM Migration in a Layer 2 Network

   In a Layer-2 based approach, a VM moving to another NVE does not
   change its IP address. But since this VM is now under a new NVE,
   previously communicating NVEs may continue sending their packets to
   the old NVE. Therefore, the previously communicating NVEs need to
   promptly update their Address Resolution Protocol (ARP) caches for
   IPv4 [RFC0826] or neighbor caches for IPv6 [RFC4861]. If the VM
   being moved communicates with external entities, the NVO3 Gateway
   needs to be notified of the new NVE to which the VM has moved.

   In IPv4, immediately after the move the VM should send a gratuitous
   ARP request message containing its IPv4 address and Layer 2 MAC
   address under its new NVE. Upon receiving this message, the new NVE
   can update its ARP cache. The new NVE should send a notification of
   the newly attached VM to the central directory [RFC7067] embedded in
   the NVA to update the mapping of the moving VM's IPv4 address and
   MAC address along with the new NVE address. An NVE-to-NVA protocol
   is used for this purpose [RFC8014]. The old NVE, upon the VM being
   moved away, should send an ARP scan to all its attached VMs to
   refresh its ARP cache.

   Reverse ARP (RARP), which enables a host to discover its IPv4
   address when it boots from a local server [RFC0903], is not used by
   VMs if the VM already knows its IPv4 address (the most common
   scenario). Next, we describe a case where RARP is used.

   There are some vendor deployments (diskless systems or systems
   without configuration files) wherein the VM's user, i.e. the
   end-user client, asks for the same MAC address upon migration. This
   can be achieved by the client sending a RARP request message, which
   carries the MAC address, looking for an IP address allocation. The
   server, in this case the new NVE, needs to communicate with the NVA,
   just like in the gratuitous ARP case, to ensure that the same IPv4
   address is assigned to the VM. The NVA uses the MAC address as the
   key to search its ARP cache for the IP address and informs the new
   NVE, which in turn sends the RARP reply message. This completes the
   IP address assignment to the migrating VM.

   Other NVEs that have attached VMs, or the NVO3 Gateway that has
   external entities communicating with this VM, may still have the old
   ARP entry. To avoid old ARP entries being used by other NVEs, the
   old NVE, upon discovering that a VM is detached, should send a
   notification to all other NVEs and its NVO3 Gateway to time out the
   ARP cache entry for the VM [RFC8171].
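   The following is a minimal sketch, in Python, of constructing the
   gratuitous ARP request described above. It assumes a Linux host
   (AF_PACKET raw sockets, root privileges); the function name and
   parameters are illustrative only, not part of any protocol
   specification.

      import socket, struct

      def send_gratuitous_arp(ifname, vm_mac, vm_ipv4):
          # Gratuitous ARP [RFC0826]: sender and target protocol
          # addresses are both the VM's own IPv4 address; the frame
          # is broadcast so the new NVE can update its ARP cache.
          mac = bytes.fromhex(vm_mac.replace(":", ""))
          ip = socket.inet_aton(vm_ipv4)
          bcast = b"\xff" * 6
          eth = bcast + mac + struct.pack("!H", 0x0806)   # EtherType ARP
          arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1) # ARP request
          arp += mac + ip + bcast + ip                    # SHA,SPA,THA,TPA
          sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
          sock.bind((ifname, 0))
          sock.send(eth + arp)
          sock.close()

      # Example: announce the VM's unchanged address under the new NVE.
      # send_gratuitous_arp("eth0", "52:54:00:12:34:56", "192.0.2.10")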
   When an NVE (including the old NVE) receives a packet or an ARP
   request destined for a VM (i.e., its MAC or IP address) that is not
   in the NVE's ARP cache, the NVE should send a query to the NVA's
   directory service to get the associated NVE address for the VM. This
   is how the old NVE tunnels in-flight packets to the new NVE to avoid
   packet loss.

   When the VM's address is IPv6, the operation is similar: after the
   move, the VM immediately sends an unsolicited neighbor advertisement
   message containing its IPv6 address and Layer-2 MAC address under
   its new NVE. This message is sent to the IPv6 Solicited-Node
   Multicast Address corresponding to the target address, which is the
   VM's IPv6 address. The NVE receiving this message should send a
   request to update the VM's neighbor cache entry in the central
   directory of the NVA. The NVA's neighbor cache entry should include
   the IPv6 address of the VM, the MAC address of the VM, and the NVE
   IPv6 address. An NVE-to-NVA protocol is used for this purpose
   [RFC8014].

   To prevent other NVEs from communicating with this VM using the old
   neighbor cache entry, the old NVE upon discovering that a VM has
   been moved, or the VM management system that initiates the move,
   should send a notification to all NVEs to time out the ND cache
   entry for the VM being moved. When an ND cache entry for such a VM
   times out, its NVE should send a query to the NVA for an update.

4.3. VM Migration in Layer-3 Network

   Traditional Layer-3 based data center networks usually have all
   hosts (tasks) within one subnet attached to one NVE. By this design,
   the NVE becomes the default route for all hosts (tasks) within the
   subnet. But this design requires the IP address of a host (task) to
   change after the move, to comply with the prefixes of the IP
   addresses under the new NVE. A Layer 3 VM migration solution allows
   IP addresses to stay the same after moving to a different location.
   The Identifier Locator Addressing (ILA) [I-D.herbert-intarea-ila] is
   one such solution.

   Because broadcast is not available in Layer-3 based networks, the
   multicast of neighbor solicitations in IPv6 and ARP in IPv4 would
   need to be emulated. Scalability of this multicast (IPv6 ND and IPv4
   ARP) can become problematic because the hosts belonging to one
   subnet (or one VLAN) can span many NVEs. Sending broadcast traffic
   to all NVEs can cause unnecessary traffic in the DCN if the hosts
   belonging to one subnet are attached to only a small number of NVEs.
   It is preferable to have a directory [RFC7067] or an NVA that can
   inform an NVE of the other NVEs to which a specific subnet may be
   attached, and that gets periodic reports from each NVE of all the
   subnets being attached/detached, as described in [RFC8171].

   Hot VM migration in Layer 3 involves coordination among many
   entities, such as the VM management system and the NVA. Cold task
   migration, which is a common practice in many data centers, involves
   the following steps (a sketch follows at the end of this
   subsection):

   - Stop running the task.

   - Package the runtime state of the task.

   - Send the runtime state of the task to the new NVE where the task
     is to run.

   - Instantiate the task's state on the new machine.

   - Start the task, continuing from the point at which it was
     stopped.

   [RFC7666] has a more detailed description of the state machine of
   VMs controlled by a hypervisor.
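   A minimal sketch of these cold migration steps in Python; the task,
   NVE and NVA objects and all of their methods are hypothetical
   placeholders for whatever orchestration API a given VM management
   system provides.

      import pickle

      def cold_migrate(task, new_nve, nva):
          # Step 1: stop running the task on the old NVE.
          task.stop()
          # Step 2: package the runtime state of the task.
          state = pickle.dumps(task.runtime_state())
          # Step 3: send the runtime state to the new NVE.
          new_nve.receive_state(task.id, state)
          # Step 4: instantiate the task's state on the new machine.
          new_task = new_nve.instantiate(task.id, state)
          # Update the NVA's directory so that other NVEs can learn
          # the task's new location (see Section 4.4).
          nva.update_mapping(task.ip, task.mac, new_nve.address)
          # Step 5: start the task, continuing from where it stopped.
          new_task.start()
          return new_task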
4.4. Address and Connection Management in VM Migration

   Since the VM attached to the new NVE needs to be assigned the same
   address that it had under the old NVE, extra processing or
   configuration is needed, such as:

   - Configure the IPv4/v6 address on the target VM/NVE.

   - Suspend use of the address on the old NVE. This includes the old
     NVE sending a query to the NVA upon receiving packets destined
     for the VM that has moved away. If there is no response from the
     NVA identifying the new NVE for the VM, the old NVE can only drop
     the packets. Refer to the VM state machine described in
     [RFC7666].

   - Trigger the NVA to push the new NVE-VM mapping to the other NVEs
     that have attached VMs communicating with the VM being moved.

   Connection management for the applications running on the VM being
   moved involves reestablishing existing TCP connections in the new
   place. The simplest course of action is to drop all TCP connections
   to the applications running on the VM during a migration. If
   migrations are relatively rare events in a data center, the impact
   is relatively small when TCP connections are automatically closed in
   the network stack during a migration event. If the applications
   running are known to handle this gracefully (i.e. reopen dropped
   connections), then this approach may be viable.

   A more involved approach to connection migration entails a proxy to
   the application (or the application itself) pausing the connection,
   packaging the connection state and sending it to the target,
   instantiating the connection state in the peer stack, and restarting
   the connection. From the time the connection is paused to the time
   it is running again in the new stack, packets received for the
   connection could be silently dropped. For some period of time, the
   old stack will need to keep a record of the migrated connection. If
   it receives a packet, it can either silently drop the packet or
   forward it to the new location, as described in Section 5.

5. Handling Packets in Flight

   The old NVE may receive packets from the VM's ongoing
   communications. These packets should not be lost; they should be
   sent to the new NVE to be delivered to the VM. The steps involved in
   handling packets in flight are as follows (a sketch follows the
   steps):

   Preparation Step: It takes some time, possibly a few seconds, for a
   VM to move from its old NVE to a new NVE. During this period, a
   tunnel needs to be established so that the old NVE can forward
   packets to the new NVE. The old NVE gets the new NVE address from
   its NVA, assuming that the NVA is notified when a VM is moved from
   one NVE to another. Which entity manages the VM move, and how the
   NVA gets notified of the move, are out of the scope of this
   document. The old NVE can store the new NVE address for the VM with
   a timer. When the timer expires, the entry of the new NVE for the VM
   can be deleted.

   Tunnel Establishment - IPv6: In-flight packets are tunneled to the
   new NVE using an encapsulation protocol such as VXLAN in IPv6.

   Tunnel Establishment - IPv4: In-flight packets are tunneled to the
   new NVE using an encapsulation protocol such as VXLAN in IPv4.

   Tunneling Packets - IPv6: IPv6 packets received for the migrating VM
   are encapsulated in an IPv6 header at the old NVE. The new NVE
   decapsulates the packet and sends the IPv6 packet to the migrating
   VM.

   Tunneling Packets - IPv4: IPv4 packets received for the migrating VM
   are encapsulated in an IPv4 header at the old NVE. The new NVE
   decapsulates the packet and sends the IPv4 packet to the migrating
   VM.

   Stop Tunneling Packets: Tunneling stops when the timer for storing
   the new NVE address for the VM expires. The timer should be long
   enough for all other NVEs that need to communicate with the VM to
   get their NVE-VM cache entries updated.
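   A minimal sketch of the old NVE's timer-bound forwarding state and
   of the VXLAN [RFC7348] encapsulation used for the tunnel; the
   30-second lifetime is an assumed example value, and the UDP/IP
   socket handling around the encapsulated frame is omitted.

      import struct, time

      VXLAN_PORT = 4789   # IANA-assigned UDP port for VXLAN [RFC7348]

      def vxlan_encap(vni, inner_frame):
          # 8-byte VXLAN header: I flag set (0x08 in the first byte),
          # 24-bit VNI in bytes 4-6, all other bits reserved (zero).
          # The inner Ethernet frame follows; the result is carried
          # in a UDP datagram to the new NVE over IPv4 or IPv6.
          header = struct.pack("!II", 0x08 << 24, vni << 8)
          return header + inner_frame

      class InFlightEntry:
          # The old NVE's record of the VM's new NVE, kept under a
          # timer; once the timer expires, tunneling stops and the
          # entry can be deleted.
          def __init__(self, new_nve_addr, lifetime=30.0):
              self.new_nve_addr = new_nve_addr
              self.expiry = time.monotonic() + lifetime

          def expired(self):
              return time.monotonic() >= self.expiry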
6. Moving Local State of VM

   In addition to the VM mobility related signaling (VM Mobility
   Registration Request/Reply), the VM state needs to be transferred to
   the new NVE. The state includes its memory and file system if the VM
   cannot access the memory and the file system after moving to the new
   NVE. The mechanism of transferring VM state and file system is out
   of the scope of this document. Refer to [RFC7666] for detailed
   information.

7. Handling of Hot, Warm and Cold VM Mobility

   Both Cold and Warm VM mobility (or migration) refer to the VM being
   completely shut down at the old NVE before being restarted at the
   new NVE. Therefore, all transport services to the VM need to be
   restarted. In this document, all VM mobility is initiated by the VM
   management system.

   In case of Cold VM mobility, the exchange of states between the old
   NVE and the new NVE occurs after the VM attached to the old NVE is
   completely shut down, and there is a time delay before the new VM is
   launched. The cold mobility option can be used for
   non-mission-critical applications and services that can tolerate
   interruptions of TCP connections.

   For Hot VM mobility, a VM moving to a new NVE does not change its IP
   address, and the service running on the VM is not interrupted. The
   VM needs to send a gratuitous Address Resolution message or an
   unsolicited Neighbor Advertisement message upstream after each move.

   In case of Warm VM mobility, the functional components of the new
   NVE receive the running status of the VM at frequent intervals.
   Consequently, it takes less time to launch the VM under the new NVE,
   and other NVEs that communicate with the VM can be notified promptly
   about the VM migration. The duration of the time interval determines
   the effectiveness (or benefit) of Warm VM mobility: the larger the
   time duration, the less effective the Warm VM mobility becomes.

   In case of Cold VM mobility, the VM on the old NVE is completely
   shut down before the VM is launched on the new NVE. To minimize the
   chance of previously communicating NVEs sending packets to the old
   NVE, the NVA should push the updated ARP/neighbor cache entry to all
   previously communicating NVEs when the VM is started on the new NVE.
   Alternatively, all NVEs can periodically pull the updated
   ARP/neighbor cache entry from the NVA to shorten the time span
   during which packets are sent to the old NVE. Upon starting at the
   new NVE, the VM should send an ARP or Neighbor Discovery message.
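   A minimal sketch of the push and pull variants described above; the
   directory structure and the push_cache_update() helper are
   hypothetical, standing in for an actual NVE-to-NVA protocol
   [RFC8014].

      def push_cache_update(nve, vm_ip, vm_mac, nve_addr):
          # Placeholder for the NVE-to-NVA protocol message that
          # replaces a stale ARP/neighbor cache entry at an NVE.
          nve.update_cache(vm_ip, vm_mac, nve_addr)

      class ToyNVA:
          def __init__(self):
              self.directory = {}    # vm_ip -> (vm_mac, nve_addr)
              self.interested = {}   # vm_ip -> set of communicating NVEs

          def vm_started(self, vm_ip, vm_mac, new_nve):
              # Push model: record the new location, then push the
              # updated cache entry to every NVE known to have
              # communicated with this VM.
              self.directory[vm_ip] = (vm_mac, new_nve)
              for nve in self.interested.get(vm_ip, set()):
                  push_cache_update(nve, vm_ip, vm_mac, new_nve)

          def pull(self, vm_ip):
              # Pull model: an NVE periodically asks the NVA for the
              # current entry instead of waiting for a push.
              return self.directory.get(vm_ip)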
8. Other Options

   Hot, Warm and Cold mobility are planned activities that are managed
   by the VM management system. For unexpected events, such as
   overloads and failures, a VM might need to move to a new NVE without
   any service interruption; this is called Hot VM Failover in this
   document. In such a case, there are redundant primary and secondary
   VMs whose states are continuously synchronized by methods that are
   outside the scope of this document. If the VM under the primary NVE
   fails, there is no need to actively move the VM to the secondary NVE
   because the VM under the secondary NVE can immediately pick up and
   continue processing the applications/services.

   The Hot VM Failover is transparent to the peers that communicate
   with this VM. This can be achieved via distributed load balancing
   when both the active VM and the standby VM share the same TCP port
   and the same IP address. In the absence of a failure, the new VM can
   pick up providing service while the sender (peer) continues to
   receive Acks from the old VM. If the situation (the loading
   condition of the primary responding VM) changes, the secondary
   responding VM may start providing service to the sender (peers).
   When a failure occurs, the sender (peer) may have to retry the
   request, so this structure is limited to requests that can be safely
   retried.

   If the load balancing functionality is not used, the Hot VM Failover
   can be made transparent to the sender (peers), without relying on
   request retry, by using the techniques that are described in Section
   4. This does not depend on the primary VM or its associated NVE
   doing anything after the failure. This restriction is necessary
   because a failure that affects the primary VM may also cause its
   associated NVE to fail. For example, a physical server failure can
   cause both the VM and its NVE to fail.

   The Hot VM Failover option is the costliest mechanism, and hence it
   is utilized only for mission-critical applications and services.

9. VM Lifecycle Management

   VM lifecycle management is a complicated task, which is beyond the
   scope of this document. Not only does it involve monitoring server
   utilization, balancing the distribution of workloads, etc., but it
   also needs to support seamless migration of VMs from one server to
   another.

10. Security Considerations

   Security threats for the data and control planes of overlay networks
   are discussed in [RFC8014]. ARP (IPv4) and ND (IPv6) are not secure,
   especially if they can be sent gratuitously across tenant boundaries
   in a multi-tenant environment.

   In overlay data center networks, ARP and ND messages can be used to
   mount address spoofing attacks from untrusted VMs and/or other
   untrusted sources. Examples of untrusted VMs are VMs instantiated
   with third-party applications that are not written by the tenant of
   the VMs. Those untrusted VMs can send false ARP (IPv4) and ND (IPv6)
   messages, causing significant overloads in NVEs, NVO3 Gateways, and
   NVAs.

   The attacker can intercept, modify, or even drop in-transit ARP/ND
   messages intended for other VNs and initiate DDoS attacks against
   other VMs attached to the same NVE. A simple black-hole attack can
   be mounted by sending a false ARP/ND message indicating that the
   victim's IP address has moved to the attacker's VM. The same
   technique can also be used to mount man-in-the-middle attacks;
   additional effort is required to ensure that the intercepted traffic
   is eventually delivered to the impacted VMs. The locator-identifier
   mechanism given as an example (ILA) doesn't include secure binding;
   it does not discuss how to securely bind the new locator to the
   identifier.

   Because of those threats, the VM management system needs to apply
   stronger security mechanisms when adding a VM to an NVE. Some
   tenants may have requirements that prohibit their VMs from being
   co-attached to NVEs with other tenants.

   Some data centers deploy additional functionality in their NVO3
   Gateways to mitigate the ARP/ND threats. This may include
   periodically sending each Gateway's ARP/ND cache contents to the NVA
   or another central control system. The objective is to identify the
   ARP/ND cache entries that are not consistent with the locations of
   VMs and their IP addresses as indicated by the VM Management System.
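   As an illustration of that consistency check, here is a minimal
   Python sketch; representing the Gateway cache and the NVA directory
   as plain dictionaries keyed by VM IP address is an assumption made
   only for illustration.

      def audit_gateway_cache(gateway_cache, nva_directory):
          # Compare an NVO3 Gateway's ARP/ND cache against the NVA's
          # authoritative VM location records. Entries whose MAC or
          # NVE location disagrees with the VM Management System's
          # view are reported as possible spoofing or black-holing.
          suspects = []
          for vm_ip, observed in gateway_cache.items():
              expected = nva_directory.get(vm_ip)
              if expected is None or expected != observed:
                  suspects.append((vm_ip, observed, expected))
          return suspects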
11. IANA Considerations

   This document makes no request to IANA.

12. Acknowledgments

   The authors are grateful to Bob Briscoe, David Black, Dave R.
   Worley, Qiang Zu, and Andrew Malis for helpful comments.

13. Change Log

   . Submitted version -00 as a working group draft after adoption.

   . Submitted version -01 with these changes: references are updated;
     added the packets-in-flight definition to Section 2.

   . Submitted version -02 with updated address.

   . Submitted version -03 to fix the nits.

   . Submitted version -04 in reference to the WG Last Call comments.

   . Submitted versions -05, -06, -07, -08, -09, -10, -11, -12, -13,
     and -14 to address IETF LC comments from the TSV area.

14. References

14.1. Normative References

   [RFC0826] Plummer, D., "An Ethernet Address Resolution Protocol: Or
             Converting Network Protocol Addresses to 48.bit Ethernet
             Address for Transmission on Ethernet Hardware", STD 37,
             RFC 826, DOI 10.17487/RFC0826, November 1982,
             <https://www.rfc-editor.org/info/rfc826>.

   [RFC0903] Finlayson, R., Mann, T., Mogul, J., and M. Theimer, "A
             Reverse Address Resolution Protocol", STD 38, RFC 903,
             DOI 10.17487/RFC0903, June 1984,
             <https://www.rfc-editor.org/info/rfc903>.

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119,
             DOI 10.17487/RFC2119, March 1997,
             <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2629] Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
             DOI 10.17487/RFC2629, June 1999,
             <https://www.rfc-editor.org/info/rfc2629>.

   [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
             "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
             DOI 10.17487/RFC4861, September 2007,
             <https://www.rfc-editor.org/info/rfc4861>.

   [RFC7067] Dunbar, L., Eastlake 3rd, D., Perlman, R., and I.
             Gashinsky, "Directory Assistance Problem and High-Level
             Design Proposal", RFC 7067, DOI 10.17487/RFC7067,
             November 2013, <https://www.rfc-editor.org/info/rfc7067>.

   [RFC7348] Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
             L., Sridhar, T., Bursell, M., and C. Wright, "Virtual
             eXtensible Local Area Network (VXLAN): A Framework for
             Overlaying Virtualized Layer 2 Networks over Layer 3
             Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014,
             <https://www.rfc-editor.org/info/rfc7348>.
   [RFC7364] Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
             Kreeger, L., and M. Napierala, "Problem Statement:
             Overlays for Network Virtualization", RFC 7364,
             DOI 10.17487/RFC7364, October 2014,
             <https://www.rfc-editor.org/info/rfc7364>.

   [RFC7365] Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
             Rekhter, "Framework for Data Center (DC) Network
             Virtualization", RFC 7365, DOI 10.17487/RFC7365,
             October 2014, <https://www.rfc-editor.org/info/rfc7365>.

   [RFC7666] Asai, H., MacFaden, M., Schoenwaelder, J., Shima, K., and
             T. Tsou, "Management Information Base for Virtual
             Machines Controlled by a Hypervisor", RFC 7666,
             DOI 10.17487/RFC7666, October 2015,
             <https://www.rfc-editor.org/info/rfc7666>.

   [RFC8014] Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T.
             Narten, "An Architecture for Data-Center Network
             Virtualization over Layer 3 (NVO3)", RFC 8014,
             DOI 10.17487/RFC8014, December 2016,
             <https://www.rfc-editor.org/info/rfc8014>.

   [RFC8171] Eastlake 3rd, D., Dunbar, L., Perlman, R., and Y. Li,
             "Edge Directory Assistance Mechanisms", RFC 8171,
             DOI 10.17487/RFC8171, June 2017,
             <https://www.rfc-editor.org/info/rfc8171>.

14.2. Informative References

   [I-D.herbert-intarea-ila]
             Herbert, T. and P. Lapukhov, "Identifier-locator
             addressing for IPv6", draft-herbert-intarea-ila-04 (work
             in progress), March 2017.

Authors' Addresses

   Linda Dunbar
   Futurewei
   Email: ldunbar@futurewei.com

   Behcet Sarikaya
   Denpel Informatique
   Email: sarikaya@ieee.org

   Bhumip Khasnabish
   Info.: https://about.me/bhumip
   Email: vumip1@gmail.com

   Tom Herbert
   Intel
   Email: tom@herbertland.com

   Saumya Dikshit
   Aruba-HPE
   Bangalore, India
   Email: saumya.dikshit@hpe.com