--- 1/draft-ietf-mboned-dc-deploy-03.txt  2019-02-07 14:13:11.767949738 -0800
+++ 2/draft-ietf-mboned-dc-deploy-04.txt  2019-02-07 14:13:11.803950628 -0800
@@ -1,19 +1,19 @@
 MBONED                                                        M. McBride
 Internet-Draft                                                    Huawei
 Intended status: Informational                               O. Komolafe
-Expires: December 31, 2018                               Arista Networks
-                                                           June 29, 2018
+Expires: August 11, 2019                                 Arista Networks
+                                                       February 07, 2019
 
 
                   Multicast in the Data Center Overview
-                     draft-ietf-mboned-dc-deploy-03
+                     draft-ietf-mboned-dc-deploy-04
 
 Abstract
 
    The volume and importance of one-to-many traffic patterns in data
    centers are likely to increase significantly in the future.  Reasons
    for this increase are discussed and then attention is paid to the
    manner in which this traffic pattern may be judiciously handled in
    data centers.  The intuitive solution of deploying conventional IP
    multicast within data centers is explored and evaluated.  Thereafter,
    a number of emerging innovative approaches are described before a
@@ -27,57 +27,57 @@
    Internet-Drafts are working documents of the Internet Engineering
    Task Force (IETF).  Note that other groups may also distribute
    working documents as Internet-Drafts.  The list of current Internet-
    Drafts is at https://datatracker.ietf.org/drafts/current/.
 
    Internet-Drafts are draft documents valid for a maximum of six months
    and may be updated, replaced, or obsoleted by other documents at any
    time.  It is inappropriate to use Internet-Drafts as reference
    material or to cite them other than as "work in progress."
 
-   This Internet-Draft will expire on December 31, 2018.
+   This Internet-Draft will expire on August 11, 2019.
 
 Copyright Notice
 
-   Copyright (c) 2018 IETF Trust and the persons identified as the
+   Copyright (c) 2019 IETF Trust and the persons identified as the
    document authors.  All rights reserved.
 
    This document is subject to BCP 78 and the IETF Trust's Legal
    Provisions Relating to IETF Documents
    (https://trustee.ietf.org/license-info) in effect on the date of
    publication of this document.  Please review these documents
    carefully, as they describe your rights and restrictions with respect
    to this document.  Code Components extracted from this document must
    include Simplified BSD License text as described in Section 4.e of
    the Trust Legal Provisions and are provided without warranty as
    described in the Simplified BSD License.
 
 Table of Contents
 
    1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
      1.1.  Requirements Language . . . . . . . . . . . . . . . . . .   3
    2.  Reasons for increasing one-to-many traffic patterns  . . . . .   3
      2.1.  Applications  . . . . . . . . . . . . . . . . . . . . . .   3
      2.2.  Overlays  . . . . . . . . . . . . . . . . . . . . . . . .   5
      2.3.  Protocols . . . . . . . . . . . . . . . . . . . . . . . .   5
-   3.  Handling one-to-many traffic using conventional multicast  . .   5
+   3.  Handling one-to-many traffic using conventional multicast  . .   6
      3.1.  Layer 3 multicast . . . . . . . . . . . . . . . . . . . .   6
      3.2.  Layer 2 multicast . . . . . . . . . . . . . . . . . . . .   6
      3.3.  Example use cases . . . . . . . . . . . . . . . . . . . .   8
      3.4.  Advantages and disadvantages  . . . . . . . . . . . . . .   9
    4.  Alternative options for handling one-to-many traffic . . . . .   9
-     4.1.  Minimizing traffic volumes  . . . . . . . . . . . . . . .   9
+     4.1.  Minimizing traffic volumes  . . . . . . . . . . . . . . .  10
      4.2.  Head end replication  . . . . . . . . . . . . . . . . . .  10
      4.3.  BIER  . . . . . . . . . . . . . . . . . . . . . . . . . .  11
      4.4.  Segment Routing . . . . . . . . . . . . . . . . . . . . .  12
    5.  Conclusions . . . . . . . . . . . . . . . . . . . . . . . . .  12
-   6.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  12
+   6.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  13
    7.  Security Considerations . . . . . . . . . . . . . . . . . . .  13
    8.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  13
    9.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  13
      9.1.  Normative References  . . . . . . . . . . . . . . . . . .  13
      9.2.  Informative References  . . . . . . . . . . . . . . . . .  13
    Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  15
 
 1.  Introduction
 
    The volume and importance of one-to-many traffic patterns in data
@@ -150,39 +150,38 @@
    gathering pace.  A possible outcome of this transition will be the
    building of IP data centers in broadcast plants.  Traffic flows in
    the broadcast industry are frequently one-to-many and so if IP data
    centers are deployed in broadcast plants, it is imperative that this
    traffic pattern is supported efficiently in that infrastructure.  In
    fact, a pivotal consideration for broadcasters considering
    transitioning to IP is the manner in which these one-to-many traffic
    flows will be managed and monitored in a data center with an IP
    fabric.
 
-   Arguably one of the (few?) success stories in using conventional IP
-   multicast has been for disseminating market trading data.  For
-   example, IP multicast is commonly used today to deliver stock quotes
-   from the stock exchange to financial services provider and then to
-   the stock analysts or brokerages.  The network must be designed with
-   no single point of failure and in such a way that the network can
-   respond in a deterministic manner to any failure.  Typically,
-   redundant servers (in a primary/backup or live-live mode) send
-   multicast streams into the network, with diverse paths being used
-   across the network.  Another critical requirement is reliability and
-   traceability; regulatory and legal requirements means that the
-   producer of the marketing data must know exactly where the flow was
-   sent and be able to prove conclusively that the data was received
-   within agreed SLAs.  The stock exchange generating the one-to-many
-   traffic and stock analysts/brokerage that receive the traffic will
-   typically have their own data centers.  Therefore, the manner in
-   which one-to-many traffic patterns are handled in these data centers
-   are extremely important, especially given the requirements and
-   constraints mentioned.
+   One of the few success stories in using conventional IP multicast
+   has been for disseminating market trading data.  For example, IP
+   multicast is commonly used today to deliver stock quotes from the
+   stock exchange to financial services providers and then to the stock
+   analysts or brokerages.  The network must be designed with no single
+   point of failure and in such a way that the network can respond in a
+   deterministic manner to any failure.  Typically, redundant servers
+   (in a primary/backup or live-live mode) send multicast streams into
+   the network, with diverse paths being used across the network.
+   Another critical requirement is reliability and traceability;
+   regulatory and legal requirements mean that the producer of the
+   market data must know exactly where the flow was sent and be able to
+   prove conclusively that the data was received within agreed SLAs.
+   The stock exchange generating the one-to-many traffic and the stock
+   analysts/brokerages that receive the traffic will typically have
+   their own data centers.  Therefore, the manner in which one-to-many
+   traffic patterns are handled in these data centers is extremely
+   important, especially given the requirements and constraints
+   mentioned.
 
    Many data center cloud providers offer publish and subscribe
    applications.  There can be numerous publishers and subscribers and
    many message channels within a data center.  With publish and
    subscribe servers, a separate message is sent to each subscriber of
    a publication.  With multicast publish/subscribe, only one message
    is sent, regardless of the number of subscribers.  In a publish/
    subscribe system, client applications, some of which are publishers
    and some of which are subscribers, are connected to a network of
    message brokers that receive publications on a number of topics, and
@@ -227,21 +226,29 @@
    Furthermore, when these protocols are running within an overlay
    network, then it is essential to ensure the messages are delivered
    to all the hosts on the emulated layer 2 segment, regardless of
    physical location within the data center.  The challenges associated
    with optimally delivering ARP and ND messages in data centers have
    attracted considerable attention [RFC6820].  Popular approaches in
    use mostly seek to exploit characteristics of data center networks
    to avoid having to broadcast/multicast these messages, as discussed
    in Section 4.1.
 
+   Some networking protocols are being modified or developed
+   specifically to work well in a data center CLOS environment.  BGP
+   has been extended to work in these types of DC environments and
+   supports multicast well.  RIFT (Routing in Fat Trees) is a new
+   protocol being developed to work efficiently in DC CLOS environments
+   and is also being specified to support multicast addressing and
+   forwarding.
+
 3.  Handling one-to-many traffic using conventional multicast
+
 3.1.  Layer 3 multicast
 
    PIM is the most widely deployed multicast routing protocol and so,
    unsurprisingly, is the primary multicast routing protocol considered
    for use in the data center.  There are three popular flavours of PIM
    that may be used: PIM-SM [RFC4601], PIM-SSM [RFC4607] or PIM-BIDIR
    [RFC5015].  These different modes of PIM trade off the optimality of
    the multicast forwarding tree against the amount of multicast
    forwarding state that must be maintained at routers.  SSM provides
    the most efficient forwarding between sources