Network Working Group                                  I. Minei (Editor)
Internet-Draft                                               K. Kompella
Intended status: Standards Track                        Juniper Networks
Expires: January 10, 2008                           I. Wijnands (Editor)
                                                               B. Thomas
                                                     Cisco Systems, Inc.
                                                            July 9, 2007

   Label Distribution Protocol Extensions for Point-to-Multipoint and
             Multipoint-to-Multipoint Label Switched Paths
                      draft-ietf-mpls-ldp-p2mp-03

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 10, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   This document describes extensions to the Label Distribution Protocol
   (LDP) for the setup of point-to-multipoint (P2MP) and multipoint-to-
   multipoint (MP2MP) Label Switched Paths (LSPs) in Multi-Protocol
   Label Switching (MPLS) networks.  The solution relies on LDP without
   requiring a multicast routing protocol in the network.  Protocol
   elements and procedures for this solution are described for building
   such LSPs in a receiver-initiated manner.  There can be various
   applications for P2MP/MP2MP LSPs, for example IP multicast or support
   for multicast in BGP/MPLS L3VPNs.  Specification of how such
   applications can use an LDP-signaled P2MP/MP2MP LSP is outside the
   scope of this document.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
     1.1.  Conventions used in this document  . . . . . . . . . . . .  4
     1.2.  Terminology  . . . . . . . . . . . . . . . . . . . . . . .  4
   2.  Setting up P2MP LSPs with LDP  . . . . . . . . . . . . . . . .  5
     2.1.  Support for P2MP LSP setup with LDP  . . . . . . . . . . .  5
     2.2.  The P2MP FEC Element . . . . . . . . . . . . . . . . . . .  6
     2.3.  The LDP MP Opaque Value Element  . . . . . . . . . . . . .  7
       2.3.1.  The Generic LSP Identifier . . . . . . . . . . . . . .  8
     2.4.  Using the P2MP FEC Element . . . . . . . . . . . . . . . .  8
       2.4.1.  Label Map  . . . . . . . . . . . . . . . . . . . . . .  9
       2.4.2.  Label Withdraw . . . . . . . . . . . . . . . . . . . . 11
   3.  Shared Trees . . . . . . . . . . . . . . . . . . . . . . . . . 11
   4.  Setting up MP2MP LSPs with LDP . . . . . . . . . . . . . . . . 12
     4.1.  Support for MP2MP LSP setup with LDP . . . . . . . . . . . 13
     4.2.  The MP2MP downstream and upstream FEC Elements . . . . . . 13
     4.3.  Using the MP2MP FEC Elements . . . . . . . . . . . . . . . 14
       4.3.1.  MP2MP Label Map upstream and downstream  . . . . . . . 15
       4.3.2.  MP2MP Label Withdraw . . . . . . . . . . . . . . . . . 17
   5.  The LDP MP Status TLV  . . . . . . . . . . . . . . . . . . . . 18
     5.1.  The LDP MP Status Value Element  . . . . . . . . . . . . . 19
     5.2.  LDP Messages containing LDP MP Status messages . . . . . . 20
       5.2.1.  LDP MP Status sent in LDP notification messages  . . . 20
       5.2.2.  LDP MP Status TLV in Label Mapping Message . . . . . . 20
   6.  Upstream label allocation on a LAN . . . . . . . . . . . . . . 21
     6.1.  LDP Multipoint-to-Multipoint on a LAN  . . . . . . . . . . 21
       6.1.1.  MP2MP downstream forwarding  . . . . . . . . . . . . . 21
       6.1.2.  MP2MP upstream forwarding  . . . . . . . . . . . . . . 22
   7.  Root node redundancy . . . . . . . . . . . . . . . . . . . . . 22
     7.1.  Root node redundancy - procedures for P2MP LSPs  . . . . . 23
     7.2.  Root node redundancy - procedures for MP2MP LSPs . . . . . 23
   8.  Make Before Break (MBB)  . . . . . . . . . . . . . . . . . . . 24
     8.1.  MBB overview . . . . . . . . . . . . . . . . . . . . . . . 24
     8.2.  The MBB Status code  . . . . . . . . . . . . . . . . . . . 25
     8.3.  The MBB capability . . . . . . . . . . . . . . . . . . . . 26
     8.4.  The MBB procedures . . . . . . . . . . . . . . . . . . . . 26
       8.4.1.  Terminology  . . . . . . . . . . . . . . . . . . . . . 26
       8.4.2.  Accepting elements . . . . . . . . . . . . . . . . . . 27
       8.4.3.  Procedures for upstream LSR change . . . . . . . . . . 27
       8.4.4.  Receiving a Label Map with MBB status code . . . . . . 28
       8.4.5.  Receiving a Notification with MBB status code  . . . . 28
       8.4.6.  Node operation for MP2MP LSPs  . . . . . . . . . . . . 29
   9.  Security Considerations  . . . . . . . . . . . . . . . . . . . 29
   10. IANA considerations  . . . . . . . . . . . . . . . . . . . . . 29
   11. Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . 30
   12. Contributing authors . . . . . . . . . . . . . . . . . . . . . 30
   13. References . . . . . . . . . . . . . . . . . . . . . . . . . . 32
     13.1. Normative References . . . . . . . . . . . . . . . . . . . 32
     13.2. Informative References . . . . . . . . . . . . . . . . . . 32
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 33
   Intellectual Property and Copyright Statements . . . . . . . . . . 34

1.  Introduction

   The LDP protocol is described in [1].  It defines mechanisms for
   setting up point-to-point (P2P) and multipoint-to-point (MP2P) LSPs
   in the network.  This document describes extensions to LDP for
   setting up point-to-multipoint (P2MP) and multipoint-to-multipoint
   (MP2MP) LSPs.  These are collectively referred to as multipoint LSPs
   (MP LSPs).  A P2MP LSP allows traffic from a single root (or ingress)
   node to be delivered to a number of leaf (or egress) nodes.  A MP2MP
   LSP allows traffic from multiple ingress nodes to be delivered to
   multiple egress nodes.  Only a single copy of the packet will be sent
   on any link traversed by the MP LSP (see note at end of
   Section 2.4.1).  This is accomplished without the use of a multicast
   protocol in the network.  There can be several MP LSPs rooted at a
   given ingress node, each with its own identifier.

   The solution assumes that the leaf nodes of the MP LSP know the root
   node and identifier of the MP LSP to which they belong.  The
   mechanisms for the distribution of this information are outside the
   scope of this document.  The specification of how an application can
   use a MP LSP signaled by LDP is also outside the scope of this
   document.

   Interested readers may also wish to peruse the requirements draft [9]
   and other documents [8] and [10].

1.1.  Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [2].

1.2.  Terminology

   The following terminology is taken from [9].

   P2P LSP:  An LSP that has one Ingress LSR and one Egress LSR.

   P2MP LSP:  An LSP that has one Ingress LSR and one or more Egress
      LSRs.

   MP2P LSP:  An LSP that has one or more Ingress LSRs and one unique
      Egress LSR.

   MP2MP LSP:  An LSP that connects a set of leaf nodes, acting
      indifferently as ingress or egress.

   MP LSP:  A multipoint LSP, either a P2MP or an MP2MP LSP.

   Ingress LSR:  Source of the P2MP LSP, also referred to as root node.

   Egress LSR:  One of potentially many destinations of an LSP, also
      referred to as leaf node in the case of P2MP and MP2MP LSPs.

   Transit LSR:  An LSR that has one or more directly connected
      downstream LSRs.

   Bud LSR:  An LSR that is an egress but also has one or more directly
      connected downstream LSRs.

2.  Setting up P2MP LSPs with LDP

   A P2MP LSP consists of a single root node, zero or more transit nodes
   and one or more leaf nodes.  Leaf nodes initiate P2MP LSP setup and
   tear-down.  Leaf nodes also install forwarding state to deliver the
   traffic received on a P2MP LSP to wherever it needs to go; how this
   is done is outside the scope of this document.  Transit nodes install
   MPLS forwarding state and propagate the P2MP LSP setup (and tear-
   down) toward the root.  The root node installs forwarding state to
   map traffic into the P2MP LSP; how the root node determines which
   traffic should go over the P2MP LSP is outside the scope of this
   document.

2.1.  Support for P2MP LSP setup with LDP

   Support for the setup of P2MP LSPs is advertised using LDP
   capabilities as defined in [6].  An implementation supporting the
   P2MP procedures specified in this document MUST implement the
   procedures for Capability Parameters in Initialization Messages.

   A new Capability Parameter TLV is defined, the P2MP Capability.
   Following is the format of the P2MP Capability Parameter.

        0                   1                   2                   3
        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |1|0| P2MP Capability (TBD IANA) |     Length (= 1)             |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |1| Reserved    |
       +-+-+-+-+-+-+-+-+

   The P2MP Capability TLV MUST be supported in the LDP Initialization
   Message.  Advertisement of the P2MP Capability indicates support of
   the procedures for P2MP LSP setup detailed in this document.  If the
   peer has not advertised the corresponding capability, then no label
   messages using the P2MP FEC Element should be sent to the peer.

2.2.  The P2MP FEC Element

   For the setup of a P2MP LSP with LDP, we define one new protocol
   entity, the P2MP FEC Element to be used as a FEC Element in the FEC
   TLV.  Note that the P2MP FEC Element does not necessarily identify
   the traffic that must be mapped to the LSP, so from that point of
   view, the use of the term FEC is a misnomer.  The description of the
   P2MP FEC Element follows.

   The P2MP FEC Element consists of the address of the root of the P2MP
   LSP and an opaque value.  The opaque value consists of one or more
   LDP MP Opaque Value Elements.  The opaque value is unique within the
   context of the root node.  The combination of (Root Node Address,
   Opaque Value) uniquely identifies a P2MP LSP within the MPLS network.

   The P2MP FEC Element is encoded as follows:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |P2MP Type (TBD)|        Address Family         | Address Length|
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                       Root Node Address                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |    Opaque Length              |    Opaque Value ...           |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
      ~                                                               ~
      |                                                               |
      |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   Type:  The type of the P2MP FEC Element is to be assigned by IANA.

   Address Family:  Two octet quantity containing a value from ADDRESS
      FAMILY NUMBERS in [3] that encodes the address family for the Root
      LSR Address.

   Address Length:  Length of the Root LSR Address in octets.

   Root Node Address:  A host address encoded according to the Address
      Family field.

   Opaque Length:  The length of the Opaque Value, in octets.

   Opaque Value:  One or more MP Opaque Value elements, uniquely
      identifying the P2MP LSP in the context of the Root Node.  This is
      described in the next section.

   If the Address Family is IPv4, the Address Length MUST be 4; if the
   Address Family is IPv6, the Address Length MUST be 16.  No other
   Address Lengths are defined at present.

   If the Address Length doesn't match the defined length for the
   Address Family, the receiver SHOULD abort processing the message
   containing the FEC Element, and send an "Unknown FEC" Notification
   message to its LDP peer signaling an error.

   If a FEC TLV contains a P2MP FEC Element, the P2MP FEC Element MUST
   be the only FEC Element in the FEC TLV.
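
   As a concrete illustration of the layout above, the following Python
   sketch packs a P2MP FEC Element with an IPv4 root.  The FEC Element
   type is not yet assigned by IANA, so the value 0x06 used here is
   purely a placeholder assumption, as is the example opaque value.

```python
import socket
import struct

def encode_p2mp_fec_element(root_addr: str, opaque_value: bytes,
                            p2mp_type: int = 0x06) -> bytes:
    """Pack a P2MP FEC Element per the figure above (IPv4 root).

    The FEC Element type is TBD by IANA; 0x06 is a placeholder.
    Address Family 1 = IPv4, so Address Length MUST be 4.
    """
    addr = socket.inet_aton(root_addr)                 # 4-octet root address
    header = struct.pack("!BHB", p2mp_type, 1, len(addr))
    opq_len = struct.pack("!H", len(opaque_value))     # Opaque Length
    return header + addr + opq_len + opaque_value

# Opaque value: one Generic LSP Identifier element (Type 1, Length 4,
# identifier 42) -- see Section 2.3.1.
elem = encode_p2mp_fec_element("192.0.2.1", b"\x01\x00\x04\x00\x00\x00\x2a")
```

   A decoder would read the fixed 4-octet header, then Address Length
   octets of root address, then Opaque Length octets of opaque value.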

2.3.  The LDP MP Opaque Value Element

   The LDP MP Opaque Value Element is used in the P2MP and MP2MP FEC
   Elements defined in subsequent sections.  It carries information that
   is meaningful to leaf (and bud) LSRs, but need not be interpreted by
   non-leaf LSRs.

   The LDP MP Opaque Value Element is encoded as follows:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Type(TBD)     | Length                        | Value ...     |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
       ~                                                               ~
       |                                                               |
       |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                               |

       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Type:  The type of the LDP MP Opaque Value Element is to be assigned
      by IANA.

   Length:  The length of the Value field, in octets.

   Value:  String of Length octets, to be interpreted as specified by
      the Type field.

2.3.1.  The Generic LSP Identifier

   The generic LSP identifier is a type of Opaque Value Element encoded
   as follows:

   Type:  1 (to be assigned by IANA)

   Length:  4

   Value:  A 32-bit integer, unique in the context of the root, as
      identified by the root's address.

   This type of Opaque Value Element is recommended when mapping of
   traffic to LSPs is non-algorithmic, and done by means outside LDP.
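
   A minimal encoding sketch for this element, assuming the Type value
   of 1 that the text anticipates IANA will assign:

```python
import struct

def encode_generic_lsp_id(lsp_id: int) -> bytes:
    """Generic LSP Identifier opaque element: 1-octet Type of 1
    (assumed, pending IANA assignment), 2-octet Length of 4, then the
    32-bit identifier in network byte order."""
    return struct.pack("!BHI", 1, 4, lsp_id)
```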

2.4.  Using the P2MP FEC Element

   This section defines the rules for the processing and propagation of
   the P2MP FEC Element.  The following notation is used in the
   processing rules:

   1.  P2MP FEC Element <X, Y>: a FEC Element with Root Node Address X
       and Opaque Value Y.

   2.  P2MP Label Map <X, Y, L>: a Label Map message with a FEC TLV with
       a single P2MP FEC Element <X, Y> and Label TLV with label L.

   3.  P2MP Label Withdraw <X, Y, L>: a Label Withdraw message with a
       FEC TLV with a single P2MP FEC Element <X, Y> and Label TLV with
       label L.

   4.  P2MP LSP <X, Y> (or simply <X, Y>): a P2MP LSP with Root Node
       Address X and Opaque Value Y.

   5.  The notation L' -> {<I1, L1> <I2, L2> ..., <In, Ln>} on LSR X
       means that on receiving a packet with label L', X makes n copies
       of the packet.  For copy i of the packet, X swaps L' with Li and
       sends it out over interface Ii.

   The procedures below are organized by the role which the node plays
   in the P2MP LSP.  Node Z knows that it is a leaf node by a discovery
   process which is outside the scope of this document.  During the
   course of protocol operation, the root node recognizes its role
   because it owns the Root Node Address.  A transit node is any node
   (other than the root node) that receives a P2MP Label Map message
   (i.e., one that has leaf nodes downstream of it).

   Note that a transit node (and indeed the root node) may also be a
   leaf node.

2.4.1.  Label Map

   The following lists procedures for generating and processing P2MP
   Label Map messages for nodes that participate in a P2MP LSP.  An LSR
   should apply those procedures that apply to it, based on its role in
   the P2MP LSP.

   For the approach described here we use downstream assigned labels.
   On Ethernet networks this may be less optimal, see Section 6.

2.4.1.1.  Determining one's 'upstream LSR'

   A node Z that is part of P2MP LSP <X, Y> determines the LDP peer U
   which lies on the best path from Z to the root node X. If there is
   more than one such LDP peer, only one of them is picked.  U is Z's
   "Upstream LSR" for <X, Y>.

   When there are several candidate upstream LSRs, the LSR MAY select
   one upstream LSR using the following procedure:

   1.  The candidate upstream LSRs are numbered from lower to higher IP
       address.

   2.  The following hash is performed: H = (Sum Opaque value) modulo N,
       where N is the number of candidate upstream LSRs.

   3.  The selected upstream LSR U is the LSR that has the number H.

   This allows for load balancing of a set of LSPs among a set of
   candidate upstream LSRs, while ensuring that on a LAN interface a
   single upstream LSR is selected.
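
   The three steps above can be sketched as follows.  Treating "Sum
   Opaque value" as the sum of the opaque value's octets is an
   assumption of this sketch, since the summation is not pinned down
   here.

```python
def select_upstream_lsr(candidates, opaque_value: bytes) -> str:
    """Pick Z's upstream LSR U for <X, Y> from the candidate peers.

    candidates: IPv4 addresses (dotted-quad strings) of the LDP peers
    on the best path from Z to the root X.
    """
    # Step 1: number candidates from lower to higher IP address.
    ordered = sorted(candidates,
                     key=lambda ip: tuple(int(o) for o in ip.split(".")))
    # Step 2: H = (Sum Opaque value) modulo N.
    h = sum(opaque_value) % len(ordered)
    # Step 3: the selected upstream LSR U is candidate number H.
    return ordered[h]
```

   Because every LSR orders the candidates and hashes the opaque value
   identically, all LSRs on a LAN pick the same upstream LSR for a
   given <X, Y>, while different LSPs may hash to different upstream
   LSRs, spreading load.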

2.4.1.2.  Leaf Operation

   A leaf node Z of P2MP LSP <X, Y> determines its upstream LSR U for
   <X, Y> as per Section 2.4.1.1, allocates a label L, and sends a P2MP
   Label Map <X, Y, L> to U.

2.4.1.3.  Transit Node operation

   Suppose a transit node Z receives a P2MP Label Map <X, Y, L> from LDP
   peer T. Z checks whether it already has state for <X, Y>.  If not, Z
   allocates a label L', and installs state to swap L' with L over
   interface I associated with peer T. Z also determines its upstream
   LSR U for <X, Y> as per Section 2.4.1.1, and sends a P2MP Label Map
   <X, Y, L'> to U.

   If Z already has state for <X, Y>, then Z does not send a Label Map
   message for P2MP LSP <X, Y>.  All that Z needs to do in this case is
   update its forwarding state.  Assuming its old forwarding state was
   L'-> {<I1, L1> <I2, L2> ..., <In, Ln>}, its new forwarding state
   becomes L'-> {<I1, L1> <I2, L2> ..., <In, Ln>, <I, L>}.
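
   The transit behavior just described can be sketched as below, where
   alloc_label() and send_label_map() are hypothetical stand-ins for
   the LSR's label manager and its LDP session to the upstream LSR U
   selected per Section 2.4.1.1.

```python
# (root X, opaque Y) -> (incoming label L', [(iface, label), ...])
p2mp_state = {}

def transit_receive_label_map(x, y, label, iface, alloc_label,
                              send_label_map):
    """Transit handling of a P2MP Label Map <X, Y, L> arriving on iface."""
    key = (x, y)
    if key not in p2mp_state:
        # First downstream leaf: allocate L' and advertise it upstream.
        lprime = alloc_label()
        p2mp_state[key] = (lprime, [(iface, label)])
        send_label_map(x, y, lprime)
    else:
        # State exists: only extend the replication list; no new
        # Label Map is sent upstream.
        lprime, nexthops = p2mp_state[key]
        nexthops.append((iface, label))

# Example: two leaves of <X, Y> reachable via interfaces I1 and I2.
labels = iter([100, 101])
sent = []
transit_receive_label_map("X", b"Y", 20, "I1", lambda: next(labels),
                          lambda x, y, l: sent.append((x, y, l)))
transit_receive_label_map("X", b"Y", 21, "I2", lambda: next(labels),
                          lambda x, y, l: sent.append((x, y, l)))
```

   After both Label Maps, the forwarding state is L' = 100 ->
   {<I1, 20> <I2, 21>} and only one Label Map was sent upstream.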

2.4.1.4.  Root Node Operation

   Suppose the root node Z receives a P2MP Label Map <X, Y, L> from peer
   T. Z checks whether it already has forwarding state for <X, Y>.  If
   not, Z creates forwarding state to push label L onto the traffic that
   Z wants to forward over the P2MP LSP (how this traffic is determined
   is outside the scope of this document).

   If Z already has forwarding state for <X, Y>, then Z adds "push label
   L, send over interface I" to the nexthop, where I is the interface
   associated with peer T.

2.4.2.  Label Withdraw

   The following lists procedures for generating and processing P2MP
   Label Withdraw messages for nodes that participate in a P2MP LSP.  An
   LSR should apply those procedures that apply to it, based on its role
   in the P2MP LSP.

2.4.2.1.  Leaf Operation

   If a leaf node Z discovers (by means outside the scope of this
   document) that it is no longer a leaf of the P2MP LSP, it SHOULD send
   a Label Withdraw <X, Y, L> to its upstream LSR U for <X, Y>, where L
   is the label it had previously advertised to U for <X, Y>.

2.4.2.2.  Transit Node Operation

   If a transit node Z receives a Label Withdraw message <X, Y, L> from
   a node W, it deletes label L from its forwarding state, and sends a
   Label Release message with label L to W.

   If deleting L from Z's forwarding state for P2MP LSP <X, Y> results
   in no state remaining for <X, Y>, then Z propagates the Label
   Withdraw for <X, Y> to its upstream T, by sending a Label Withdraw
   <X, Y, L1>, where L1 is the label Z had previously advertised to T
   for <X, Y>.

2.4.2.3.  Root Node Operation

   The procedure when the root node of a P2MP LSP receives a Label
   Withdraw message is the same as for transit nodes, except that it
   does not propagate the Label Withdraw upstream (as it has no
   upstream).

2.4.2.4.  Upstream LSR change

   If, for a given node Z participating in a P2MP LSP <X, Y>, the
   upstream LSR changes, say from U to U', then Z MUST update its
   forwarding state by deleting the state for label L, allocating a new
   label, L', for <X,Y>, and installing the forwarding state for L'.  In
   addition Z MUST send a Label Map <X, Y, L'> to U' and send a Label
   Withdraw <X, Y, L> to U.
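
   A sketch of this sequence of actions follows; 'state', 'alloc_label'
   and the 'ldp' session object (with label_map()/label_withdraw()
   methods) are hypothetical stand-ins for Z's local label table, label
   manager, and LDP sessions.

```python
def upstream_lsr_change(x, y, old_u, new_u, state, alloc_label, ldp):
    """Z's handling when its upstream LSR for <X, Y> changes U -> U'."""
    old_label = state.pop((x, y))       # delete state for old label L
    new_label = alloc_label()           # allocate a new label L' for <X, Y>
    state[(x, y)] = new_label           # install forwarding state for L'
    ldp.label_map(new_u, x, y, new_label)        # Label Map <X, Y, L'> to U'
    ldp.label_withdraw(old_u, x, y, old_label)   # Label Withdraw <X, Y, L> to U

class _LDP:
    """Records the messages the sketch would send."""
    def __init__(self):
        self.calls = []
    def label_map(self, *args):
        self.calls.append(("map",) + args)
    def label_withdraw(self, *args):
        self.calls.append(("withdraw",) + args)

session = _LDP()
state = {("X", "Y"): 30}
upstream_lsr_change("X", "Y", "U", "U'", state, lambda: 31, session)
```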

3.  Shared Trees

   The mechanism described above shows how to build a tree with a single
   root and multiple leaves, i.e., a P2MP LSP.  One can use essentially
   the same mechanism to build Shared Trees with LDP.  A P2MP LSP allows Shared Tree can
   be used by a group of routers that want to multicast traffic from among
   themselves, i.e., each node is both a single root (or ingress) node to be delivered to (when it sources
   traffic) and a number of leaf (or egress) nodes. node (when any other member of the group sources
   traffic).  A MP2MP
   LSP allows traffic from multiple ingress nodes to be delivered Shared Tree offers similar functionality to
   multiple egress nodes.  Only a single copy of the packet will be sent
   on any link traversed by MP2MP LSP,
   but the MP LSP (see note at end of
   Section 2.3.1).  This underlying multicasting mechanism uses a P2MP LSP.  One
   example where a Shared Tree is accomplished without the use useful is video-conferencing.  Another
   is Virtual Private LAN Service (VPLS) [7], where for some types of
   traffic, each device participating in a multicast
   protocol VPLS must send packets to
   every other device in the network.  There can be several MP LSPs that VPLS.

   One way to build a Shared Tree is to build an LDP P2MP LSP rooted at
   a
   given ingress node, each with its own identifier.

   The solution assumes that common point, the leaf nodes of Shared Root (SR), and whose leaves are all the MP LSP know
   members of the root
   node and identifier group.  Each member of the MP LSP Shared Tree unicasts
   traffic to which they belong.  The
   mechanisms the SR (using, for example, the distribution MP2P LSP created by the
   unicast LDP FEC advertised by the SR); the SR then splices this
   traffic into the LDP P2MP LSP.  The SR may be (but need not be) a
   member of this information are outside the
   scope multicast group.

   A major advantage of this document.  The specification of how an application can
   use a MP LSP signaled by LDP approach is also outside that no further protocol
   mechanisms beyond the scope of this
   document.

   Interested readers may also wish one already described are needed to peruse set up a
   Shared Tree.  Furthermore, a Shared Tree is very efficient in terms
   of the requirement draft [4]
   and other documents [8] and [9].

1.1.  Conventions used multicast state in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", the network, and "OPTIONAL" is reasonably efficient in this
   document are
   terms of the bandwidth required to be interpreted as described in RFC 2119 [2].

1.2.  Terminology

   The following terminology send traffic.

   A property of this approach is taken from [4].

   P2P LSP:  An LSP that has one Ingress LSR and one Egress LSR.

   P2MP LSP:  An LSP that has one Ingress LSR a sender will receive its own
   packets as part of the multicast; thus a sender must be prepared to
   recognize and one or more Egress
      LSRs.

   MP2P LSP:  A LSP discard packets that it itself has one or more Ingress LSRs and one unique
      Egress LSR.

   MP2MP LSP:  A LSP that connects a set of leaf nodes, acting
      indifferently as ingress or egress.

   MP LSP:  A multipoint LSP, either sent.  For a P2MP or an MP2MP LSP.

   Ingress LSR:  Source number
   of applications (for example, VPLS), this requirement is easy to
   meet.  Another consideration is the P2MP LSP, also referred various techniques that can be
   used to as root node.

   Egress LSR:  One of potentially many destinations of an LSP, also
      referred splice unicast LDP MP2P LSPs to as leaf node in the case of LDP P2MP and MP2MP LSPs.

   Transit LSR:  An LSR that has one or more directly connected
      downstream LSRs.

   Bud LSR:  An LSR that is an egress but also has one or more directly
      connected downstream LSRs.

2. LSP; these will
   be described in a later revision.

4.  Setting up P2MP MP2MP LSPs with LDP

   A

   An MP2MP LSP is much like a P2MP LSP in that it consists of a single
   root node, zero or more transit nodes and one or more leaf nodes.  Leaf nodes initiate P2MP LSP setup and
   tear-down.  Leaf nodes also install forwarding state to deliver the
   traffic received on a P2MP LSP to wherever it needs to go; how this
   is done is outside LSRs
   acting equally as Ingress or Egress LSR.  A leaf node participates in
   the scope setup of this document.  Transit nodes install
   MPLS forwarding state and propagate the P2MP an MP2MP LSP setup (and tear-
   down) toward the root.  The root node installs forwarding state to
   map traffic into the P2MP LSP; how the root node determines by establishing both a downstream LSP,
   which
   traffic should go over the P2MP LSP is outside the scope of this
   document.

   For the setup of much like a P2MP LSP with LDP, we define one new protocol
   entity, from the P2MP FEC Element to be root, and an upstream LSP
   which is used in the FEC TLV.  The
   description of the P2MP FEC Element follows.

2.1.  The P2MP FEC Element

   The P2MP FEC Element consists of to send traffic toward the address of root and other leaf nodes.
   Transit nodes support the root of setup by propagating the P2MP upstream and
   downstream LSP setup toward the root and an opaque value.  The opaque value consists of one or more
   LDP MP Opaque Value Elements.  The opaque value is unique within installing the
   context necessary
   MPLS forwarding state.  The transmission of packets from the root node.  The combination
   node of (Root Node Address,
   Opaque Value) uniquely identifies a P2MP MP2MP LSP within the MPLS network.

   The P2MP FEC element is encoded as follows:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |P2MP Type (TBD)|        Address Family         | Address Length|
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                       Root Node Address                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |    Opaque Length              |    Opaque Value ...           |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
      ~                                                               ~
      |                                                               |
      |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Type:  The type of to the P2MP FEC element receivers is identical to be assigned by IANA.

   Address Family:  Two octet quantity containing that for a value P2MP
   LSP.  Traffic from ADDRESS
      FAMILY NUMBERS in [3] that encodes a leaf node follows the address family for upstream LSP toward the Root
      LSR Address.

   Address Length:  Length of
   root node and branches downward along the Root LSR Address in octets.

   Root Node Address:  A host address encoded according downstream LSP as required
   to reach other leaf nodes.  Mapping traffic to the Address
      Family field.

   Opaque Length:  The length of the Opaque Value, in octets.

   Opaque Value:  One or more MP Opaque Value elements, uniquely
      identifying the P2MP MP2MP LSP in may
   happen at any leaf node.  How that mapping is established is outside
   the context scope of the Root Node.  This this document.

   Due to how a MP2MP LSP is
      described in the next section.

   If the Address Family built a leaf LSR that is IPv4, the Address Length MUST be 4; if sending packets on
   the
   Address Family MP2MP LSP does not receive its own packets.  There is IPv6, the Address Length MUST be 16.  No other
   Address Lengths are defined at present.

   If also no
   additional mechanism needed on the Address Length doesn't root or transit LSR to match the defined length for the
   Address Family, the receiver SHOULD abort processing the message
   containing the FEC Element, and send an "Unknown FEC" Notification
   message
   upstream traffic to its LDP peer signaling an error.

   If the downstream forwarding state.  Packets that
   are forwarded over a FEC TLV contains MP2MP LSP will not traverse a P2MP FEC Element, the P2MP FEC Element MUST
   be link more than
   once, with the only FEC Element exception of LAN links which are discussed in the FEC TLV.

2.2.  The LDP MP Opaque Value Element

   The
   Section 4.3.1

4.1.  Support for MP2MP LSP setup with LDP

   Support for the setup of MP2MP LSPs is advertised using LDP
   capabilities as defined in [6].  An implementation supporting the
   MP2MP procedures specified in this document MUST implement the
   procedures for Capability Parameters in Initialization Messages.

   A new Capability Parameter TLV is defined, the MP2MP Capability.
   Following is the format of the MP2MP Capability Parameter.

        0                   1                   2                   3
        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |1|0| MP2MP Capability (TBD IANA) |    Length (= 1)             |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |1| Reserved    |
       +-+-+-+-+-+-+-+-+

   The MP2MP Capability TLV MUST be supported in the LDP Initialization
   Message.  Advertisement of the MP2MP Capability indicates support of
   the procedures for MP2MP LSP setup detailed in this document.  If
   the peer has not advertised the corresponding capability, then no
   label messages using the MP2MP upstream and downstream FEC Elements
   should be sent to the peer.
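   The capability advertisement above is a fixed five-octet TLV.  As an
   informal illustration only (not part of this specification), the
   following Python sketch packs the U and F bits, a placeholder type
   code standing in for the TBD IANA assignment, the one-octet Length,
   and the S bit of the value octet:

```python
import struct

# Placeholder only: the real MP2MP Capability type code is TBD (IANA).
MP2MP_CAPABILITY_TYPE = 0x0999

def encode_mp2mp_capability(s_bit=1):
    """Encode the MP2MP Capability Parameter shown in the figure.

    First 16 bits: U=1, F=0, then the 14-bit type; next 16 bits:
    Length (= 1); final octet: S bit followed by 7 reserved bits.
    """
    type_field = (1 << 15) | (0 << 14) | (MP2MP_CAPABILITY_TYPE & 0x3FFF)
    value = (s_bit & 1) << 7  # S bit, Reserved bits zero
    return struct.pack("!HHB", type_field, 1, value)

tlv = encode_mp2mp_capability()
assert len(tlv) == 5
assert tlv[0] & 0x80   # U bit set
assert tlv[4] == 0x80  # S bit set, reserved octet bits zero
```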

4.2.  The MP2MP downstream and upstream FEC Elements

   For the setup of a MP2MP LSP with LDP we define 2 new protocol
   entities, the MP2MP downstream FEC and upstream FEC Element.  Both
   elements will be used as FEC Elements in the FEC TLV.  Note that the
   MP2MP FEC Elements do not necessarily identify the traffic that must
   be mapped to the LSP, so from that point of view, the use of the
   term FEC is a misnomer.  The description of the MP2MP FEC Elements
   follows.

   The structure, encoding and error handling for the MP2MP downstream
   and upstream FEC Elements are the same as for the P2MP FEC Element
   described in Section 2.2.  The difference is that two new FEC types
   are used: MP2MP downstream type (TBD) and MP2MP upstream type (TBD).

   If a FEC TLV contains an MP2MP FEC Element, the MP2MP FEC Element
   MUST be the only FEC Element in the FEC TLV.

4.3.  Using the MP2MP FEC Elements

   This section defines the rules for the processing and propagation of
   the MP2MP FEC Elements.  The following notation is used in the
   processing rules:

   1.  MP2MP downstream LSP <X, Y> (or simply downstream <X, Y>): an
       MP2MP LSP downstream path with root node address X and opaque
       value Y.

   2.  MP2MP upstream LSP <X, Y, D> (or simply upstream <X, Y, D>): a
       MP2MP LSP upstream path for downstream node D with root node
       address X and opaque value Y.

   3.  MP2MP downstream FEC Element <X, Y>: a FEC Element with root
       node address X and opaque value Y used for a downstream MP2MP
       LSP.

   4.  MP2MP upstream FEC Element <X, Y>: a FEC Element with root node
       address X and opaque value Y used for an upstream MP2MP LSP.

   5.  MP2MP Label Map downstream <X, Y, L>: a Label Map message with a
       FEC TLV with a single MP2MP downstream FEC Element <X, Y> and
       label TLV with label L.

   6.  MP2MP Label Map upstream <X, Y, Lu>: a Label Map message with a
       FEC TLV with a single MP2MP upstream FEC Element <X, Y> and
       label TLV with label Lu.

   7.  MP2MP Label Withdraw downstream <X, Y, L>: a Label Withdraw
       message with a FEC TLV with a single MP2MP downstream FEC
       Element <X, Y> and label TLV with label L.

   8.  MP2MP Label Withdraw upstream <X, Y, Lu>: a Label Withdraw
       message with a FEC TLV with a single MP2MP upstream FEC Element
       <X, Y> and label TLV with label Lu.

   The procedures below are organized by the role which the node plays
   in the MP2MP LSP.  Node Z knows that it is a leaf node by a
   discovery process which is outside the scope of this document.
   During the course of the protocol operation, the root node
   recognizes its role because it owns the root node address.  A
   transit node is any node (other than the root node) that receives a
   MP2MP Label Map message (i.e., one that has leaf nodes downstream of
   it).

   Note that a transit node (and indeed the root node) may also be a
   leaf node and the root node does not have to be an ingress LSR or
   leaf of the MP2MP LSP.

4.3.1.  MP2MP Label Map upstream and downstream

   The following lists procedures for generating and processing MP2MP
   Label Map messages for nodes that participate in a MP2MP LSP.  An
   LSR should apply those procedures that apply to it, based on its
   role in the MP2MP LSP.

   For the approach described here if there are several receivers for a
   MP2MP LSP on a LAN, packets are replicated over the LAN.  This may
   not be optimal; optimizing this case is for further study, see [4].

4.3.1.1.  Determining one's upstream MP2MP LSR

   Determining the upstream LDP peer U for a MP2MP LSP <X, Y> follows
   the procedure for a P2MP LSP described in Section 2.4.1.1.
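   As an informal illustration (not normative text), the referenced
   selection procedure -- number the candidate upstream LSRs from lower
   to higher IP address, then pick entry H = (sum of the opaque value)
   modulo N -- can be sketched as follows; representing addresses as
   tuples of integers is purely an artifact of the sketch:

```python
def select_upstream_lsr(candidates, opaque_value):
    """Pick one upstream LSR for <X, Y> among equal-cost candidates.

    Candidates are ordered from lower to higher IP address, then
    H = (sum of the opaque value octets) modulo N selects one entry.
    This spreads a set of LSPs over the candidate upstream LSRs while
    all LSRs on a LAN make the same choice for a given <X, Y>.
    """
    ordered = sorted(candidates)          # lower to higher IP address
    h = sum(opaque_value) % len(ordered)  # hash over the opaque value
    return ordered[h]

# Three candidate upstream LSRs on the path toward root X:
lsrs = [(10, 0, 0, 2), (10, 0, 0, 1), (10, 0, 0, 3)]
assert select_upstream_lsr(lsrs, bytes([0, 0, 0, 7])) == (10, 0, 0, 2)
```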

4.3.1.2.  Determining one's downstream MP2MP LSR

   A LDP peer U which receives a MP2MP Label Map downstream from a LDP
   peer D will treat D as downstream MP2MP LSR.

4.3.1.3.  MP2MP leaf node operation

   A leaf node Z of a MP2MP LSP <X, Y> determines its upstream LSR U
   for <X, Y> as per Section 4.3.1.1, allocates a label L, and sends a
   MP2MP Label Map downstream <X, Y, L> to U.

   Leaf node Z expects an MP2MP Label Map upstream <X, Y, Lu> from node
   U in response to the MP2MP Label Map downstream it sent to node U.
   Z checks whether it already has forwarding state for upstream <X,
   Y>.  If not, Z creates forwarding state to push label Lu onto the
   traffic that Z wants to forward over the MP2MP LSP.  How it
   determines what traffic to forward on this MP2MP LSP is outside the
   scope of this document.

4.3.1.4.  MP2MP transit node operation

   When node Z receives a MP2MP Label Map downstream <X, Y, L> from LDP
   peer D associated with interface I, it checks whether it has
   forwarding state for downstream <X, Y>.  If not, Z allocates a label
   L' and installs downstream forwarding state to swap label L' with
   label L over interface I.  Z also determines its upstream LSR U for
   <X, Y> as per Section 4.3.1.1, and sends a MP2MP Label Map
   downstream <X, Y, L'> to U.

   If Z already has forwarding state for downstream <X, Y>, all that Z
   needs to do is update its forwarding state.  Assuming its old
   forwarding state was L'-> {<I1, L1> <I2, L2> ..., <In, Ln>}, its new
   forwarding state becomes L'-> {<I1, L1> <I2, L2> ..., <In, Ln>, <I,
   L>}.
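   The downstream state update above can be sketched informally as
   follows (a non-normative illustration; the label allocator and the
   state layout are inventions of the sketch, not part of LDP):

```python
_next = [100]
def allocate_label():
    """Mock local label allocator for the sketch."""
    _next[0] += 1
    return _next[0]

def add_downstream_branch(state, fec, iface, label):
    """Update downstream forwarding state for <X, Y> at a transit node.

    state maps a FEC <X, Y> to (L', [(I1, L1), ..., (In, Ln)]): the
    locally allocated incoming label L' and the outgoing branches.  A
    new Label Map downstream <X, Y, L> on interface I adds the branch
    <I, L>; if the FEC is new, L' is allocated first (and a Label Map
    downstream <X, Y, L'> would then be sent to the upstream LSR U).
    """
    if fec not in state:
        state[fec] = (allocate_label(), [])
    in_label, branches = state[fec]
    branches.append((iface, label))
    return in_label

state = {}
lp = add_downstream_branch(state, ("X", "Y"), "I1", 17)
assert add_downstream_branch(state, ("X", "Y"), "I2", 18) == lp
assert state[("X", "Y")][1] == [("I1", 17), ("I2", 18)]
```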

   Node Z checks whether it already has forwarding state for upstream
   <X, Y, D>.  If it does, then no further action needs to happen.  If
   it does not, it allocates a label Lu and creates a new label swap
   for Lu from the label swap(s) from the forwarding state downstream
   <X, Y>, omitting the swap on interface I for node D.  This allows
   upstream traffic to follow the MP2MP tree down to other node(s)
   except the node from which Z received the MP2MP Label Map downstream
   <X, Y, L>.  Node Z determines the downstream MP2MP LSR as per
   Section 4.3.1.2, and sends a MP2MP Label Map upstream <X, Y, Lu> to
   node D.

   Transit node Z will also receive a MP2MP Label Map upstream <X, Y,
   Lu> in response to the MP2MP Label Map downstream sent to node U
   associated with interface Iu.  Node Z will add label swap Lu over
   interface Iu to the forwarding state upstream <X, Y, D>.  This
   allows packets to go up the tree towards the root node.
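   The derivation of the upstream state can be sketched informally (a
   non-normative illustration): the swaps for upstream <X, Y, D> are
   the downstream branches of <X, Y> minus the branch toward D, so a
   sender never receives its own packets back.

```python
def upstream_swaps(downstream_branches, recv_iface):
    """Build the label swaps for upstream state <X, Y, D>.

    Traffic received with label Lu is replicated on every downstream
    branch of <X, Y> except the one on interface I toward node D,
    i.e., the node the packet came from.
    """
    return [(i, l) for (i, l) in downstream_branches if i != recv_iface]

branches = [("I1", 20), ("I2", 21), ("I3", 22)]
# Upstream state for the node reached over I2 omits that branch:
assert upstream_swaps(branches, "I2") == [("I1", 20), ("I3", 22)]
```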

4.3.1.5.  MP2MP root node operation

4.3.1.5.1.  Root node is also a leaf

   Suppose root/leaf node Z receives a MP2MP Label Map downstream <X,
   Y, L> from node D associated with interface I.  Z checks whether it
   already has forwarding state downstream <X, Y>.  If not, Z creates
   forwarding state for downstream to push label L on traffic that Z
   wants to forward down the MP2MP LSP.  How it determines what traffic
   to forward on this MP2MP LSP is outside the scope of this document.
   If Z already has forwarding state for downstream <X, Y>, then Z will
   add the label push for L over interface I to it.

   Node Z checks if it has forwarding state for upstream <X, Y, D>.  If
   not, Z allocates a label Lu and creates upstream forwarding state to
   push Lu with the label push(s) from the forwarding state downstream
   <X, Y>, except the push on interface I for node D.  This allows
   upstream traffic to go down the MP2MP to other node(s), except the
   node from which the traffic was received.  Node Z determines the
   downstream MP2MP LSR as per Section 4.3.1.2, and sends a MP2MP Label
   Map upstream <X, Y, Lu> to node D.  Since Z is the root of the tree,
   Z will not send a MP2MP downstream map and will not receive a MP2MP
   upstream map.

4.3.1.5.2.  Root node is not a leaf

   Suppose the root node Z receives a MP2MP Label Map downstream <X, Y,
   L> from node D associated with interface I.  Z checks whether it
   already has forwarding state for downstream <X, Y>.  If not, Z
   creates downstream forwarding state and installs an outgoing label L
   over interface I.  If Z already has forwarding state for downstream
   <X, Y>, then Z will add label L over interface I to the existing
   state.

   Node Z checks if it has forwarding state for upstream <X, Y, D>.  If
   not, Z allocates a label Lu and creates forwarding state to swap Lu
   with the label swap(s) from the forwarding state downstream <X, Y>,
   except the swap for node D.  This allows upstream traffic to go down
   the MP2MP to other node(s), except the node it was received from.
   Root node Z determines the downstream MP2MP LSR D as per
   Section 4.3.1.2, and sends a MP2MP Label Map upstream <X, Y, Lu> to
   it.  Since Z is the root of the tree, Z will not send a MP2MP
   downstream map and will not receive a MP2MP upstream map.

4.3.2.  MP2MP Label Withdraw

   The following lists procedures for generating and processing MP2MP
   Label Withdraw messages for nodes that participate in a MP2MP LSP.
   An LSR should apply those procedures that apply to it, based on its
   role in the MP2MP LSP.

4.3.2.1.  MP2MP leaf operation

   If a leaf node Z discovers (by means outside the scope of this
   document) that it is no longer a leaf of the MP2MP LSP, it SHOULD
   send a downstream Label Withdraw <X, Y, L> to its upstream LSR U for
   <X, Y>, where L is the label it had previously advertised to U for
   <X,Y>.

   Leaf node Z expects the upstream router U to respond by sending a
   downstream label release for L and an upstream Label Withdraw for
   <X, Y, Lu> to remove Lu from the upstream state.  Node Z will remove
   label Lu from its upstream state and send a label release message
   with label Lu to U.

4.3.2.2.  MP2MP transit node operation

   If a transit node Z receives a downstream label withdraw message <X,
   Y, L> from node D, it deletes label L from its forwarding state
   downstream <X, Y> and from all its upstream states for <X, Y>.  Node
   Z sends a label release message with label L to D.  Since node D is
   no longer part of the downstream forwarding state, Z cleans up the
   forwarding state upstream <X, Y, D> and sends an upstream Label
   Withdraw for <X, Y, Lu> to D.

   If deleting L from Z's forwarding state for downstream <X, Y>
   results in no state remaining for <X, Y>, then Z propagates the
   Label Withdraw <X, Y, L> to its upstream node U for <X,Y>.
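   The branch-removal step and the "propagate when no state remains"
   rule can be sketched informally as follows (a non-normative
   illustration; the state layout is an invention of the sketch):

```python
def process_downstream_withdraw(state, fec, iface):
    """Handle a MP2MP Label Withdraw downstream <X, Y, L> from node D.

    Remove D's branch from downstream <X, Y>; if no branches remain,
    the whole entry is removed and the withdraw must be propagated to
    the upstream node U.  Returns True when propagation is required.
    """
    in_label, branches = state[fec]
    branches[:] = [(i, l) for (i, l) in branches if i != iface]
    if not branches:
        del state[fec]   # no state remains for <X, Y>
        return True      # send Label Withdraw <X, Y, L'> to U
    return False

state = {("X", "Y"): (101, [("I1", 20), ("I2", 21)])}
assert process_downstream_withdraw(state, ("X", "Y"), "I1") is False
assert process_downstream_withdraw(state, ("X", "Y"), "I2") is True
assert ("X", "Y") not in state
```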

4.3.2.3.  MP2MP root node operation

   The procedure when the root node of a MP2MP LSP receives a label
   withdraw message is the same as for transit nodes, except that the
   root node would not propagate the Label Withdraw upstream (as it has
   no upstream).

4.3.2.4.  MP2MP Upstream LSR change

   The procedure for changing the upstream LSR is the same as
   documented in Section 2.4.2.4, except it is applied to MP2MP FECs,
   using the procedures described in Section 4.3.1 through
   Section 4.3.2.3.

5.  The LDP MP Status TLV

   An LDP MP capable router MAY use an LDP MP Status TLV to indicate
   additional status for a MP LSP to its remote peers.  This includes
   signaling to peers that are either upstream or downstream of the LDP
   MP capable router.  The value of the LDP MP status TLV will remain
   opaque to LDP and MAY encode one or more status elements.

   The LDP MP Status TLV is encoded as follows:

        0                   1                   2                   3
        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |1|0| LDP MP Status Type(TBD)   |            Length             |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                           Value                               |
       ~                                                               ~
       |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   LDP MP Status Type:  The LDP MP Status Type to be assigned by IANA.

   Length:  Length of the LDP MP Status Value in octets.

   Value:  One or more LDP MP Status Value elements.

5.1.  The LDP MP Status Value Element

   The LDP MP Status Value Element that is included in the LDP MP Status
   TLV Value has the following encoding.

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       | Type(TBD)     | Length                        | Value ...     |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
       ~                                                               ~
       |                                                               |
       |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                               |

       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Type:  The type of the LDP MP Status Value Element is to be assigned
      by IANA.

   Length:  The length of the Value field, in octets.

   Value:  String of Length octets, to be interpreted as specified by
      the Type field.

5.2.  LDP Messages containing LDP MP Status messages

   The LDP MP status message may appear either in the label mapping
   message or a LDP notification message.

5.2.1.  LDP MP Status sent in LDP notification messages

   An LDP MP status TLV sent in a notification message must be
   accompanied with a Status TLV.  The general format of the
   Notification Message with an LDP MP status TLV is:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |0|   Notification (0x0001)     |      Message Length           |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                       Message ID                              |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                       Status TLV                              |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                   LDP MP Status TLV                           |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                 Optional LDP MP FEC TLV                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                 Optional Label TLV                            |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   The Status TLV status code is used to indicate that an LDP MP status
   TLV and additional information follow in the Notification message's
   "optional parameter" section.  Depending on the actual contents of
   the LDP MP status TLV, an LDP P2MP or MP2MP FEC TLV and Label TLV
   may also be present to provide context to the LDP MP Status TLV.
   (NOTE: Status Code is pending IANA assignment.)

   Since the notification does not refer to any particular message, the
   Message Id and Message Type fields are set to 0.

5.2.2.  LDP MP Status TLV in Label Mapping Message

   An example of the Label Mapping Message defined in RFC3036 is shown
   below to illustrate the message with an Optional LDP MP Status TLV
   present.

      0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |0|   Label Mapping (0x0400)    |      Message Length           |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                     Message ID                                |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                     FEC TLV                                   |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                     Label TLV                                 |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                     Optional LDP MP Status TLV                |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                     Additional Optional Parameters            |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

6.  Upstream label allocation on a LAN

   On a LAN the upstream LSR will send a copy of the packet to each
   receiver individually.  If there is more than one receiver on the
   LAN we don't take full benefit of the multi-access capability of the
   network.  We may optimize the bandwidth consumption on the LAN and
   replication overhead on the upstream LSR by using upstream label
   allocation [4].  Procedures on how to distribute upstream labels
   using LDP are documented in [5].

6.1.  LDP Multipoint-to-Multipoint on a LAN

   The procedure to allocate a context label on a LAN is defined in
   [4].  That procedure results in each LSR on a given LAN having a
   context label which, on that LAN, can be used to identify itself
   uniquely.  Each LSR advertises its context label as an upstream-
   assigned label, following the procedures of [5].  Any LSR for which
   the LAN is a downstream link on some P2MP or MP2MP LSP will allocate
   an upstream-assigned label identifying that LSP.  When the LSR
   forwards a packet downstream on one of those LSPs, the packet's top
   label must be the LSR's context label, and the packet's second label
   is the label identifying the LSP.  We will call the top label the
   "upstream LSR label" and the second label the "LSP label".

6.1.1.  MP2MP downstream forwarding

   The downstream path of a MP2MP LSP is much like a normal P2MP LSP,
   so we will use the same procedures as defined in [5].  A label
   request for a LSP label is sent to the upstream LSR.  The label
   mapping that is received from the upstream LSR contains the LSP
   label for the MP2MP FEC and the upstream LSR context label.  The
   MP2MP downstream path (corresponding to the LSP label) will be
   installed in the context-specific forwarding table corresponding to
   the upstream LSR label.  Packets sent by the upstream router can be
   forwarded downstream using this forwarding state based on a two
   label lookup.
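   The two label lookup can be sketched informally as follows (a
   non-normative illustration; the table contents and label values are
   invented for the example):

```python
def forward_on_lan(tables, upstream_lsr_label, lsp_label, payload):
    """Two-label lookup for MP2MP downstream forwarding on a LAN.

    The top (upstream LSR) label selects the context-specific
    forwarding table of the upstream LSR; the second (LSP) label
    selects the MP2MP LSP's entry within that table, yielding the
    set of downstream interfaces to replicate on.
    """
    context_table = tables[upstream_lsr_label]
    out_interfaces = context_table[lsp_label]
    return [(iface, payload) for iface in out_interfaces]

tables = {500: {30: ["I1", "I2"]}}   # context label 500, LSP label 30
out = forward_on_lan(tables, 500, 30, "pkt")
assert out == [("I1", "pkt"), ("I2", "pkt")]
```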

6.1.2.  MP2MP upstream forwarding

   A MP2MP LSP also has an upstream forwarding path.  Upstream packets
   need to be forwarded in the direction of the root and downstream on
   any node on the LAN that has a downstream interface for the LSP.
   For a given MP2MP LSP on a given LAN, exactly one LSR is considered
   to be the upstream LSR.  If an LSR on the LAN receives a packet from
   one of its downstream interfaces for the LSP, and if it forwards the
   packet onto the LAN, it ensures that the packet's top label is the
   context label of the upstream LSR, and that its second label is the
   LSP label that was assigned by the upstream LSR.

   Other LSRs receiving the packet will not be able to tell whether the
   packet really came from the upstream router, but that makes no
   difference in the processing of the packet.  The upstream LSR will
   see its own upstream LSR label, and this will enable it to determine
   that the packet is traveling upstream.

7.  Root node redundancy

   The root node is a single point of failure for an MP LSP, whether
   this is P2MP or MP2MP.  The problem is particularly severe for MP2MP
   LSPs.  In the case of MP2MP LSPs, all leaf nodes must use the same
   root node to set up the MP2MP LSP, because otherwise the traffic
   sourced by some leafs is not received by others.  Because the root
   node is the single point of failure for an MP LSP, we need a fast and
   efficient mechanism to recover from a root node failure.

   An MP LSP is uniquely identified in the network by the opaque value
   and the root node address.  It is likely that the root node for an MP
   LSP is defined statically.  The root node address may be configured
   on each leaf statically or learned using a dynamic protocol.  How
   leafs learn about the root node is out of the scope of this document.

   Suppose that for the same opaque value we define two (or more) root
   node addresses and we build a tree to each root using the same opaque
   value.  Effectively these will be treated as different MP LSPs in the
   network.  Once the trees are built, the procedures differ for P2MP
   and MP2MP LSPs.  The different procedures are explained in the
   sections below.

7.1.  Root node redundancy - procedures for P2MP LSPs

   Since all leafs have set up P2MP LSPs to all the roots, they are
   prepared to receive packets on either one of these LSPs.  However,
   only one of the roots should be forwarding traffic at any given time,
   for the following reasons: 1) to achieve bandwidth savings in the
   network and 2) to ensure that the receiving leafs don't receive
   duplicate packets (since one cannot assume that receiving leafs are
   able to discard duplicates).  How the roots determine which one is
   the active sender is outside the scope of this document.

7.2.  Root node redundancy - procedures for MP2MP LSPs

   Since all leafs have set up an MP2MP LSP to each one of the root
   nodes for this opaque value, a sending leaf may pick either of the
   two (or more) MP2MP LSPs to forward a packet on.  The leaf nodes
   receive the packet on one of the MP2MP LSPs.  The client of the MP2MP
   LSP does not care on which MP2MP LSP the packet is received, as long
   as they are for the same opaque value.  The sending leaf MUST only
   forward a packet on one MP2MP LSP at a given point in time.  The
   receiving leafs are unable to discard duplicate packets because they
   accept on all LSPs.  Using all the available MP2MP LSPs we can
   implement redundancy using the following procedures.

   A sending leaf selects a single root node out of the available roots
   for a given opaque value.  A good strategy MAY be to look at the
   unicast routing table and select a root that is closest in terms of
   the unicast metric.  As soon as the root address of the active root
   disappears from the unicast routing table (or becomes less
   attractive) due to root node or link failure, the leaf can select a
   new best root address and start forwarding to it directly.  If
   multiple root nodes have the same unicast metric, the highest root
   node addresses MAY be selected, or per session load balancing MAY be
   done over the root nodes.
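
   The root selection strategy described above can be sketched as
   follows.  This is an illustrative sketch only; the routing table
   interface and the integer address representation are assumptions:

```python
# Prefer the reachable root with the lowest unicast metric; break ties
# on the highest root address, per the strategy above.  `rib` stands in
# for a real unicast routing table lookup.
def select_root(roots, rib):
    """roots: iterable of root addresses (integers, for comparison).
    rib: dict mapping root address -> unicast metric; absent means
    unreachable.  Returns the selected root, or None."""
    reachable = [r for r in roots if r in rib]
    if not reachable:
        return None
    # lowest metric first; on equal metric the highest address wins
    return min(reachable, key=lambda r: (rib[r], -r))
```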

   All leafs participating in a MP2MP LSP MUST join to all the available
   root nodes for a given opaque value.  Since the sending leaf may pick
   any MP2MP LSP, it must be prepared to receive on it.

   The advantage of pre-building multiple MP2MP LSPs for a single opaque
   value is that convergence from a root node failure happens as fast as
   the unicast routing protocol is able to notify.  There is no need for
   an additional protocol to advertise to the leaf nodes which root node
   is the active root.  The root selection is a local leaf policy that
   does not need to be coordinated with other leafs.  The disadvantage
   of pre-building multiple MP2MP LSPs is that more label resources are
   used, depending on how many root nodes are defined.

8.  Make Before Break (MBB)

   An LSR selects as its upstream LSR for a MP LSP the LSR that is its
   next hop to the root of the LSP.  When the best path to reach the
   root changes the LSR must choose a new upstream LSR.  Section 2.4.2.4
   and Section 4.3.2.4 describe these procedures.

   When the best path to the root changes the LSP may be broken
   temporarily resulting in packet loss until the LSP "reconverges" to a
   new upstream LSR.  The goal of MBB when this happens is to keep the
   duration of packet loss as short as possible.  In addition, there are
   scenarios where the best path from the LSR to the root changes but
   the LSP continues to forward packets to the previous next hop to the
   root.  That may occur when a link comes up or routing metrics change.
   In such a case a new LSP should be established before the old LSP is
   removed to limit the duration of packet loss.  The procedures
   described below deal with both scenarios in a way that an LSR does
   not need to know which of the events described above caused its
   upstream router for an MBB LSP to change.

   These MBB procedures are an optional extension to the MP LSP building
   procedures described in this draft.

8.1.  MBB overview

   The MBB procedures use additional LDP signaling.

   Suppose some event causes a downstream LSR-D to select a new upstream
   LSR-U for FEC-A.  The new LSR-U may already be forwarding packets for
   FEC-A; that is, to downstream LSR's other than LSR-D.  After LSR-U
   receives a Label Mapping message for FEC-A from LSR-D, it will notify
   LSR-D when it knows that the LSP for FEC-A has been established from
   the root to itself.  When LSR-D receives this MBB notification it
   will change its next hop for the LSP root to LSR-U.

   The assumption is that if LSR-U has received an MBB notification from
   its upstream router for the FEC-A LSP and has installed forwarding
   state for the LSP, it is capable of forwarding packets on the LSP.
   At that point LSR-U should signal LSR-D by means of an MBB
   notification that it has become part of the tree identified by FEC-A
   and that LSR-D should initiate its switchover to the LSP.

   At LSR-U the LSP for FEC-A may be in 1 of 3 states.

   1.  There is no state for FEC-A.

   2.  State for FEC-A exists and LSR-U is waiting for MBB notification
       that the LSP from the root to it exists.

   3.  State for FEC-A exists and the MBB notification has been
       received.

   After LSR-U receives LSR-D's Label Mapping message for FEC-A, LSR-U
   MUST NOT reply with an MBB notification to LSR-D until its state for
   the LSP is state #3 above.  If the state of the LSP at LSR-U is state
   #1 or #2, LSR-U should remember receipt of the Label Mapping message
   from LSR-D while waiting for an MBB notification from its upstream
   LSR for the LSP.  When LSR-U receives the MBB notification from its
   upstream LSR it transitions to LSP state #3 and sends an MBB
   notification to LSR-D.
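
   The three states and the deferred MBB notification at LSR-U can be
   sketched as a small state machine.  This is illustrative only; the
   class, state names, and pending-reply set are assumptions, not part
   of the protocol:

```python
# States of an MBB LSP at the upstream LSR (LSR-U), per the list above.
NO_STATE, AWAITING_MBB_ACK, ESTABLISHED = 1, 2, 3

class MbbLspState:
    def __init__(self):
        self.state = NO_STATE
        self.pending_downstream = set()  # LSR-D's awaiting MBB notification

    def on_label_mapping(self, lsr_d):
        """LSR-D sent a Label Mapping with an MBB request.
        Returns the list of peers to notify immediately."""
        if self.state == ESTABLISHED:
            return [lsr_d]            # state #3: reply at once
        if self.state == NO_STATE:
            self.state = AWAITING_MBB_ACK
        self.pending_downstream.add(lsr_d)
        return []                     # remember LSR-D, reply later

    def on_mbb_notification(self):
        """MBB notification received from our own upstream LSR."""
        self.state = ESTABLISHED      # transition to state #3
        notify, self.pending_downstream = list(self.pending_downstream), set()
        return notify                 # send MBB notifications downstream
```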

8.2.  The MBB Status code

   As noted in Section 8.1, the procedures to establish an MBB MP LSP
   are different from those to establish normal MP LSPs.

   When a downstream LSR sends a Label Mapping message for the MP LSP to
   its upstream LSR it MAY include an LDP MP Status TLV that carries a
   MBB Status Code to indicate MBB procedures apply to the LSP.  This
   new MBB Status Code MAY also appear in an LDP Notification message
   used by an upstream LSR to signal LSP state #3 to the downstream LSR;
   that is, that the upstream LSR's state for the LSP exists and that it
   has received notification from its upstream LSR that the LSP is in
   state #3.

   The MBB Status is a type of the LDP MP Status Value Element as
   described in Section 5.1.  It is encoded as follows:

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | MBB Type = 1  |      Length = 1               | Status code   |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   MBB Type:  Type 1 (to be assigned by IANA)

   Length:  1

   Status code:  1 = MBB request
                 2 = MBB ack
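
   For illustration, the MBB Status Value Element above could be encoded
   as follows.  This is a sketch assuming network byte order; the Type
   value 1 is pending IANA assignment:

```python
import struct

# One octet Type (1), two octet Length (1), one octet Status code,
# matching the layout in the figure above.
MBB_TYPE = 1            # to be assigned by IANA
MBB_REQUEST, MBB_ACK = 1, 2

def encode_mbb_status(status_code):
    return struct.pack("!BHB", MBB_TYPE, 1, status_code)

def decode_mbb_status(data):
    mbb_type, length, status = struct.unpack("!BHB", data)
    assert mbb_type == MBB_TYPE and length == 1
    return status
```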

8.3.  The MBB capability

   An LSR MAY advertise that it is capable of handling MBB LSPs using
   the capability advertisement as defined in [6].  The LDP MP MBB
   capability has the following format:

        0                   1                   2                   3
        0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |1|0| LDP MP MBB Capability     |           Length = 1          |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |1| Reserved    |
       +-+-+-+-+-+-+-+-+

   Note:  LDP MP MBB Capability (Pending IANA assignment)

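
   As an illustration, the capability TLV above could be encoded as
   follows.  This is a sketch: the TLV type is pending IANA assignment,
   so the value 0x050A requested in the IANA considerations section is
   used here as a placeholder:

```python
import struct

# U-bit set (1), F-bit clear (0), 14-bit TLV type, Length = 1, and a
# one octet value with the S-bit set, per the figure above.
MBB_CAP_TYPE = 0x050A   # placeholder; actual value pending IANA assignment

def encode_mbb_capability():
    first16 = 0x8000 | MBB_CAP_TYPE   # U=1, F=0, 14-bit type
    value = 0x80                      # S-bit set, 7 reserved bits zero
    return struct.pack("!HHB", first16, 1, value)
```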

   If an LSR has not advertised that it is MBB capable, its LDP peers
   MUST NOT send it messages which include MBB parameters.  If an LSR
   receives a Label Mapping message with a MBB parameter from downstream
   LSR-D and its upstream LSR-U has not advertised that it is MBB
   capable, the LSR MUST send an MBB notification immediately to LSR-D
   (see Section 8.4).  If this happens an MBB MP LSP will not be
   established, but a normal MP LSP will be the result.

8.4.  The MBB procedures

8.4.1.  Terminology

   1.  MBB LSP <X, Y>: A P2MP or MP2MP LSPs Make Before Break (MBB) LSP entry
       with Root Node Address X and Opaque Value Y.

   2.  A(N, L): An Accepting element that consists of an upstream
       Neighbor N and Local label L. This LSR assigned label L to
       neighbor N for a specific MBB LSP.  For an active element the
       corresponding Label is stored in the label forwarding database.

   3.  iA(N, L): An inactive Accepting element that consists of an
       upstream neighbor N and local Label L. This LSR assigned label L
       to neighbor N for a specific MBB LSP.  For an inactive element
       the corresponding Label is not stored in the label forwarding
       database.

   4.  F(N, L): A Forwarding state that consists of a downstream
       Neighbor N and Label L. This LSR is sending labeled packets with
       label L to neighbor N for a specific FEC.

   5.  F'(N, L): A Forwarding state that has been marked for sending a
       MBB Notification message to Neighbor N with Label L.

   6.  MBB Notification <X, Y, L>: A LDP notification message with a MP
       LSP <X, Y>, Label L and select MBB Status code 2.

   7.  MBB Label Map <X, Y, L>: A P2MP Label Map or MP2MP Label Map
       downstream with a root FEC element <X, Y>, Label L and MBB Status code
       1.
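
   For concreteness, the elements above might be represented as simple
   records.  This is an illustrative sketch; the field names are
   assumptions, not from this draft:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Accepting:          # A(N, L) when active, iA(N, L) when not
    neighbor: str         # upstream neighbor N
    label: int            # local label L advertised to N
    active: bool = False  # active elements are in the forwarding database

@dataclass(frozen=True)
class Forwarding:         # F(N, L), or F'(N, L) when marked
    neighbor: str         # downstream neighbor N
    label: int            # label L used towards N
    mbb_pending: bool = False  # True marks F'(N, L)
```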

8.4.2.  Accepting elements

   An accepting element represents a specific label value L that has
   been advertised to a neighbor N for a MBB LSP <X, Y> and is a
   candidate for accepting label switched packets on.  An LSR can have
   two accepting elements for a specific MBB LSP <X, Y>, only one of
   them MUST be active.  An active element is the element for which the
   label value has been installed in the label forwarding database.  An
   inactive accepting element is created after a new upstream LSR is
   chosen and is pending to replace the active element in the label
   forwarding database.  Inactive elements only exist temporarily while
   switching to a new upstream LSR.  Once the switch has been completed
   only one active element remains.  During network convergence it is
   possible that an inactive accepting element is created while another
   inactive accepting element is pending.  If that happens the older
   inactive accepting element MUST be replaced with the newer inactive
   element.  If an accepting element is removed a Label Withdraw has to
   be sent for label L to neighbor N for <X, Y>.

8.4.3.  Procedures for upstream LSR change

   Suppose a node Z has a MBB LSP <X, Y> with an active accepting
   element A(N1, L1).  Due to a routing change it detects a new best
   path for root X and selects a new upstream LSR N2.  Node Z allocates
   a new local label L2 and creates an inactive accepting element
   iA(N2, L2).  Node Z sends MBB Label Map <X, Y, L2> to N2 and waits
   for the new upstream LSR N2 to respond with a MBB Notification for
   <X, Y, L2>.  During this transition phase there are two accepting
   elements, the element A(N1, L1) still accepting packets from N1 over
   label L1 and the new inactive element iA(N2, L2).

   While waiting for the MBB Notification from upstream LSR N2, it is
   possible that another transition occurs due to a routing change.
   Suppose the new upstream LSR is N3.  An inactive element iA(N3, L3)
   is created and the inactive element iA(N2, L2) MUST be removed.  A
   label withdraw MUST be sent to N2 for <X, Y, L2>.  The MBB
   Notification for <X, Y, L2> from N2 will be ignored because the
   inactive element is removed.

   It is possible that the MBB Notification from upstream LSR is never
   received due to link or node failure.  To prevent waiting
   indefinitely for the MBB Notification a timeout SHOULD be applied.
   As soon as the timer expires, the procedures in Section 8.4.5 are
   applied as if a MBB Notification was received for the inactive
   element.
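
   The accepting element transitions described in this section can be
   sketched as follows.  The structures are illustrative only, not
   normative:

```python
# Sketch of upstream LSR change handling: creating an inactive accepting
# element, replacing a pending one (which triggers a Label Withdraw to
# the old neighbor), and the switchover on a matching MBB Notification.
class MbbLsp:
    def __init__(self):
        self.active = None    # (neighbor, label) in the forwarding database
        self.inactive = None  # (neighbor, label) pending activation

    def on_new_upstream(self, neighbor, label):
        """Returns (neighbor, label) pairs needing a Label Withdraw."""
        withdraws = []
        if self.inactive is not None:
            withdraws.append(self.inactive)  # remove older pending element
        self.inactive = (neighbor, label)
        return withdraws  # caller also sends MBB Label Map to `neighbor`

    def on_mbb_notification(self, neighbor, label):
        if self.inactive != (neighbor, label):
            return False  # stale or mismatched notification: ignore
        self.active, self.inactive = self.inactive, None
        return True       # label installed in the forwarding database
```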

8.4.4.  Receiving a Label Map with MBB status code

   Suppose node Z has state for a MBB LSP <X, Y> and receives a MBB
   Label Map <X, Y, L2> from N2.  A new Forwarding state F(N2, L2) will
   be added to the MP LSP if it did not already exist.  If this MBB LSP
   has an active accepting element or node Z is the root of the MBB LSP,
   a MBB notification <X, Y, L2> is sent to node N2.  If node Z has an
   inactive accepting element it marks the Forwarding state as <X, Y,
   F'(N2, L2)>.

8.4.5.  Receiving a Notification with MBB status code

   Suppose node Z receives a MBB Notification <X, Y, L> from N. If node
   Z has state for MBB LSP <X, Y> and an inactive accepting element
   iA(N, L) that matches with N and L, we activate this accepting
   element and install label L in the label forwarding database.  If
   another active accepting element was present it will be removed from
   the label forwarding database.

   If this MBB LSP <X, Y> also has Forwarding states marked for sending
   MBB Notifications, like <X, Y, F'(N2, L2)>, MBB Notifications are
   sent to these downstream LSRs.  If node Z receives a MBB Notification
   for an accepting element that is not inactive or does not match the
   Label value and Neighbor address, the MBB notification is ignored.
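
   The handling of marked Forwarding states F'(N, L) described in
   Section 8.4.4 and above can be sketched as follows.  The structure
   and names are illustrative assumptions:

```python
# A Label Map with the MBB status code is acknowledged at once if an
# active accepting element exists (or this node is the root); otherwise
# the Forwarding state is marked F' and the MBB Notification is sent
# when our own inactive accepting element is activated.
class MbbForwarding:
    def __init__(self, is_root=False, has_active_accepting=True):
        self.is_root = is_root
        self.has_active_accepting = has_active_accepting
        self.marked = []  # F'(N, L) entries awaiting MBB Notification

    def on_mbb_label_map(self, neighbor, label):
        """Returns the (neighbor, label) pairs to notify immediately."""
        if self.has_active_accepting or self.is_root:
            return [(neighbor, label)]
        self.marked.append((neighbor, label))  # mark as F'(N, L)
        return []

    def on_accepting_activated(self):
        """Our inactive accepting element was activated."""
        notify, self.marked = self.marked, []
        self.has_active_accepting = True
        return notify  # send the pending MBB Notifications downstream
```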

8.4.6.  Node operation for MP2MP LSPs

   The procedures described above apply to the downstream path of a
   MP2MP LSP.  The upstream path of the MP2MP is setup as normal without
   including a MBB Status code.  If the MBB procedures apply to a MP2MP
   downstream FEC element, the upstream path to a node N is only
   installed in the label forwarding database if node N is part of the
   active accepting element.  If node N is part of an inactive accepting
   element, the upstream path is installed when this inactive accepting
   element is activated.

9.  Security Considerations

   The same security considerations apply as for the base LDP
   specification, as described in [1].

10.  IANA considerations

   This document creates a new name space (the LDP MP Opaque Value
   Element type) that is to be managed by IANA, and the allocation of
   the value 1 for the type of Generic LSP Identifier.

   This document requires allocation of three new LDP FEC Element
   types:

   1.  the P2MP FEC type - requested value 0x04

   2.  the MP2MP-up FEC type - requested value 0x05

   3.  the MP2MP-down FEC type - requested value 0x06

   This document requires the assignment of three new code points for
   three new Capability Parameter TLVs, corresponding to the
   advertisement of the P2MP, MP2MP and MBB capabilities.  The values
   requested are:

      P2MP Capability Parameter - requested value 0x0508

      MP2MP Capability Parameter - requested value 0x0509

      MBB Capability Parameter - requested value 0x050A

   This document requires the assignment of a LDP Status TLV code to
   indicate a LDP MP Status TLV is following in the Notification
   message.  The value requested is:

      LDP MP status - requested value 0x0000002C

   This document requires the assignment of a new code point for a LDP MP
   Status TLV.  The value requested is:

      LDP MP Status TLV Type - requested value 0x096D

   This document creates a new name space (the LDP MP Status Value
   Element type) that is to be managed by IANA, and the allocation of
   the value 1 for the type of MBB Status.

11.  Acknowledgments

   The authors would like to thank the following individuals for their
   review and contribution: Nischal Sheth, Yakov Rekhter, Rahul
   Aggarwal, Arjen Boers, Eric Rosen, Nidhi Bhaskar, Toerless Eckert,
   George Swallow, Jin Lizhong and Vanson Lim.

12.  Contributing authors

   Below is a list of the contributing authors in alphabetical order:

   Shane Amante
   Level 3 Communications, LLC
   1025 Eldorado Blvd
   Broomfield, CO 80021
   US
   Email: Shane.Amante@Level3.com

   Luyuan Fang
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   US
   Email: lufang@cisco.com

   Hitoshi Fukuda
   NTT Communications Corporation
   1-1-6, Uchisaiwai-cho, Chiyoda-ku
   Tokyo 100-8019,
   Japan
   Email: hitoshi.fukuda@ntt.com

   Yuji Kamite
   NTT Communications Corporation
   Tokyo Opera City Tower
   3-20-2 Nishi Shinjuku, Shinjuku-ku,
   Tokyo 163-1421,
   Japan
   Email: y.kamite@ntt.com

   Kireeti Kompella
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA 94089
   US
   Email: kireeti@juniper.net

   Ina Minei
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA  94089
   US
   Email: ina@juniper.net

   Jean-Louis Le Roux
   France Telecom
   2, avenue Pierre-Marzin
   Lannion, Cedex 22307
   France
   Email: jeanlouis.leroux@francetelecom.com

   Bob Thomas
   Cisco Systems, Inc.
   300 Beaver Brook Road
   Boxborough, MA, 01719
   E-mail: rhthomas@cisco.com

   Lei Wang
   Telenor
   Snaroyveien 30
   Fornebu 1331
   Norway
   Email: lei.wang@telenor.com

   IJsbrand Wijnands
   Cisco Systems, Inc.
   De kleetlaan 6a
   1831 Diegem
   Belgium
   E-mail: ice@cisco.com

13.  References

13.1.  Normative References

   [1]   Andersson, L., Doolan, P., Feldman, N., Fredette, A., and B.
         Thomas, "LDP Specification", RFC 3036, January 2001.

   [2]   Bradner, S., "Key words for use in RFCs to Indicate Requirement
         Levels", BCP 14, RFC 2119, March 1997.

   [3]   Reynolds, J., "Assigned Numbers: RFC 1700 is Replaced by an
         On-line Database", RFC 3232, January 2002.

   [4]   Aggarwal, R., "MPLS Upstream Label Assignment and Context
         Specific Label Space", draft-ietf-mpls-upstream-label-02 (work
         in progress), March 2007.

   [5]   Aggarwal, R. and J. Roux, "MPLS Upstream Label Assignment for
         RSVP-TE and LDP", draft-ietf-mpls-ldp-upstream-01 (work in
         progress), March 2007.

   [6]   Thomas, B., "LDP Capabilities",
         draft-ietf-mpls-ldp-capabilities-00 (work in progress),
         May 2007.

13.2.  Informative References

   [7]   Andersson, L. and E. Rosen, "Framework for Layer 2 Virtual
         Private Networks (L2VPNs)", RFC 4664, September 2006.

   [8]   Aggarwal, R., Papadimitriou, D., and S. Yasukawa, "Extensions
         to Resource Reservation Protocol - Traffic Engineering
         (RSVP-TE) for Point-to-Multipoint TE Label Switched Paths
         (LSPs)", RFC 4875, May 2007.

   [9]   Roux, J., "Requirements for Point-To-Multipoint Extensions to
         the Label Distribution Protocol",
         draft-ietf-mpls-mp-ldp-reqs-02 (work in progress), March 2007.

   [10]  Rosen, E. and R. Aggarwal, "Multicast in MPLS/BGP IP VPNs",
         draft-ietf-l3vpn-2547bis-mcast-02 (work in progress),
         June 2006.

Authors' Addresses

   Ina Minei
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA  94089
   US

   Email: ina@juniper.net

   Kireeti Kompella
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA  94089
   US

   Email: kireeti@juniper.net

   IJsbrand Wijnands
   Cisco Systems, Inc.
   De kleetlaan 6a
   Diegem  1831
   Belgium

   Email: ice@cisco.com

   Bob Thomas
   Cisco Systems, Inc.
   300 Beaver Brook Road
   Boxborough  01719
   US

   Email: rhthomas@cisco.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).