Network Working Group                                   I. Minei (Editor)
Internet-Draft                                                K. Kompella
Intended status: Standards Track                         Juniper Networks
Expires: January 10, 2008                            I. Wijnands (Editor)
                                                                B. Thomas
                                                      Cisco Systems, Inc.
                                                             July 9, 2007

  Label Distribution Protocol Extensions for Point-to-Multipoint and
           Multipoint-to-Multipoint Label Switched Paths
                    draft-ietf-mpls-ldp-p2mp-03
Status of this Memo

By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware
have been or will be disclosed, and any of which he or she becomes
aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
skipping to change at page 1, line 38
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on January 10, 2008.

Copyright Notice

Copyright (C) The IETF Trust (2007).

Abstract
This document describes extensions to the Label Distribution Protocol
(LDP) for the setup of point-to-multipoint (P2MP) and multipoint-to-
multipoint (MP2MP) Label Switched Paths (LSPs) in Multi-Protocol
Label Switching (MPLS) networks.  The solution relies on LDP without
requiring a multicast routing protocol in the network.  Protocol
elements and procedures for this solution are described for building
such LSPs in a receiver-initiated manner.  There can be various
applications for P2MP/MP2MP LSPs, for example IP multicast or support
for multicast in BGP/MPLS L3VPNs.  Specification of how such
applications can use an LDP-signaled P2MP/MP2MP LSP is outside the
scope of this document.
Table of Contents

1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
   1.1. Conventions used in this document . . . . . . . . . . . .  4
   1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . .  4
2. Setting up P2MP LSPs with LDP . . . . . . . . . . . . . . . .  5
   2.1. Support for P2MP LSP setup with LDP . . . . . . . . . . .  5
   2.2. The P2MP FEC Element . . . . . . . . . . . . . . . . . .  6
   2.3. The LDP MP Opaque Value Element . . . . . . . . . . . . .  7
      2.3.1. The Generic LSP Identifier . . . . . . . . . . . . .  8
   2.4. Using the P2MP FEC Element . . . . . . . . . . . . . . .  8
      2.4.1. Label Map . . . . . . . . . . . . . . . . . . . . .  9
      2.4.2. Label Withdraw . . . . . . . . . . . . . . . . . . . 11
3. Shared Trees . . . . . . . . . . . . . . . . . . . . . . . . . 11
4. Setting up MP2MP LSPs with LDP . . . . . . . . . . . . . . . . 12
   4.1. Support for MP2MP LSP setup with LDP . . . . . . . . . . 13
   4.2. The MP2MP downstream and upstream FEC Elements . . . . . 13
   4.3. Using the MP2MP FEC Elements . . . . . . . . . . . . . . 14
      4.3.1. MP2MP Label Map upstream and downstream . . . . . . 15
      4.3.2. MP2MP Label Withdraw . . . . . . . . . . . . . . . . 17
5. The LDP MP Status TLV . . . . . . . . . . . . . . . . . . . . 18
   5.1. The LDP MP Status Value Element . . . . . . . . . . . . . 19
   5.2. LDP Messages containing LDP MP Status messages . . . . . 20
      5.2.1. LDP MP Status sent in LDP notification messages . . 20
      5.2.2. LDP MP Status TLV in Label Mapping Message . . . . . 20
6. Upstream label allocation on a LAN . . . . . . . . . . . . . . 21
   6.1. LDP Multipoint-to-Multipoint on a LAN . . . . . . . . . . 21
      6.1.1. MP2MP downstream forwarding . . . . . . . . . . . . 21
      6.1.2. MP2MP upstream forwarding . . . . . . . . . . . . . 22
7. Root node redundancy . . . . . . . . . . . . . . . . . . . . . 22
   7.1. Root node redundancy - procedures for P2MP LSPs . . . . . 23
   7.2. Root node redundancy - procedures for MP2MP LSPs . . . . 23
8. Make Before Break (MBB) . . . . . . . . . . . . . . . . . . . 24
   8.1. MBB overview . . . . . . . . . . . . . . . . . . . . . . 24
   8.2. The MBB Status code . . . . . . . . . . . . . . . . . . . 25
   8.3. The MBB capability . . . . . . . . . . . . . . . . . . . 26
   8.4. The MBB procedures . . . . . . . . . . . . . . . . . . . 26
      8.4.1. Terminology . . . . . . . . . . . . . . . . . . . . 26
      8.4.2. Accepting elements . . . . . . . . . . . . . . . . . 27
      8.4.3. Procedures for upstream LSR change . . . . . . . . . 27
      8.4.4. Receiving a Label Map with MBB status code . . . . . 28
      8.4.5. Receiving a Notification with MBB status code . . . 28
      8.4.6. Node operation for MP2MP LSPs . . . . . . . . . . . 29
9. Security Considerations . . . . . . . . . . . . . . . . . . . 29
10. IANA considerations . . . . . . . . . . . . . . . . . . . . . 29
11. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 30
12. Contributing authors . . . . . . . . . . . . . . . . . . . . . 30
13. References . . . . . . . . . . . . . . . . . . . . . . . . . . 32
   13.1. Normative References . . . . . . . . . . . . . . . . . . 32
   13.2. Informative References . . . . . . . . . . . . . . . . . 32
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 33
Intellectual Property and Copyright Statements . . . . . . . . . . 34
1. Introduction

The LDP protocol is described in [1].  It defines mechanisms for
setting up point-to-point (P2P) and multipoint-to-point (MP2P) LSPs
in the network.  This document describes extensions to LDP for
setting up point-to-multipoint (P2MP) and multipoint-to-multipoint
(MP2MP) LSPs.  These are collectively referred to as multipoint LSPs
(MP LSPs).  A P2MP LSP allows traffic from a single root (or ingress)
node to be delivered to a number of leaf (or egress) nodes.  An MP2MP
LSP allows traffic from multiple ingress nodes to be delivered to
multiple egress nodes.  Only a single copy of the packet will be sent
on any link traversed by the MP LSP (see note at end of
Section 2.4.1).  This is accomplished without the use of a multicast
protocol in the network.  There can be several MP LSPs rooted at a
given ingress node, each with its own identifier.
The solution assumes that the leaf nodes of the MP LSP know the root
node and identifier of the MP LSP to which they belong.  The
mechanisms for the distribution of this information are outside the
scope of this document.  The specification of how an application can
use an MP LSP signaled by LDP is also outside the scope of this
document.

Interested readers may also wish to peruse the requirements draft [9]
and other documents [8] and [10].
1.1. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [2].
1.2. Terminology

The following terminology is taken from [9].

P2P LSP: An LSP that has one Ingress LSR and one Egress LSR.

P2MP LSP: An LSP that has one Ingress LSR and one or more Egress
LSRs.

MP2P LSP: An LSP that has one or more Ingress LSRs and one unique
Egress LSR.

MP2MP LSP: An LSP that connects a set of leaf nodes, acting
indifferently as ingress or egress.

MP LSP: A multipoint LSP, either a P2MP or an MP2MP LSP.

Ingress LSR: Source of the P2MP LSP, also referred to as root node.

Egress LSR: One of potentially many destinations of an LSP, also
referred to as leaf node in the case of P2MP and MP2MP LSPs.

Transit LSR: An LSR that has one or more directly connected
skipping to change at page 5, line 34
and one or more leaf nodes.  Leaf nodes initiate P2MP LSP setup and
tear-down.  Leaf nodes also install forwarding state to deliver the
traffic received on a P2MP LSP to wherever it needs to go; how this
is done is outside the scope of this document.  Transit nodes install
MPLS forwarding state and propagate the P2MP LSP setup (and tear-
down) toward the root.  The root node installs forwarding state to
map traffic into the P2MP LSP; how the root node determines which
traffic should go over the P2MP LSP is outside the scope of this
document.
2.1. Support for P2MP LSP setup with LDP

Support for the setup of P2MP LSPs is advertised using LDP
capabilities as defined in [6].  An implementation supporting the
P2MP procedures specified in this document MUST implement the
procedures for Capability Parameters in Initialization Messages.

A new Capability Parameter TLV is defined, the P2MP Capability.
Following is the format of the P2MP Capability Parameter.

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|0| P2MP Capability (TBD IANA)|          Length (= 1)         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|  Reserved   |
+-+-+-+-+-+-+-+-+

The P2MP Capability TLV MUST be supported in the LDP Initialization
Message.  Advertisement of the P2MP Capability indicates support of
the procedures for P2MP LSP setup detailed in this document.  If the
peer has not advertised the corresponding capability, then no label
messages using the P2MP FEC Element should be sent to the peer.
2.2. The P2MP FEC Element

For the setup of a P2MP LSP with LDP, we define one new protocol
entity, the P2MP FEC Element to be used as a FEC Element in the FEC
TLV.  Note that the P2MP FEC Element does not necessarily identify
the traffic that must be mapped to the LSP, so from that point of
view, the use of the term FEC is a misnomer.  The description of the
P2MP FEC Element follows.
The P2MP FEC Element consists of the address of the root of the P2MP
LSP and an opaque value.  The opaque value consists of one or more
LDP MP Opaque Value Elements.  The opaque value is unique within the
context of the root node.  The combination of (Root Node Address,
Opaque Value) uniquely identifies a P2MP LSP within the MPLS network.

The P2MP FEC Element is encoded as follows:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|P2MP Type (TBD)|        Address Family         | Address Length|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Root Node Address                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Opaque Length         |       Opaque Value ...        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
~                                                               ~
|                                                               |
|                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Type: The type of the P2MP FEC Element is to be assigned by IANA.
Address Family: Two octet quantity containing a value from ADDRESS
FAMILY NUMBERS in [3] that encodes the address family for the Root
LSR Address.

Address Length: Length of the Root LSR Address in octets.

Root Node Address: A host address encoded according to the Address
Family field.
skipping to change at page 7, line 33
Address Lengths are defined at present.

If the Address Length doesn't match the defined length for the
Address Family, the receiver SHOULD abort processing the message
containing the FEC Element, and send an "Unknown FEC" Notification
message to its LDP peer signaling an error.

If a FEC TLV contains a P2MP FEC Element, the P2MP FEC Element MUST
be the only FEC Element in the FEC TLV.
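As an illustrative sketch (not part of this specification), the encoding above could be produced as follows for an IPv4 root node address.  The FEC Element type used is a placeholder, since the actual value is TBD by IANA; the address family value 1 is IPv4 from the Address Family Numbers registry referenced in [3].

```python
import socket
import struct

P2MP_FEC_TYPE = 0x06  # placeholder; the P2MP FEC Element type is TBD by IANA
AF_IPV4 = 1           # IPv4 from the IANA Address Family Numbers registry


def encode_p2mp_fec(root_addr: str, opaque: bytes) -> bytes:
    """Encode a P2MP FEC Element with an IPv4 Root Node Address.

    Layout follows the figure above: one-octet type, two-octet
    Address Family, one-octet Address Length, the Root Node Address
    itself, then a two-octet Opaque Length followed by the opaque
    value (one or more LDP MP Opaque Value Elements).
    """
    addr = socket.inet_aton(root_addr)  # 4 octets, so Address Length = 4
    return (struct.pack("!BHB", P2MP_FEC_TYPE, AF_IPV4, len(addr))
            + addr
            + struct.pack("!H", len(opaque))
            + opaque)
```

A receiver would reverse this parse and, per the text above, verify that the Address Length matches the length defined for the Address Family before proceeding.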
2.3. The LDP MP Opaque Value Element

The LDP MP Opaque Value Element is used in the P2MP and MP2MP FEC
Elements defined in subsequent sections.  It carries information that
is meaningful to leaf (and bud) LSRs, but need not be interpreted by
non-leaf LSRs.

The LDP MP Opaque Value Element is encoded as follows:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Type(TBD)    |            Length             |   Value ...   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
skipping to change at page 8, line 27

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Type: The type of the LDP MP Opaque Value Element is to be assigned
by IANA.

Length: The length of the Value field, in octets.

Value: String of Length octets, to be interpreted as specified by
the Type field.
2.3.1. The Generic LSP Identifier

The generic LSP identifier is a type of Opaque Value Element encoded
as follows:

Type: 1 (to be assigned by IANA)

Length: 4

Value: A 32-bit integer, unique in the context of the root, as
identified by the root's address.

This type of Opaque Value Element is recommended when mapping of
traffic to LSPs is non-algorithmic, and done by means outside LDP.
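The generic LSP identifier encoding is small enough to show in full.  A minimal sketch, assuming the Type value of 1 from the text (still subject to IANA assignment):

```python
import struct


def encode_generic_lsp_id(lsp_id: int) -> bytes:
    """Encode the Generic LSP Identifier Opaque Value Element.

    Type = 1, Length = 4, Value = a 32-bit integer that the root
    guarantees to be unique in its own context.
    """
    return struct.pack("!BHI", 1, 4, lsp_id)
```

The element is always seven octets: one for the type, two for the length, four for the identifier itself.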
2.4. Using the P2MP FEC Element

This section defines the rules for the processing and propagation of
the P2MP FEC Element.  The following notation is used in the
processing rules:

1. P2MP FEC Element <X, Y>: a FEC Element with Root Node Address X
and Opaque Value Y.

2. P2MP Label Map <X, Y, L>: a Label Map message with a FEC TLV with
a single P2MP FEC Element <X, Y> and Label TLV with label L.
skipping to change at page 9, line 34
in the P2MP LSP.  Node Z knows that it is a leaf node by a discovery
process which is outside the scope of this document.  During the
course of protocol operation, the root node recognizes its role
because it owns the Root Node Address.  A transit node is any node
(other than the root node) that receives a P2MP Label Map message
(i.e., one that has leaf nodes downstream of it).

Note that a transit node (and indeed the root node) may also be a
leaf node.
2.4.1. Label Map

The following lists procedures for generating and processing P2MP
Label Map messages for nodes that participate in a P2MP LSP.  An LSR
should apply those procedures that apply to it, based on its role in
the P2MP LSP.

For the approach described here we use downstream assigned labels.
On Ethernet networks this may be less optimal, see Section 6.
2.4.1.1. Determining one's 'upstream LSR'

A node Z that is part of P2MP LSP <X, Y> determines the LDP peer U
which lies on the best path from Z to the root node X.  If there is
more than one such LDP peer, only one of them is picked.  U is Z's
"Upstream LSR" for <X, Y>.

When there are several candidate upstream LSRs, the LSR MAY select
one upstream LSR using the following procedure:

1. The candidate upstream LSRs are numbered from lower to higher IP
skipping to change at page 10, line 20
2. The following hash is performed: H = (Sum Opaque value) modulo N,
where N is the number of candidate upstream LSRs

3. The selected upstream LSR U is the LSR that has the number H.

This allows for load balancing of a set of LSPs among a set of
candidate upstream LSRs, while ensuring that on a LAN interface a
single upstream LSR is selected.
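The three-step selection could be sketched as below.  Interpreting "(Sum Opaque value)" as the sum of the opaque value's octets is an assumption of this sketch, not something the text pins down:

```python
import socket


def select_upstream_lsr(candidates, opaque_value: bytes) -> str:
    """Pick Z's upstream LSR U for <X, Y> from candidate peer addresses.

    Step 1: number candidates from lower to higher IP address.
    Step 2: H = (sum over the opaque value octets) modulo N.
    Step 3: the candidate numbered H is the selected upstream LSR.
    """
    ordered = sorted(candidates, key=socket.inet_aton)  # numeric IP order
    h = sum(opaque_value) % len(ordered)
    return ordered[h]
```

Because every LSR on the LAN sees the same candidate set and the same opaque value, each arrives at the same choice of U, which is what yields a single upstream LSR per LSP on the interface.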
2.4.1.2. Leaf Operation

A leaf node Z of P2MP LSP <X, Y> determines its upstream LSR U for
<X, Y> as per Section 2.4.1.1, allocates a label L, and sends a P2MP
Label Map <X, Y, L> to U.
2.4.1.3. Transit Node operation

Suppose a transit node Z receives a P2MP Label Map <X, Y, L> from LDP
peer T.  Z checks whether it already has state for <X, Y>.  If not, Z
allocates a label L', and installs state to swap L' with L over
interface I associated with peer T.  Z also determines its upstream
LSR U for <X, Y> as per Section 2.4.1.1, and sends a P2MP Label Map
<X, Y, L'> to U.

If Z already has state for <X, Y>, then Z does not send a Label Map
message for P2MP LSP <X, Y>.  All that Z needs to do in this case is
update its forwarding state.  Assuming its old forwarding state was
L'-> {<I1, L1> <I2, L2> ..., <In, Ln>}, its new forwarding state
becomes L'-> {<I1, L1> <I2, L2> ..., <In, Ln>, <I, L>}.
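The transit behaviour above can be sketched as a small state machine.  This is purely illustrative; the callables passed in (label allocation, sending a Label Map, upstream selection) stand in for machinery a real LSR would provide:

```python
class TransitState:
    """Sketch of transit-node P2MP Label Map handling.

    Forwarding state per <X, Y> FEC is the upstream label L' plus the
    set of downstream <interface, label> branches, mirroring the
    L'-> {<I1, L1> ... <In, Ln>} notation in the text.
    """

    def __init__(self, allocate_label, send_label_map, select_upstream):
        self.state = {}  # fec -> [upstream_label, {(interface, label), ...}]
        self.allocate_label = allocate_label
        self.send_label_map = send_label_map
        self.select_upstream = select_upstream

    def on_p2mp_label_map(self, fec, label, interface):
        if fec not in self.state:
            # First branch: allocate L', install swap state, and
            # propagate a Label Map <X, Y, L'> toward the root.
            new_label = self.allocate_label()
            self.state[fec] = [new_label, {(interface, label)}]
            self.send_label_map(self.select_upstream(fec), fec, new_label)
        else:
            # State for <X, Y> exists: just add the new branch and do
            # not send another Label Map upstream.
            self.state[fec][1].add((interface, label))
```

Note that only the first Label Map for a given <X, Y> triggers an upstream message; later ones merely grow the branch set.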
2.4.1.4. Root Node Operation

Suppose the root node Z receives a P2MP Label Map <X, Y, L> from peer
T.  Z checks whether it already has forwarding state for <X, Y>.  If
not, Z creates forwarding state to push label L onto the traffic that
Z wants to forward over the P2MP LSP (how this traffic is determined
is outside the scope of this document).

If Z already has forwarding state for <X, Y>, then Z adds "push label
L, send over interface I" to the nexthop, where I is the interface
associated with peer T.
2.4.2. Label Withdraw

The following lists procedures for generating and processing P2MP
Label Withdraw messages for nodes that participate in a P2MP LSP.  An
LSR should apply those procedures that apply to it, based on its role
in the P2MP LSP.
2.4.2.1. Leaf Operation

If a leaf node Z discovers (by means outside the scope of this
document) that it is no longer a leaf of the P2MP LSP, it SHOULD send
a Label Withdraw <X, Y, L> to its upstream LSR U for <X, Y>, where L
is the label it had previously advertised to U for <X, Y>.
2.4.2.2. Transit Node Operation

If a transit node Z receives a Label Withdraw message <X, Y, L> from
a node W, it deletes label L from its forwarding state, and sends a
Label Release message with label L to W.

If deleting L from Z's forwarding state for P2MP LSP <X, Y> results
in no state remaining for <X, Y>, then Z propagates the Label
Withdraw for <X, Y>, to its upstream T, by sending a Label Withdraw
<X, Y, L1> where L1 is the label Z had previously advertised to T for
<X, Y>.
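The two-step withdraw handling (release to the withdrawing peer; propagate upstream only when the last branch goes away) can be sketched as follows.  The state layout, mapping each <X, Y> FEC to its upstream label and its set of downstream <interface, label> branches, and the callables for sending messages are assumptions of this sketch:

```python
def process_p2mp_withdraw(state, fec, branch, send_release, send_withdraw):
    """Sketch of transit handling of a P2MP Label Withdraw.

    state  : {fec: [upstream_label, {(interface, label), ...}]}
    branch : the withdrawn <interface, label> pair from peer W.
    """
    upstream_label, branches = state[fec]
    branches.discard(branch)
    send_release(branch[1])  # Label Release with label L back to W
    if not branches:
        # No state remains for <X, Y>: remove it and propagate the
        # withdraw upstream with the label previously advertised to T.
        del state[fec]
        send_withdraw(fec, upstream_label)
```

The root node would run the same logic minus the final upstream propagation, since it has no upstream.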
2.4.2.3. Root Node Operation

The procedures when the root node of a P2MP LSP receives a Label
Withdraw message are the same as for transit nodes, except that it
would not propagate the Label Withdraw upstream (as it has no
upstream).
2.3.2.4. Upstream LSR change 2.4.2.4. Upstream LSR change
If, for a given node Z participating in a P2MP LSP <X, Y>, the If, for a given node Z participating in a P2MP LSP <X, Y>, the
upstream LSR changes, say from U to U', then Z MUST update its upstream LSR changes, say from U to U', then Z MUST update its
forwarding state by deleting the state for label L, allocating a new forwarding state by deleting the state for label L, allocating a new
label, L', for <X,Y>, and installing the forwarding state for L'. In label, L', for <X,Y>, and installing the forwarding state for L'. In
addition Z MUST send a Label Map <X, Y, L'> to U' and send a Label addition Z MUST send a Label Map <X, Y, L'> to U' and send a Label
Withdraw <X, Y, L> to U. Withdraw <X, Y, L> to U.
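The upstream-LSR-change procedure above can be sketched as follows.
This is an illustrative model only: the state dictionary, message
tuples, and label allocator are hypothetical and not part of the
protocol encoding.

```python
# Sketch of the upstream-LSR-change procedure for node Z on
# P2MP LSP <X, Y> (Section 2.4.2.4). Data structures are
# illustrative, not normative.

def change_upstream_lsr(state, fec, old_upstream, new_upstream,
                        alloc_label):
    """Update Z's state when the upstream LSR for `fec` changes.

    state: dict mapping fec -> {"upstream": lsr, "label": local label}
    Returns the LDP messages Z must send, in order.
    """
    old_label = state[fec]["label"]
    new_label = alloc_label()  # allocate new label L'
    state[fec] = {"upstream": new_upstream, "label": new_label}
    return [
        # Label Map <X, Y, L'> to the new upstream U'
        ("LABEL_MAP", new_upstream, fec, new_label),
        # Label Withdraw <X, Y, L> to the old upstream U
        ("LABEL_WITHDRAW", old_upstream, fec, old_label),
    ]

# Example: Z moves from upstream "U" to "U2" for FEC <X, Y>.
labels = iter(range(100, 200))
state = {("X", "Y"): {"upstream": "U", "label": 16}}
msgs = change_upstream_lsr(state, ("X", "Y"), "U", "U2",
                           lambda: next(labels))
```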
3. Shared Trees
to reach other leaf nodes. Mapping traffic to the MP2MP LSP may
happen at any leaf node. How that mapping is established is outside
the scope of this document.

Due to how a MP2MP LSP is built, a leaf LSR that is sending packets
on the MP2MP LSP does not receive its own packets. There is also no
additional mechanism needed on the root or transit LSR to match
upstream traffic to the downstream forwarding state. Packets that
are forwarded over a MP2MP LSP will not traverse a link more than
once, with the exception of LAN links, which are discussed in
Section 4.3.1.
4.1. Support for MP2MP LSP setup with LDP

Support for the setup of MP2MP LSPs is advertised using LDP
capabilities as defined in [6]. An implementation supporting the
MP2MP procedures specified in this document MUST implement the
procedures for Capability Parameters in Initialization Messages.

A new Capability Parameter TLV is defined, the MP2MP Capability.
Following is the format of the MP2MP Capability Parameter.

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |1|0| MP2MP Capability (TBD IANA)|         Length (= 1)          |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |1| Reserved    |
    +-+-+-+-+-+-+-+-+

The MP2MP Capability TLV MUST be supported in the LDP Initialization
Message. Advertisement of the MP2MP Capability indicates support of
the procedures for MP2MP LSP setup detailed in this document. If the
peer has not advertised the corresponding capability, then no label
messages using the MP2MP upstream and downstream FEC Elements should
be sent to the peer.

4.2. The MP2MP downstream and upstream FEC Elements

For the setup of a MP2MP LSP with LDP we define 2 new protocol
entities, the MP2MP downstream FEC and upstream FEC Element. Both
elements will be used as FEC Elements in the FEC TLV. Note that the
MP2MP FEC Elements do not necessarily identify the traffic that must
be mapped to the LSP, so from that point of view, the use of the term
FEC is a misnomer. The description of the MP2MP FEC Elements
follows.

The structure, encoding and error handling for the MP2MP downstream
and upstream FEC Elements are the same as for the P2MP FEC Element
described in Section 2.2. The difference is that two new FEC types
are used: MP2MP downstream type (TBD) and MP2MP upstream type (TBD).
If a FEC TLV contains an MP2MP FEC Element, the MP2MP FEC Element
MUST be the only FEC Element in the FEC TLV.
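The MP2MP Capability Parameter shown above can be encoded as in the
sketch below. The type code is marked "TBD IANA" in the draft, so the
value used here (0x0509) is a placeholder only, and the helper name
is hypothetical.

```python
import struct

# Sketch of the MP2MP Capability Parameter encoding: U=1, F=0, a
# 14-bit Type, Length = 1, then one octet carrying the S (State) bit
# and 7 reserved bits. The type code 0x0509 is a placeholder; the
# real value is to be assigned by IANA.

def encode_mp2mp_capability(type_code, s_bit=1):
    if type_code > 0x3FFF:
        raise ValueError("capability type is a 14-bit field")
    first16 = 0x8000 | type_code  # U=1, F=0, then the 14-bit Type
    data = (s_bit & 1) << 7       # S bit in the high-order position
    return struct.pack("!HHB", first16, 1, data)

tlv = encode_mp2mp_capability(0x0509)
```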
4.3. Using the MP2MP FEC Elements

This section defines the rules for the processing and propagation of
the MP2MP FEC Elements. The following notation is used in the
processing rules:
1. MP2MP downstream LSP <X, Y> (or simply downstream <X, Y>): an
   MP2MP LSP downstream path with root node address X and opaque
   value Y.

2. MP2MP upstream LSP <X, Y, D> (or simply upstream <X, Y, D>): a
   MP2MP LSP upstream path for downstream node D with root node
   address X and opaque value Y.

3. MP2MP downstream FEC Element <X, Y>: a FEC Element with root node
   address X and opaque value Y used for a downstream MP2MP LSP.

4. MP2MP upstream FEC Element <X, Y>: a FEC Element with root node
   address X and opaque value Y used for an upstream MP2MP LSP.

5. MP2MP Label Map downstream <X, Y, L>: a Label Map message with a
   FEC TLV with a single MP2MP downstream FEC Element <X, Y> and a
   Label TLV with label L.

6. MP2MP Label Map upstream <X, Y, Lu>: a Label Map message with a
   FEC TLV with a single MP2MP upstream FEC Element <X, Y> and a
   Label TLV with label Lu.

7. MP2MP Label Withdraw downstream <X, Y, L>: a Label Withdraw
   message with a FEC TLV with a single MP2MP downstream FEC Element
   <X, Y> and a Label TLV with label L.

8. MP2MP Label Withdraw upstream <X, Y, Lu>: a Label Withdraw
   message with a FEC TLV with a single MP2MP upstream FEC Element
   <X, Y> and a Label TLV with label Lu.
The procedures below are organized by the role which the node plays
in the MP2MP LSP. Node Z knows that it is a leaf node by a discovery
process which is outside the scope of this document. During the
course of the protocol operation, the root node recognizes its role
because it owns the root node address. A transit node is any node
(other than the root node) that receives a MP2MP Label Map message
(i.e., one that has leaf nodes downstream of it).

Note that a transit node (and indeed the root node) may also be a
leaf node, and the root node does not have to be an ingress LSR or a
leaf of the MP2MP LSP.
4.3.1. MP2MP Label Map upstream and downstream

The following lists procedures for generating and processing MP2MP
Label Map messages for nodes that participate in a MP2MP LSP. An LSR
should apply those procedures that apply to it, based on its role in
the MP2MP LSP.

For the approach described here, if there are several receivers for a
MP2MP LSP on a LAN, packets are replicated over the LAN. This may
not be optimal; optimizing this case is for further study, see [4].
4.3.1.1. Determining one's upstream MP2MP LSR

Determining the upstream LDP peer U for a MP2MP LSP <X, Y> follows
the procedure for a P2MP LSP described in Section 2.4.1.1.
4.3.1.2. Determining one's downstream MP2MP LSR

An LDP peer U which receives a MP2MP Label Map downstream from an
LDP peer D will treat D as a downstream MP2MP LSR.
4.3.1.3. MP2MP leaf node operation

A leaf node Z of a MP2MP LSP <X, Y> determines its upstream LSR U for
<X, Y> as per Section 4.3.1.1, allocates a label L, and sends a MP2MP
Label Map downstream <X, Y, L> to U.

Leaf node Z expects an MP2MP Label Map upstream <X, Y, Lu> from node
U in response to the MP2MP Label Map downstream it sent to node U. Z
checks whether it already has forwarding state for upstream <X, Y>.
If not, Z creates forwarding state to push label Lu onto the traffic
that Z wants to forward over the MP2MP LSP. How it determines what
traffic to forward on this MP2MP LSP is outside the scope of this
document.
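The leaf behaviour above can be sketched as a small state machine.
The class, message tuples, and label allocator below are illustrative
assumptions, not protocol structures defined by this document.

```python
# Sketch of MP2MP leaf node operation: send a downstream Label Map
# to the upstream LSR, then install push state when the upstream
# Label Map arrives. Structures are illustrative only.

class Mp2mpLeaf:
    def __init__(self, alloc_label):
        self.alloc_label = alloc_label
        self.downstream_sent = {}  # fec -> (upstream U, label L)
        self.upstream_push = {}    # fec -> label Lu to push on traffic

    def join(self, fec, upstream):
        """Allocate L and return the downstream Label Map for U."""
        label = self.alloc_label()
        self.downstream_sent[fec] = (upstream, label)
        return ("MP2MP_MAP_DOWNSTREAM", upstream, fec, label)

    def on_upstream_map(self, fec, label_u):
        """Install push state for Lu unless it already exists."""
        self.upstream_push.setdefault(fec, label_u)

labels = iter(range(300, 400))
leaf = Mp2mpLeaf(lambda: next(labels))
msg = leaf.join(("X", "Y"), "U")
leaf.on_upstream_map(("X", "Y"), 77)
```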
4.3.1.4. MP2MP transit node operation

When node Z receives a MP2MP Label Map downstream <X, Y, L> from peer
D associated with interface I, it checks whether it has forwarding
state for downstream <X, Y>. If not, Z allocates a label L' and
installs downstream forwarding state to swap label L' with label L
over interface I. Z also determines its upstream LSR U for <X, Y> as
per Section 4.3.1.1, and sends a MP2MP Label Map downstream <X, Y,
L'> to U.

If Z already has forwarding state for downstream <X, Y>, all that Z
needs to do is update its forwarding state. Assuming its old
forwarding state was L'-> {<I1, L1> <I2, L2> ..., <In, Ln>}, its new
forwarding state becomes L'-> {<I1, L1> <I2, L2> ..., <In, Ln>, <I,
L>}.
Node Z checks whether it already has forwarding state upstream <X, Y,
D>. If it does, then no further action needs to happen. If it does
not, it allocates a label Lu and creates a new label swap for Lu from
the label swap(s) from the forwarding state downstream <X, Y>,
omitting the swap on interface I for node D. This allows upstream
traffic to follow the MP2MP tree down to other node(s), except the
node from which Z received the MP2MP Label Map downstream <X, Y, L>.
Node Z determines the downstream MP2MP LSR as per Section 4.3.1.2,
and sends a MP2MP Label Map upstream <X, Y, Lu> to node D.

Transit node Z will also receive a MP2MP Label Map upstream <X, Y,
Lu> in response to the MP2MP Label Map downstream sent to node U
associated with interface Iu. Node Z will add label swap Lu over
interface Iu to the forwarding state upstream <X, Y, D>. This allows
packets to go up the tree towards the root node.
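The key step above, deriving the upstream forwarding state <X, Y, D>
from the downstream state by omitting the branch through which D's
Label Map arrived, can be sketched as follows (the branch-list
representation is an assumption for illustration):

```python
# Sketch of how a transit node derives the upstream forwarding
# state <X, Y, D> from the downstream state: copy every downstream
# branch except the one on the interface D's Label Map arrived on,
# so upstream traffic never echoes back towards D.

def derive_upstream_swaps(downstream_swaps, recv_iface):
    """downstream_swaps: list of (iface, label) branches for <X, Y>.
    Returns the branch list to install for the upstream label Lu."""
    return [(i, l) for (i, l) in downstream_swaps if i != recv_iface]

# Downstream state has branches on I1, I2, I3; D is reached over I2.
down = [("I1", 101), ("I2", 102), ("I3", 103)]
up_for_d = derive_upstream_swaps(down, "I2")
```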
4.3.1.5. MP2MP root node operation

4.3.1.5.1. Root node is also a leaf

Suppose root/leaf node Z receives a MP2MP Label Map downstream <X, Y,
L> from node D associated with interface I. Z checks whether it
already has forwarding state downstream <X, Y>. If not, Z creates
downstream forwarding state to push label L on traffic that Z
wants to forward down the MP2MP LSP. How it determines what traffic
to forward on this MP2MP LSP is outside the scope of this document.
If Z already has forwarding state for downstream <X, Y>, then Z will
add the label push for L over interface I to it.

Node Z checks if it has forwarding state for upstream <X, Y, D>. If
not, Z allocates a label Lu and creates upstream forwarding state to
push Lu with the label push(es) from the forwarding state downstream
<X, Y>, except the push on interface I for node D. This allows
upstream traffic to go down the MP2MP LSP to other node(s), except
the node from which the traffic was received. Node Z determines the
downstream MP2MP LSR as per Section 4.3.1.2, and sends a MP2MP Label
Map upstream <X, Y, Lu> to node D. Since Z is the root of the tree,
Z will not send a MP2MP downstream map and will not receive a MP2MP
upstream map.
4.3.1.5.2. Root node is not a leaf

Suppose the root node Z receives a MP2MP Label Map downstream <X, Y,
L> from node D associated with interface I. Z checks whether it
already has forwarding state for downstream <X, Y>. If not, Z
creates downstream forwarding state and installs an outgoing label L
over interface I. If Z already has forwarding state for downstream
<X, Y>, then Z will add label L over interface I to the existing
state.

Node Z checks if it has forwarding state for upstream <X, Y, D>. If
not, Z allocates a label Lu and creates forwarding state to swap Lu
with the label swap(s) from the forwarding state downstream <X, Y>,
except the swap for node D. This allows upstream traffic to go down
the MP2MP LSP to other node(s), except the node it was received from.
Root node Z determines the downstream MP2MP LSR D as per
Section 4.3.1.2, and sends a MP2MP Label Map upstream <X, Y, Lu> to
it. Since Z is the root of the tree, Z will not send a MP2MP
downstream map and will not receive a MP2MP upstream map.
4.3.2. MP2MP Label Withdraw

The following lists procedures for generating and processing MP2MP
Label Withdraw messages for nodes that participate in a MP2MP LSP.
An LSR should apply those procedures that apply to it, based on its
role in the MP2MP LSP.
4.3.2.1. MP2MP leaf operation

If a leaf node Z discovers (by means outside the scope of this
document) that it is no longer a leaf of the MP2MP LSP, it SHOULD
send a downstream Label Withdraw <X, Y, L> to its upstream LSR U for
<X, Y>, where L is the label it had previously advertised to U for
<X, Y>.

Leaf node Z expects the upstream router U to respond by sending a
downstream Label Release for L and an upstream Label Withdraw <X, Y,
Lu> to remove Lu from the upstream state. Node Z will remove label
Lu from its upstream state and send a Label Release message with
label Lu to U.
4.3.2.2. MP2MP transit node operation

If a transit node Z receives a downstream Label Withdraw message <X,
Y, L> from node D, it deletes label L from its forwarding state
downstream <X, Y> and from all its upstream states for <X, Y>. Node
Z sends a Label Release message with label L to D. Since node D is
no longer part of the downstream forwarding state, Z cleans up the
forwarding state upstream <X, Y, D> and sends an upstream Label
Withdraw <X, Y, Lu> to D.

If deleting L from Z's forwarding state for downstream <X, Y> results
in no state remaining for <X, Y>, then Z propagates the Label
Withdraw <X, Y, L> to its upstream node U for <X, Y>.
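The transit withdraw processing above can be sketched as follows;
the dictionaries and message tuples are hypothetical bookkeeping, not
protocol elements.

```python
# Sketch of transit processing of a downstream Label Withdraw:
# remove D's branch, release L, tear down the upstream state kept
# for D, and signal whether the withdraw must be propagated upstream
# (i.e., no downstream state remains for <X, Y>).

def on_downstream_withdraw(downstream, upstream_states, fec, d, label):
    downstream[fec] = [(n, l) for (n, l) in downstream[fec]
                       if not (n == d and l == label)]
    lu = upstream_states.pop((fec, d), None)  # forget upstream <X,Y,D>
    msgs = [("LABEL_RELEASE", d, fec, label)]
    if lu is not None:
        msgs.append(("UPSTREAM_WITHDRAW", d, fec, lu))
    propagate = not downstream[fec]  # True if no branches remain
    return msgs, propagate

downstream = {("X", "Y"): [("D", 55), ("E", 56)]}
upstream_states = {(("X", "Y"), "D"): 90, (("X", "Y"), "E"): 91}
msgs, prop = on_downstream_withdraw(downstream, upstream_states,
                                    ("X", "Y"), "D", 55)
```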
4.3.2.3. MP2MP root node operation

The procedure when the root node of a MP2MP LSP receives a Label
Withdraw message is the same as for transit nodes, except that the
root node would not propagate the Label Withdraw upstream (as it has
no upstream).
4.3.2.4. MP2MP Upstream LSR change

The procedure for changing the upstream LSR is the same as documented
in Section 2.4.2.4, except it is applied to MP2MP FECs, using the
procedures described in Section 4.3.1 through Section 4.3.2.3.
5. The LDP MP Status TLV

An LDP MP capable router MAY use an LDP MP Status TLV to indicate
additional status for a MP LSP to its remote peers. This includes
signaling to peers that are either upstream or downstream of the LDP
MP capable router. The value of the LDP MP Status TLV will remain
opaque to LDP and MAY encode one or more status elements.

The LDP MP Status TLV is encoded as follows:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |1|0| LDP MP Status Type (TBD)  |            Length             |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                             Value                             |
    ~                                                               ~
    |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

LDP MP Status Type: The LDP MP Status Type, to be assigned by IANA.

Length: Length of the LDP MP Status Value in octets.

Value: One or more LDP MP Status Value Elements.

5.1. The LDP MP Status Value Element
The LDP MP Status Value Element that is included in the LDP MP Status
TLV Value has the following encoding.
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Type (TBD)    |    Length                     | Value ...     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
    ~                                                               ~
    |                                                               |
    |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Type: The type of the LDP MP Status Value Element is to be assigned
by IANA.
Length: The length of the Value field, in octets.
Value: String of Length octets, to be interpreted as specified by
the Type field.
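The two encodings above, a Status Value Element (1-octet Type,
2-octet Length, Value) carried inside the LDP MP Status TLV (U=1,
F=0, 14-bit TLV type), can be sketched as follows. Both type codes
are "TBD IANA", so the numeric values below are placeholders only.

```python
import struct

# Sketch of the LDP MP Status encodings. Type codes 1 and 0x096F
# are placeholder values pending IANA assignment.

def encode_status_value_element(elem_type, value):
    """1-octet Type, 2-octet Length, then Length octets of Value."""
    return struct.pack("!BH", elem_type, len(value)) + value

def encode_ldp_mp_status_tlv(tlv_type, elements):
    """Wrap one or more Status Value Elements in the outer TLV:
    U=1, F=0, 14-bit type, then 2-octet length of the value."""
    body = b"".join(elements)
    return struct.pack("!HH", 0x8000 | tlv_type, len(body)) + body

elem = encode_status_value_element(1, b"\x01\x02")
tlv = encode_ldp_mp_status_tlv(0x096F, [elem])
```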
5.2. LDP Messages containing LDP MP Status messages
The LDP MP status message may appear either in a Label Mapping
message or an LDP Notification message.
5.2.1. LDP MP Status sent in LDP notification messages
An LDP MP Status TLV sent in a Notification message must be
accompanied by a Status TLV. The general format of the Notification
message with an LDP MP Status TLV is:
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |0|   Notification (0x0001)     |        Message Length         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                          Message ID                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                          Status TLV                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                       LDP MP Status TLV                       |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                    Optional LDP MP FEC TLV                    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                      Optional Label TLV                       |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
The Status TLV status code is used to indicate that an LDP MP Status
TLV and additional information follow in the Notification message's
"Optional Parameters" section. Depending on the actual contents of
the LDP MP Status TLV, an LDP P2MP or MP2MP FEC TLV and a Label TLV
may also be present to provide context to the LDP MP Status TLV.
(NOTE: the Status Code is pending IANA assignment.)
Since the notification does not refer to any particular message, the
Message Id and Message Type fields are set to 0.
5.2.2. LDP MP Status TLV in Label Mapping Message
An example of the Label Mapping message defined in RFC 3036 is shown
below to illustrate the message with an optional LDP MP Status TLV
present.
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |0|  Label Mapping (0x0400)     |        Message Length         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                          Message ID                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                            FEC TLV                            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                           Label TLV                           |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                  Optional LDP MP Status TLV                   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                Additional Optional Parameters                 |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
6. Upstream label allocation on a LAN
On a LAN the upstream LSR will send a copy of the packet to each
receiver individually. If there is more than one receiver on the LAN
we do not take full advantage of the multi-access capability of the
network. We may optimize the bandwidth consumption on the LAN and
the replication overhead on the upstream LSR by using upstream label
allocation [4]. Procedures on how to distribute upstream labels
using LDP are documented in [5].
6.1. LDP Multipoint-to-Multipoint on a LAN
The procedure to allocate a context label on a LAN is defined in [4].
That procedure results in each LSR on a given LAN having a context
label which, on that LAN, can be used to identify itself uniquely.
Each LSR advertises its context label as an upstream-assigned label,
following the procedures of [5]. Any LSR for which the LAN is a
downstream link on some P2MP or MP2MP LSP will allocate an upstream-
assigned label identifying that LSP. When the LSR forwards a packet
downstream on one of those LSPs, the packet's top label must be the
LSR's context label, and the packet's second label is the label
identifying the LSP. We will call the top label the "upstream LSR
label" and the second label the "LSP label".
6.1.1. MP2MP downstream forwarding
The downstream path of a MP2MP LSP is much like a normal P2MP LSP, so
we will use the same procedures as defined in [5]. A Label Request
for an LSP label is sent to the upstream LSR. The Label Mapping that
is received from the upstream LSR contains the LSP label for the
MP2MP FEC and the upstream LSR context label. The MP2MP downstream
path (corresponding to the LSP label) will be installed in the
context-specific forwarding table corresponding to the upstream LSR
label. Packets sent by the upstream router can be forwarded
downstream using this forwarding state based on a two-label lookup.
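The two-label lookup described above can be sketched as follows: the
top (upstream LSR) label selects the upstream LSR's context-specific
forwarding table, and the second (LSP) label selects the entry within
it. The table layout and label values are illustrative assumptions.

```python
# Sketch of the two-label lookup for MP2MP downstream forwarding on
# a LAN. context_tables maps an upstream LSR's context label to that
# LSR's context-specific forwarding table, which in turn maps the
# upstream-assigned LSP label to downstream branches.

def lan_lookup(context_tables, upstream_lsr_label, lsp_label):
    """Return the downstream branches for a packet received on the
    LAN, or None if either label is unknown."""
    table = context_tables.get(upstream_lsr_label)
    if table is None:
        return None
    return table.get(lsp_label)

# The upstream LSR advertised context label 500; LSP label 20 was
# upstream-assigned for the MP2MP FEC.
context_tables = {500: {20: [("I1", 101), ("I2", 102)]}}
branches = lan_lookup(context_tables, 500, 20)
```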
6.1.2. MP2MP upstream forwarding
A MP2MP LSP also has an upstream forwarding path. Upstream packets
need to be forwarded in the direction of the root and downstream on
any node on the LAN that has a downstream interface for the LSP. For
a given MP2MP LSP on a given LAN, exactly one LSR is considered to be
the upstream LSR. If an LSR on the LAN receives a packet from one of
its downstream interfaces for the LSP, and if it needs to forward the
packet onto the LAN, it ensures that the packet's top label is the
context label of the upstream LSR, and that its second label is the
LSP label that was assigned by the upstream LSR.
Other LSRs receiving the packet will not be able to tell whether the
packet really came from the upstream router, but that makes no
difference in the processing of the packet. The upstream LSR will
see its own upstream LSR label on the packet, and this will enable it
to determine that the packet is traveling upstream.
7. Root node redundancy
The root node is a single point of failure for an MP LSP, whether
this is P2MP or MP2MP. The problem is particularly severe for MP2MP
LSPs. In the case of MP2MP LSPs, all leaf nodes must use the same
root node to set up the MP2MP LSP, because otherwise the traffic
sourced by some leafs is not received by others. Because the root
node is the single point of failure for an MP LSP, we need a fast and
efficient mechanism to recover from a root node failure.
An MP LSP is uniquely identified in the network by the opaque value
and the root node address. It is likely that the root node for an MP
LSP is defined statically. The root node address may be configured
on each leaf statically or learned using a dynamic protocol. How
leafs learn about the root node is out of the scope of this document.
Suppose that for the same opaque value we define two (or more) root
node addresses and we build a tree to each root using the same opaque
value. Effectively these will be treated as different MP LSPs in the
network. Once the trees are built, the procedures differ for P2MP
and MP2MP LSPs. The different procedures are explained in the
sections below.

7.1. Root node redundancy - procedures for P2MP LSPs

Since all leafs have set up P2MP LSPs to all the roots, they are
prepared to receive packets on any one of these LSPs. However, only
one of the roots should be forwarding traffic at any given time, for
the following reasons: 1) to achieve bandwidth savings in the
network and 2) to ensure that the receiving leafs don't receive
duplicate packets (since one cannot assume that the receiving leafs
are able to discard duplicates). How the roots determine which one
is the active sender is outside the scope of this document.

7.2. Root node redundancy - procedures for MP2MP LSPs

Since all leafs have set up an MP2MP LSP to each one of the root
nodes for this opaque value, a sending leaf may pick any of the two
(or more) MP2MP LSPs to forward a packet on. The leaf nodes receive
the packet on one of the MP2MP LSPs. The client of the MP2MP LSP
does not care on which MP2MP LSP the packet is received, as long as
the LSPs are for the same opaque value. The sending leaf MUST only
forward a packet on one MP2MP LSP at a given point in time. The
receiving leafs are unable to discard duplicate packets because they
accept on all LSPs. Using all the available MP2MP LSPs we can
implement redundancy using the following procedures.

A sending leaf selects a single root node out of the available roots
for a given opaque value. A good strategy MAY be to look at the
unicast routing table and select the root that is closest in terms
of the unicast metric. As soon as the address of the active root
disappears from the unicast routing table (or becomes less
attractive) due to root node or link failure, the leaf can select a
new best root address and start forwarding to it directly. If
multiple root nodes have the same unicast metric, the highest root
node address MAY be selected, or per-session load balancing MAY be
done over the root nodes.

All leafs participating in an MP2MP LSP MUST join all the available
root nodes for a given opaque value. Since the sending leaf may pick
any of the MP2MP LSPs, every leaf must be prepared to receive on any
of them.

The advantage of pre-building multiple MP2MP LSPs for a single opaque
value is that convergence from a root node failure happens as fast as
the unicast routing protocol is able to notify the leafs. There is
no need for an additional protocol to advertise to the leaf nodes
which root node is the active root. The root selection is a local
leaf policy that does not need to be coordinated with other leafs.
The disadvantage of pre-building multiple MP2MP LSPs is that more
label resources are used, depending on how many root nodes are
defined.
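The leaf's local root-selection policy described above (closest root by unicast metric, ties broken by highest address) can be sketched as follows. The function name and data shapes are illustrative assumptions, not part of the protocol.

```python
# Illustrative sketch of the leaf-local root selection policy: choose the
# reachable root with the lowest unicast metric, breaking ties in favor of
# the highest root address. IPv4 dotted-quad addresses are assumed.

def select_root(roots, unicast_metrics):
    """roots: candidate root addresses; unicast_metrics: address -> metric.
    Addresses absent from the metric table are treated as unreachable."""
    candidates = [r for r in roots if r in unicast_metrics]
    if not candidates:
        return None  # no reachable root: nothing to forward on
    # Sort key: lowest metric first; among equal metrics, the negated
    # address octets make the numerically highest address compare smallest.
    return min(candidates,
               key=lambda r: (unicast_metrics[r],
                              [-int(o) for o in r.split(".")]))
```

When the active root disappears from the routing table, re-running this selection over the remaining roots yields the new best root immediately.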
8. Make Before Break (MBB)

An LSR selects as its upstream LSR for an MP LSP the LSR that is its
next hop to the root of the LSP. When the best path to reach the
root changes the LSR must choose a new upstream LSR. Section 2.4.2.4
and Section 4.3.2.4 describe these procedures.

When the best path to the root changes the LSP may be broken
temporarily, resulting in packet loss until the LSP "reconverges" to
a new upstream LSR. The goal of MBB when this happens is to keep the
duration of packet loss as short as possible. In addition, there are
scenarios where the best path from the LSR to the root changes but
the LSP continues to forward packets to the previous next hop to the
root. That may occur when a link comes up or routing metrics change.
In such a case a new LSP should be established before the old LSP is
removed to limit the duration of packet loss. The procedures
described below deal with both scenarios in a way that an LSR does
not need to know which of the events described above caused its
upstream router for an MBB LSP to change.

These MBB procedures are an optional extension to the MP LSP building
procedures described in this draft.
8.1. MBB overview

The MBB procedures use additional LDP signaling.

Suppose some event causes a downstream LSR-D to select a new upstream
LSR-U for FEC-A. The new LSR-U may already be forwarding packets for
FEC-A; that is, to downstream LSRs other than LSR-D. After LSR-U
receives a label for FEC-A from LSR-D, it will notify LSR-D when it
knows that the LSP for FEC-A has been established from the root to
itself. When LSR-D receives this MBB notification it will change its
next hop for the LSP root to LSR-U.

The assumption is that if LSR-U has received an MBB notification from
its upstream router for the FEC-A LSP and has installed forwarding
state for the LSP, it is capable of forwarding packets on the LSP.
At that point LSR-U should signal LSR-D by means of an MBB
notification that it has become part of the tree identified by FEC-A
and that LSR-D should initiate its switchover to the LSP.

At LSR-U the LSP for FEC-A may be in 1 of 3 states:

1. There is no state for FEC-A.

2. State for FEC-A exists and LSR-U is waiting for the MBB
   notification that the LSP from the root to it exists.

3. State for FEC-A exists and the MBB notification has been
   received.

After LSR-U receives LSR-D's Label Mapping message for FEC-A, LSR-U
MUST NOT reply with an MBB notification to LSR-D until its state for
the LSP is state #3 above. If the state of the LSP at LSR-U is state
#1 or #2, LSR-U should remember receipt of the Label Mapping message
from LSR-D while waiting for an MBB notification from its upstream
LSR for the LSP. When LSR-U receives the MBB notification from its
upstream LSR it transitions to LSP state #3 and sends an MBB
notification to LSR-D.
8.2. The MBB Status code
As noted in Section 8.1, the procedures to establish an MBB MP LSP
are different from those to establish normal MP LSPs.
When a downstream LSR sends a Label Mapping message for an MP LSP to
its upstream LSR it MAY include an LDP MP Status TLV that carries an
MBB Status Code to indicate that MBB procedures apply to the LSP.
This new MBB Status Code MAY also appear in an LDP Notification
message used by an upstream LSR to signal LSP state #3 to the
downstream LSR; that is, that the upstream LSR's state for the LSP
exists and that it has received notification from its upstream LSR
that the LSP is in state #3.
The MBB Status is a type of the LDP MP Status Value Element as
described in Section 5.1. It is encoded as follows:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| MBB Type = 1 | Length = 1 | Status code |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
MBB Type: Type 1 (to be assigned by IANA)
Length: 1
Status code: 1 = MBB request
2 = MBB ack
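As a sketch, the element above can be packed and parsed with Python's `struct` module: one octet of MBB Type, a two-octet Length, and a one-octet Status code, matching the bit positions in the figure. This is illustrative, not a conformant LDP implementation.

```python
import struct

# Encode/decode the MBB Status Value Element shown above:
# MBB Type = 1 (1 octet), Length = 1 (2 octets), Status code (1 octet).
MBB_TYPE = 1
MBB_REQUEST, MBB_ACK = 1, 2

def encode_mbb_status(status_code):
    # "!BHB" = network byte order: uint8, uint16, uint8
    return struct.pack("!BHB", MBB_TYPE, 1, status_code)

def decode_mbb_status(data):
    mbb_type, length, status = struct.unpack("!BHB", data[:4])
    assert mbb_type == MBB_TYPE and length == 1
    return status
```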
8.3. The MBB capability
An LSR MAY advertise that it is capable of handling MBB LSPs using
the capability advertisement as defined in [6]. The LDP MP MBB
capability has the following format:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|0| LDP MP MBB Capability | Length = 1 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1| Reserved |
+-+-+-+-+-+-+-+-+
Note: LDP MP MBB Capability (Pending IANA assignment)
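A hedged sketch of building this capability TLV follows: the first 16 bits carry the U-bit (set), the F-bit (clear), and the 14-bit capability code point, followed by a two-octet Length of 1 and a single value octet with the S-bit set. The code point 0x050A is the value requested in the IANA section, not yet assigned.

```python
import struct

# Build the LDP MP MBB Capability TLV shown above. The code point is
# pending IANA assignment; 0x050A is the requested value. Illustrative only.
MBB_CAPABILITY = 0x050A  # requested, not yet assigned by IANA

def encode_mbb_capability():
    u_bit, f_bit = 1, 0
    # 16-bit word: U (bit 15), F (bit 14), 14-bit capability code point.
    first_word = (u_bit << 15) | (f_bit << 14) | MBB_CAPABILITY
    value = 0x80  # S-bit set, remaining 7 bits reserved (zero)
    return struct.pack("!HHB", first_word, 1, value)
```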
If an LSR has not advertised that it is MBB capable, its LDP peers
MUST NOT send it messages which include MBB parameters. If an LSR
receives a Label Mapping message with an MBB parameter from
downstream LSR-D and its upstream LSR-U has not advertised that it is
MBB capable, the LSR MUST send an MBB notification immediately to
LSR-D (see Section 8.4). If this happens an MBB MP LSP will not be
established, but a normal MP LSP will be the result.
8.4. The MBB procedures
8.4.1. Terminology
1. MBB LSP <X, Y>: A P2MP or MP2MP Make Before Break (MBB) LSP entry
with Root Node Address X and Opaque Value Y.
2. A(N, L): An Accepting element that consists of an upstream
Neighbor N and Local label L. This LSR assigned label L to
neighbor N for a specific MBB LSP. For an active element the
corresponding Label is stored in the label forwarding database.
3. iA(N, L): An inactive Accepting element that consists of an
upstream neighbor N and local Label L. This LSR assigned label L
to neighbor N for a specific MBB LSP. For an inactive element
the corresponding Label is not stored in the label forwarding
database.
4. F(N, L): A Forwarding state that consists of downstream Neighbor
   N and Label L. This LSR is sending labeled packets with label L
   to neighbor N for a specific FEC.

5. F'(N, L): A Forwarding state that has been marked for sending an
   MBB Notification message to Neighbor N with Label L.

6. MBB Notification <X, Y, L>: An LDP Notification message with an
   MP LSP <X, Y>, Label L and MBB Status code 2.

7. MBB Label Map <X, Y, L>: A P2MP Label Map or MP2MP Label Map
   downstream with a FEC element <X, Y>, Label L and MBB Status code
   1.
8.4.2. Accepting elements
An accepting element represents a specific label value L that has
been advertised to a neighbor N for a MBB LSP <X, Y> and is a
candidate for accepting labels switched packets on. An LSR can have
two accepting elements for a specific MBB LSP <X, Y> LSP, only one of
them MUST be active. An active element is the element for which the
label value has been installed in the label forwarding database. An
inactive accepting element is created after a new upstream LSR is
chosen and is pending to replace the active element in the label
forwarding database. Inactive elements only exist temporarily while
switching to a new upstream LSR. Once the switch has been completed
only one active element remains. During network convergence it is
possible that an inactive accepting element is created while an other
inactive accepting element is pending. If that happens the older
inactive accepting element MUST be replaced with an newer inactive
element. If an accepting element is removed a Label Withdraw has to
be send for label L to neighbor N for <X, Y>.
8.4.3. Procedures for upstream LSR change
Suppose a node Z has a MBB LSP <X, Y> with an active accepting
element A(N1, L1). Due to a routing change it detects a new best
path for root X and selects a new upstream LSR N2. Node Z allocates
a new local label L2 and creates an inactive accepting element iA(N2,
L2). Node Z sends MBB Label Map <X, Y, L2>to N2 and waits for the
new upstream LSR N2 to respond with a MBB Notification for <X, Y,
L2>. During this transition phase there are two accepting elements,
the element A(N1, L1) still accepting packets from N1 over label L1
and the new inactive element iA(N2, L2).
While waiting for the MBB Notification from upstream LSR N2, it is
possible that an other transition occurs due to a routing change.
Suppose the new upstream LSR is N3. An inactive element iA(N3, L3)
is created and the old inactive element iA(N2, L2) MUST be removed.
A label withdraw MUST be sent to N2 for <X, Y, L2&gt. The MBB
Notification for <X, Y, L2> from N2 will be ignored because the
inactive element is removed.
It is possible that the MBB Notification from the upstream LSR is
never received due to link or node failure. To prevent waiting
indefinitely for the MBB Notification, a timeout SHOULD be applied.
As soon as the timer expires, the procedures in Section 8.4.5 are
applied as if an MBB Notification had been received for the inactive
element.
8.4.4. Receiving a Label Map with MBB status code
Suppose node Z has state for an MBB LSP <X, Y> and receives an MBB
Label Map <X, Y, L2> from N2. A new forwarding state F(N2, L2) will
be added to the MP LSP if it does not already exist. If this MBB LSP
has an active accepting element, or if node Z is the root of the MBB
LSP, an MBB Notification <X, Y, L2> is sent to node N2. If node Z
has an inactive accepting element it marks the Forwarding state as
<X, Y, F'(N2, L2)>.
8.4.5. Receiving a Notification with MBB status code
Suppose node Z receives an MBB Notification <X, Y, L> from N. If
node Z has state for MBB LSP <X, Y> and an inactive accepting element
iA(N, L) that matches N and L, it activates this accepting element
and installs label L in the label forwarding database. If another
active accepting element was present it is removed from the label
forwarding database.

If this MBB LSP <X, Y> also has Forwarding states marked for sending
MBB Notifications, like <X, Y, F'(N2, L2)>, MBB Notifications are
sent to these downstream LSRs. If node Z receives an MBB
Notification for an accepting element that is not inactive or does
not match the Label value and Neighbor address, the MBB Notification
is ignored.
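The notification-handling procedure above can be sketched as follows. The data structures here are hypothetical; the draft does not prescribe any particular representation.

```python
# Sketch of Section 8.4.5: on receiving MBB Notification <X, Y, L> from
# neighbor N, activate the matching inactive accepting element iA(N, L),
# install it in the label forwarding database in place of the old active
# element, and relay MBB notifications to downstream states marked F'(N, L).
# All structure/field names are illustrative assumptions.

def handle_mbb_notification(lsp, neighbor, label, send_notification):
    ia = lsp.get("inactive_accepting")          # iA(N, L) or None
    if ia != (neighbor, label):
        return  # not inactive or mismatched N/L: ignore the notification
    # Activation: the inactive element replaces the active one.
    lsp["active_accepting"] = ia
    lsp["inactive_accepting"] = None
    # Relay MBB notifications to downstream LSRs marked F'(N2, L2).
    for n2, l2 in lsp.get("marked_forwarding", []):
        send_notification(n2, l2)
    lsp["marked_forwarding"] = []
```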
8.4.6. Node operation for MP2MP LSPs
The procedures described above apply to the downstream path of an
MP2MP LSP. The upstream path of the MP2MP LSP is set up as normal,
without including an MBB Status code. If the MBB procedures apply to
an MP2MP downstream FEC element, the upstream path to a node N is
only installed in the label forwarding database if node N is part of
the active accepting element. If node N is part of an inactive
accepting element, the upstream path is installed when this inactive
accepting element is activated.
9. Security Considerations
The same security considerations apply as for the base LDP
specification, as described in [1].
10. IANA considerations

This document creates a new name space (the LDP MP Opaque Value
Element type) that is to be managed by IANA, and requests the
allocation of the value 1 for the type of Generic LSP Identifier.

This document requires allocation of three new LDP FEC Element types:

1. the P2MP FEC type - requested value 0x04

2. the MP2MP-up FEC type - requested value 0x05

3. the MP2MP-down FEC type - requested value 0x06

This document requires the assignment of three new code points for
three new Capability Parameter TLVs, corresponding to the
advertisement of the P2MP, MP2MP and MBB capabilities. The values
requested are:

P2MP Capability Parameter - requested value 0x0508

MP2MP Capability Parameter - requested value 0x0509

MBB Capability Parameter - requested value 0x050A

This document requires the assignment of an LDP Status TLV code to
indicate that an LDP MP Status TLV follows in the Notification
message. The value requested is:

LDP MP status - requested value 0x0000002C

This document requires the assignment of a new code point for an LDP
MP Status TLV. The value requested is:

LDP MP Status TLV Type - requested value 0x096D

This document creates a new name space (the LDP MP Status Value
Element type) that is to be managed by IANA, and requests the
allocation of the value 1 for the type of MBB Status.
11. Acknowledgments
The authors would like to thank the following individuals for their
review and contribution: Nischal Sheth, Yakov Rekhter, Rahul
Aggarwal, Arjen Boers, Eric Rosen, Nidhi Bhaskar, Toerless Eckert,
George Swallow, Jin Lizhong and Vanson Lim.
12. Contributing authors
Below is a list of the contributing authors in alphabetical order:

Shane Amante
Level 3 Communications, LLC
1025 Eldorado Blvd
Broomfield, CO 80021
US
Email: Shane.Amante@Level3.com

Luyuan Fang
Cisco Systems
300 Beaver Brook Road
Boxborough, MA 01719
US
Email: lufang@cisco.com

Hitoshi Fukuda
NTT Communications Corporation
1-1-6, Uchisaiwai-cho, Chiyoda-ku
Tokyo 100-8019,
Japan
Email: hitoshi.fukuda@ntt.com

Yuji Kamite
NTT Communications Corporation
Tokyo Opera City Tower
3-20-2 Nishi Shinjuku, Shinjuku-ku,
Tokyo 163-1421,
Japan
Email: y.kamite@ntt.com

Kireeti Kompella
Juniper Networks
1194 N. Mathilda Ave.
Sunnyvale, CA 94089
US
Email: kireeti@juniper.net

Ina Minei
Juniper Networks
1194 N. Mathilda Ave.

skipping to change at page 32, line 4

300 Beaver Brook Road
Boxborough, MA, 01719
E-mail: rhthomas@cisco.com

Lei Wang
Telenor
Snaroyveien 30
Fornebu 1331
Norway
Email: lei.wang@telenor.com

IJsbrand Wijnands
Cisco Systems, Inc.
De kleetlaan 6a
1831 Diegem
Belgium
E-mail: ice@cisco.com
13. References

13.1. Normative References

[1] Andersson, L., Doolan, P., Feldman, N., Fredette, A., and B.
    Thomas, "LDP Specification", RFC 3036, January 2001.

[2] Bradner, S., "Key words for use in RFCs to Indicate Requirement
    Levels", BCP 14, RFC 2119, March 1997.

[3] Reynolds, J., "Assigned Numbers: RFC 1700 is Replaced by an On-
    line Database", RFC 3232, January 2002.

[4] Aggarwal, R., "MPLS Upstream Label Assignment and Context
    Specific Label Space", draft-ietf-mpls-upstream-label-02 (work
    in progress), March 2007.

[5] Aggarwal, R. and J. Roux, "MPLS Upstream Label Assignment for
    LDP", draft-ietf-mpls-ldp-upstream-01 (work in progress),
    March 2007.

[6] Thomas, B., "LDP Capabilities",
    draft-ietf-mpls-ldp-capabilities-00 (work in progress),
    May 2007.

13.2. Informative References

[7] Andersson, L. and E. Rosen, "Framework for Layer 2 Virtual
    Private Networks (L2VPNs)", RFC 4664, September 2006.

[8] Aggarwal, R., Papadimitriou, D., and S. Yasukawa, "Extensions
    to Resource Reservation Protocol - Traffic Engineering
    (RSVP-TE) for Point-to-Multipoint TE Label Switched Paths
    (LSPs)", RFC 4875, May 2007.

[9] Roux, J., "Requirements for Point-To-Multipoint Extensions to
    the Label Distribution Protocol",
    draft-ietf-mpls-mp-ldp-reqs-02 (work in progress), March 2007.

[10] Rosen, E. and R. Aggarwal, "Multicast in MPLS/BGP IP VPNs",
     draft-ietf-l3vpn-2547bis-mcast-00 (work in progress),
     June 2005.
Authors' Addresses

Ina Minei
Juniper Networks
1194 N. Mathilda Ave.
Sunnyvale, CA 94089
US
Email: ina@juniper.net

skipping to change at page 34, line 7

Bob Thomas
Cisco Systems, Inc.
300 Beaver Brook Road
Boxborough 01719
US
Email: rhthomas@cisco.com
Full Copyright Statement

Copyright (C) The IETF Trust (2007).

This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors
retain all their rights.

This document and the information contained herein are provided on an
"AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed to
pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; nor does it represent that it has
made any independent effort to identify any such rights. Information
 End of changes. 110 change blocks. 
229 lines changed or deleted 702 lines changed or added

This html diff was produced by rfcdiff 1.34. The latest version is available from http://tools.ietf.org/tools/rfcdiff/