INTERNET-DRAFT                                                T. Maufer
Category: Informational                                      C. Semeria
                                                       3Com Corporation
                                                             March 1997


                  Introduction to IP Multicast Routing
              <draft-ietf-mboned-intro-multicast-01.txt>
Status of this Memo

This document is an Internet Draft. Internet Drafts are working
documents of the Internet Engineering Task Force (IETF), its Areas, and
its Working Groups. Note that other groups may also distribute working
documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six months.
Internet Drafts may be updated, replaced, or obsoleted by other
documents at any time.
[...]
ftp.isi.edu (US West Coast)
munnari.oz.au (Pacific Rim)
FOREWORD

This document is introductory in nature. We have not attempted to
describe every detail of each protocol, rather to give a concise
overview in all cases, with enough specifics to allow a reader to grasp
the essential details and operation of protocols related to multicast
IP. Every effort has been made to ensure the accurate representation of
any cited works, especially any works-in-progress. For the complete
details, we refer you to the relevant specification(s).

If Internet Drafts are cited in this document, it is only because they
are the only sources of certain technical information at the time of
this writing. We expect that many of the Internet Drafts which we have
cited will eventually become RFCs. See the shadow directories above for
the status of any of these drafts, their follow-on drafts, or possibly
the resulting RFCs.
ABSTRACT

The first part of this paper describes the benefits of multicasting,
the MBone, Class D addressing, and the operation of the Internet Group
Management Protocol (IGMP). The second section explores a number of
different techniques that may potentially be employed by multicast
routing protocols:

o Flooding
o Spanning Tree
o Reverse Path Broadcasting (RPB)
o Truncated Reverse Path Broadcasting (TRPB)
o Reverse Path Multicasting (RPM)
o "Shared-Tree" Techniques

The third part contains the main body of the paper. It describes how
these techniques are implemented in multicast routing protocols
available today (or under development):

o Distance Vector Multicast Routing Protocol (DVMRP)
o Multicast Extensions to OSPF (MOSPF)
o Protocol-Independent Multicast - Dense Mode (PIM-DM)
o Protocol-Independent Multicast - Sparse Mode (PIM-SM)
o Core Based Trees (CBT)
Table of Contents

Section

1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . INTRODUCTION
1.1 . . . . . . . . . . . . . . . . . . . . . . . . . Multicast Groups
1.2 . . . . . . . . . . . . . . . . . . . . . Group Membership Protocol
1.3 . . . . . . . . . . . . . . . . . . . . Multicast Routing Protocols
1.3.1 . . . . . . . . . . . Multicast Routing vs. Multicast Forwarding
[...]
4.3 . . . . . . . . . Transmission and Delivery of Multicast Datagrams
5 . . . . . . . . . . . . . . INTERNET GROUP MANAGEMENT PROTOCOL (IGMP)
5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . IGMP Version 1
5.2 . . . . . . . . . . . . . . . . . . . . . . . . . . IGMP Version 2
5.3 . . . . . . . . . . . . . . . . . . . . . . . . . . IGMP Version 3
6 . . . . . . . . . . . . . . . . . . . MULTICAST FORWARDING TECHNIQUES
6.1 . . . . . . . . . . . . . . . . . . . . . "Simpleminded" Techniques
6.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Flooding
6.1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . Spanning Tree
6.2 . . . . . . . . . . . . . . . . . . . Source-Based Tree Techniques
6.2.1 . . . . . . . . . . . . . . . . . Reverse Path Broadcasting (RPB)
6.2.1.1 . . . . . . . . . . . . . Reverse Path Broadcasting: Operation
6.2.1.2 . . . . . . . . . . . . . . . . . RPB: Benefits and Limitations
6.2.2 . . . . . . . . . . . Truncated Reverse Path Broadcasting (TRPB)
6.2.3 . . . . . . . . . . . . . . . . . Reverse Path Multicasting (RPM)
6.2.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . Operation
6.2.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . Limitations
6.3 . . . . . . . . . . . . . . . . . . . . . . Shared Tree Techniques
6.3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Operation
6.3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Benefits
6.3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . Limitations
7 . . . . . . . . . . . . . . . . . . . "DENSE MODE" ROUTING PROTOCOLS
7.1 . . . . . . . . Distance Vector Multicast Routing Protocol (DVMRP)
7.1.1 . . . . . . . . . . . . . . . . . Physical and Tunnel Interfaces
7.1.2 . . . . . . . . . . . . . . . . . . . . . . . . . Basic Operation
7.1.3 . . . . . . . . . . . . . . . . . . . . . DVMRP Router Functions
7.1.4 . . . . . . . . . . . . . . . . . . . . . . . DVMRP Routing Table
7.1.5 . . . . . . . . . . . . . . . . . . . . . DVMRP Forwarding Table
7.2 . . . . . . . . . . . . . . . Multicast Extensions to OSPF (MOSPF)
7.2.1 . . . . . . . . . . . . . . . . . . Intra-Area Routing with MOSPF
7.2.1.1 . . . . . . . . . . . . . . . . . . . . . Local Group Database
7.2.1.2 . . . . . . . . . . . . . . . . . Datagram's Shortest Path Tree
7.2.1.3 . . . . . . . . . . . . . . . . . . . . . . . Forwarding Cache
7.2.2 . . . . . . . . . . . . . . . . . . Mixing MOSPF and OSPF Routers
7.2.3 . . . . . . . . . . . . . . . . . . Inter-Area Routing with MOSPF
7.2.3.1 . . . . . . . . . . . . . . . . Inter-Area Multicast Forwarders
7.2.3.2 . . . . . . . . . . . Inter-Area Datagram's Shortest Path Tree
7.2.4 . . . . . . . . . Inter-Autonomous System Multicasting with MOSPF
7.3 . . . . . . . . . . . . . . . Protocol-Independent Multicast (PIM)
7.3.1 . . . . . . . . . . . . . . . . . . . . PIM - Dense Mode (PIM-DM)
8 . . . . . . . . . . . . . . . . . . . "SPARSE MODE" ROUTING PROTOCOLS
8.1 . . . . . . . Protocol-Independent Multicast - Sparse Mode (PIM-SM)
8.1.1 . . . . . . . . . . . . . . Directly Attached Host Joins a Group
8.1.2 . . . . . . . . . . . . Directly Attached Source Sends to a Group
8.1.3 . . . . . . . Shared Tree (RP-Tree) or Shortest Path Tree (SPT)?
8.1.4 . . . . . . . . . . . . . . . . . . . . . . . . Unresolved Issues
8.2 . . . . . . . . . . . . . . . . . . . . . . Core Based Trees (CBT)
8.2.1 . . . . . . . . . . . . . . . . . . Joining a Group's Shared Tree
8.2.2 . . . . . . . . . . . . . . . . . . . . . Data Packet Forwarding
8.2.3 . . . . . . . . . . . . . . . . . . . . . . . Non-Member Sending
8.2.4 . . . . . . . . . . . . . . . . . CBT Multicast Interoperability
9 . . . . . . INTEROPERABILITY FRAMEWORK FOR MULTICAST BORDER ROUTERS
9.1 . . . . . . . . . . . . . Requirements for Multicast Border Routers
10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . REFERENCES
10.1 . . . . . . . . . . . . . . . . . . . Requests for Comments (RFCs)
10.2 . . . . . . . . . . . . . . . . . . . . . . . . . Internet-Drafts
10.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . Textbooks
10.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Other
11 . . . . . . . . . . . . . . . . . . . . . . SECURITY CONSIDERATIONS
12 . . . . . . . . . . . . . . . . . . . . . . . . . . ACKNOWLEDGEMENTS
13 . . . . . . . . . . . . . . . . . . . . . . . . . AUTHORS' ADDRESSES
1. INTRODUCTION

There are three fundamental types of IPv4 addresses: unicast,
broadcast, and multicast. A unicast address is used to transmit a
packet to a single destination. A broadcast address is used to send a
datagram to an entire subnetwork. A multicast address is designed to
enable the delivery of datagrams to a set of hosts that have been
configured as members of a multicast group across various
subnetworks.
Multicasting is not connection-oriented. A multicast datagram is
delivered to destination group members with the same "best-effort"
reliability as a standard unicast IP datagram.

[...]
=======================================================================

1.3 Multicast Routing Protocols

Multicast routers execute a multicast routing protocol to define
delivery paths that enable the forwarding of multicast datagrams
across an internetwork.
1.3.1 Multicast Routing vs. Multicast Forwarding

Multicast routing protocols establish or help establish the
distribution tree for a given group, which enables multicast forwarding
of packets addressed to the group. In the case of unicast, routing
protocols are also used to build a forwarding table (commonly called a
routing table). Unicast destinations are entered in the routing table,
and associated with a metric and a next-hop router toward the
destination. The key difference between unicast forwarding and
multicast forwarding is that multicast packets must be forwarded away
from their source. If a packet is ever forwarded back toward its
source, a forwarding loop could form, possibly leading to a multicast
"storm."

Each routing protocol constructs a forwarding table in its own way. The
forwarding table tells each router that, for a certain source, or for a
given source sending to a certain group (a (source, group) pair),
packets are expected to arrive on a certain "inbound" or "upstream"
interface and must be copied to a certain set of "outbound" or
"downstream" interfaces in order to reach all known subnetworks with
group members.
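The per-(source, group) state described above can be sketched as a
small lookup table. This is a minimal illustration, not taken from any
protocol specification; the addresses, interface names, and topology
are hypothetical.

```python
# Hypothetical multicast forwarding state: each (source, group) pair maps
# to the expected upstream interface and the set of downstream interfaces.
FORWARDING_TABLE = {
    ("192.0.2.1", "224.1.1.1"): {"iif": "eth0", "oifs": ["eth1", "eth2"]},
}

def forward(source, group, arrival_if):
    """Return the interfaces a multicast packet should be replicated onto."""
    entry = FORWARDING_TABLE.get((source, group))
    if entry is None:
        return []            # no distribution-tree state for this pair
    if arrival_if != entry["iif"]:
        return []            # packet did not arrive from the source's
                             # direction; dropping it prevents a loop
    return entry["oifs"]     # copy onto every downstream interface
```

With this table, a packet from 192.0.2.1 to group 224.1.1.1 arriving on
eth0 is copied to eth1 and eth2, while the same packet arriving on any
other interface is silently discarded.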
2. MULTICAST SUPPORT FOR EMERGING INTERNET APPLICATIONS

Today, the majority of Internet applications rely on point-to-point
transmission. The utilization of point-to-multipoint transmission has
traditionally been limited to local area network applications. Over the
past few years the Internet has seen a rise in the number of new
applications that rely on multicast transmission. Multicast IP
conserves bandwidth by forcing the network to do packet replication
only when necessary, and offers an attractive alternative to unicast
transmission for the delivery of network ticker tapes, live stock
quotes, multiparty videoconferencing, and shared whiteboard
applications (among others). It is important to note that the
applications for IP Multicast are not solely limited to the Internet.
Multicast IP can also play an important role in large commercial
internetworks.
2.1 Reducing Network Load

Assume that a stock ticker application is required to transmit packets
to 100 stations within an organization's network. Unicast transmission
to this set of stations requires the periodic transmission of 100
packets, many of which may in fact traverse the same link(s). Multicast
transmission is the ideal solution for this type of application, since
it requires only a single packet stream to be transmitted by the
source, which is replicated at forks in the multicast delivery tree.

Broadcast transmission is not an effective solution for this type of
application, since it affects the CPU performance of each and every
station that sees the packet, and it wastes bandwidth.
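The arithmetic behind the ticker example can be made explicit with a
toy model of a link shared by all 100 receivers. The figures are
illustrative only, not measurements.

```python
# Toy model: count packets crossing a link that sits between the source
# and all of its receivers.
def packets_on_shared_link(receivers, multicast):
    # Unicast pushes one copy per receiver across the shared link;
    # multicast pushes a single copy, replicated only at tree forks
    # further downstream.
    return 1 if multicast else receivers

unicast_load = packets_on_shared_link(100, multicast=False)   # 100 packets
multicast_load = packets_on_shared_link(100, multicast=True)  # 1 packet
```

For the 100-station ticker, each transmission interval places 100
packets on the shared link with unicast, but only one with multicast.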
2.2 Resource Discovery

Some applications utilize multicast instead of broadcast transmission
to transmit packets to group members residing on the same subnetwork.
However, there is no reason to limit the extent of a multicast
transmission to a single LAN. The time-to-live (TTL) field in the IP
header can be used to limit the range (or "scope") of a multicast
transmission.
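TTL-based scoping is exposed through the standard sockets API as the
IP_MULTICAST_TTL option. The sketch below sets a TTL of 16 on a UDP
sender socket; the value 16 is an arbitrary example, not a
recommendation.

```python
import socket
import struct

# Create a UDP socket and limit the scope of multicast datagrams it
# sends: each router that forwards the packet decrements the TTL, so a
# small TTL keeps the traffic near the sender.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                struct.pack("B", 16))

# Read the option back to confirm the scope now in effect.
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL)
sock.close()
```

A datagram sent from this socket to a Class D address would be
discarded by any multicast router more than 16 hops from the source.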
2.3 Support for Datacasting Applications

Since 1992, the IETF has conducted a series of "audiocast" experiments
in which live audio and video were multicast from the IETF meeting site
to destinations around the world. In this case, "datacasting" takes
compressed audio and video signals from the source station and
transmits [...]
3. THE INTERNET'S MULTICAST BACKBONE (MBone)

The Internet Multicast Backbone (MBone) is an interconnected set of
subnetworks and routers that support the delivery of IP multicast
traffic. The goal of the MBone is to construct a semipermanent IP
multicast testbed to enable the deployment of multicast applications
without waiting for the ubiquitous deployment of multicast-capable
routers in the Internet.

The MBone has grown from 40 subnets in four different countries in
1992, to more than 3400 subnets in over 25 countries by March 1997.
With new multicast applications and multicast-based services appearing,
it seems likely that the use of multicast technology in the Internet
will keep growing at an ever-increasing rate.
The MBone is a virtual network that is layered on top of sections of
the physical Internet. It is composed of islands of multicast routing
capability connected to other islands by virtual point-to-point links
called "tunnels." The tunnels allow multicast traffic to pass through
the non-multicast-capable parts of the Internet. Tunneled IP multicast
packets are encapsulated as IP-over-IP (i.e., the protocol number is
set to 4), so they look like normal unicast packets to intervening
routers. The encapsulation is added on entry to a tunnel and stripped
off on exit from a tunnel. This set of multicast routers, their
directly-connected subnetworks, and the interconnecting tunnels
comprise the MBone.
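The IP-over-IP encapsulation used on tunnels can be illustrated by
prepending a minimal outer IPv4 header whose protocol field is 4. This
sketch omits details a real implementation needs (header checksum,
identification, fragmentation), and the tunnel endpoint addresses are
hypothetical.

```python
import socket
import struct

IPPROTO_IPIP = 4  # protocol number for IP-over-IP, as used on MBone tunnels

def encapsulate(inner_packet, tunnel_src, tunnel_dst):
    """Prepend a minimal 20-byte outer IPv4 header (checksum left at 0)."""
    outer = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,               # version 4, header length 5 words
        0,                          # type of service
        20 + len(inner_packet),     # total length
        0, 0,                       # identification, flags/fragment offset
        64,                         # outer TTL
        IPPROTO_IPIP,               # protocol 4: payload is another IP packet
        0,                          # header checksum (omitted in this sketch)
        socket.inet_aton(tunnel_src),
        socket.inet_aton(tunnel_dst))
    return outer + inner_packet

# Intervening routers see only the outer unicast header; the multicast
# packet rides untouched as the payload until the tunnel exit strips it.
packet = encapsulate(b"\x45" + b"\x00" * 19, "192.0.2.1", "198.51.100.2")
```

Byte 9 of the outer header carries the protocol number 4, which is how
the tunnel exit recognizes the payload as an encapsulated IP packet.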
========================================================================

                         ++++++++++
                        /| Island |\
                      /T/|    A   |\T\
                     /U/ ++++++++++ \U\
                    /N/      |       \N\
                   /N/       |        \N\
                  /E/        |         \E\
                 /L/         |          \L\
          ++++++++++     ++++++++++          ++++++++++
          | Island |     | Island | -------- | Island |
          |    B   |     |    C   |  Tunnel  |    D   |
          ++++++++++     ++++++++++ -------- ++++++++++
              \ \            |
               \T\           |
                \U\          |
                 \N\         |
                  \N\    ++++++++++
                   \E\   | Island |
                    \L\  |    E   |
                     \++++++++++

            Figure 2: Internet Multicast Backbone (MBone)

========================================================================
Since the MBone and the Internet have different topologies, multicast
routers execute a separate routing protocol to decide how to forward
multicast packets. The majority of the MBone routers currently use the
Distance Vector Multicast Routing Protocol (DVMRP), although some
portions of the MBone execute either Multicast OSPF (MOSPF) or the
Protocol-Independent Multicast (PIM) routing protocols. The operation
of each of these protocols is discussed later in this paper.
As multicast routing software features become more widely available on
the routers of the Internet, providers may gradually decide to use
"native" multicast as an alternative to using lots of tunnels.
The MBone carries audio and video multicasts of Internet Engineering
Task Force (IETF) meetings, NASA Space Shuttle missions, US House and
Senate sessions, and live satellite weather photos. The session
directory (SDR) tool provides users with a listing of the active
multicast sessions on the MBone and allows them to create and/or join
a session.

4. MULTICAST ADDRESSING
A multicast address is assigned to a set of receivers defining a
multicast group. Senders use the multicast address as the destination
IP address of a packet that is to be transmitted to all group members.

4.1 Class D Addresses

An IP multicast group is identified by a Class D address. Class D
addresses have their high-order four bits set to "1110" followed by
a 28-bit multicast group ID. Expressed in standard "dotted-decimal"
notation, multicast group addresses range from 224.0.0.0 to
239.255.255.255 (shorthand: 224.0.0.0/4).
Figure 3 shows the format of a 32-bit Class D address. | 1.3.1 Multicast Routing vs. Multicast Forwarding | |||
========================================================================

  0 1 2 3                                                             31
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |1|1|1|0|                  Multicast Group ID                    |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |------------------------28 bits------------------------|

              Figure 3: Class D Multicast Address Format
========================================================================
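The "high-order four bits set to 1110" test above can be sketched in a
few lines of Python. This is an illustrative helper (the function name
is ours, not from any standard API):

```python
def is_class_d(addr: str) -> bool:
    """Return True if addr falls in 224.0.0.0/4 (high-order bits 1110)."""
    first_octet = int(addr.split(".")[0])
    return (first_octet & 0xF0) == 0xE0  # 0xE0 == binary 1110 0000

# The Class D range spans 224.0.0.0 through 239.255.255.255:
print(is_class_d("224.0.0.1"))        # True
print(is_class_d("239.255.255.255"))  # True
print(is_class_d("192.0.2.1"))        # False
```

Masking the first octet with 0xF0 isolates exactly the four prefix bits
that Figure 3 shows; everything below them is the 28-bit group ID.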
The Internet Assigned Numbers Authority (IANA) maintains a list of
registered IP multicast groups. The base address 224.0.0.0 is reserved
and cannot be assigned to any group. The block of multicast addresses
ranging from 224.0.0.1 to 224.0.0.255 is reserved for permanent
assignment to various uses, including routing protocols and other
protocols that require a well-known permanent address. Multicast
routers should not forward any multicast datagram with a destination
address in this range, regardless of the packet's TTL.
Some of the well-known groups include:

    "all systems on this subnet"          224.0.0.1
    "all routers on this subnet"          224.0.0.2
    "all DVMRP routers"                   224.0.0.4
    "all OSPF routers"                    224.0.0.5
    "all OSPF designated routers"         224.0.0.6
    "all RIP2 routers"                    224.0.0.9
    "all PIM routers"                     224.0.0.13
The remaining groups, ranging from 224.0.1.0 to 239.255.255.255, are
assigned to various multicast applications or remain unassigned. From
this range, the addresses from 239.0.0.0 to 239.255.255.255 are being
reserved for site-local "administratively scoped" applications, not
Internet-wide applications.

The complete list may be found in the Assigned Numbers RFC (RFC 1700 or
its successor) or at the IANA Web Site:

    <URL:http://www.isi.edu/div7/iana/assignments.html>
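The three address ranges just described (link-local 224.0.0.0/24,
administratively scoped 239.0.0.0/8, and everything in between) can be
distinguished with a small helper. This is a sketch using Python's
standard `ipaddress` module; the scope labels are our own shorthand:

```python
import ipaddress

LINK_LOCAL = ipaddress.ip_network("224.0.0.0/24")    # never forwarded by routers
ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")   # site-local applications

def multicast_scope(addr: str) -> str:
    """Classify a multicast address into the scopes described above."""
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:
        return "not multicast"
    if ip in LINK_LOCAL:
        return "link-local (do not forward)"
    if ip in ADMIN_SCOPED:
        return "administratively scoped"
    return "globally scoped"

print(multicast_scope("224.0.0.5"))      # link-local (do not forward)
print(multicast_scope("239.1.2.3"))      # administratively scoped
print(multicast_scope("224.2.127.254"))  # globally scoped
```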
4.2 Mapping a Class D Address to an IEEE-802 MAC Address

The IANA has been allocated a reserved portion of the IEEE-802
MAC-layer multicast address space. All of the addresses in IANA's
reserved block begin with 01-00-5E (hex); to be clear, the range from
01-00-5E-00-00-00 to 01-00-5E-FF-FF-FF is reserved for IP multicast
groups. A simple procedure was developed to map Class D addresses to
this reserved MAC-layer multicast address block. This allows IP
multicasting to easily take advantage of the hardware-level
multicasting supported by network interface cards.
The mapping between a Class D IP address and an IEEE-802 (e.g., FDDI,
Ethernet) MAC-layer multicast address is obtained by placing the
low-order 23 bits of the Class D address into the low-order 23 bits of
IANA's reserved MAC-layer multicast address block. This simple
procedure removes the need for an explicit protocol for multicast
address resolution on LANs akin to ARP for unicast: all LAN stations
know this transformation, and can easily send any IP multicast over
any IEEE-802-based LAN.
Figure 4 illustrates how the multicast group address 234.138.8.5
(or EA-8A-08-05 expressed in hex) is mapped into an IEEE-802 multicast
address. Note that the high-order nine bits of the IP address are not
mapped into the MAC-layer multicast address.
Note that the mapping may place multiple IP groups into the same
IEEE-802 address, because the upper five bits of the IP Class D
address are not used. Thus, there is a 32-to-1 ratio of IP Class D
addresses to valid MAC-layer multicast addresses. In practice, there
is a small chance of collisions, should multiple groups happen to pick
Class D addresses that map to the same MAC-layer multicast address.
However, chances are that higher-layer protocols will let hosts
determine which packets are for them (i.e., it is extremely unlikely
that two different groups would pick both colliding Class D addresses
and the same set of UDP ports). For example, the Class D addresses
224.10.8.5 (E0-0A-08-05) and 225.138.8.5 (E1-8A-08-05) both map to the
IEEE-802 MAC-layer multicast address (01-00-5E-0A-08-05) used in this
example.
========================================================================
Class D Address:  234.138.8.5 (EA-8A-08-05)

                 |      E A      | 8
    Class-D IP   |_______ _______|__ _ _ _
     Address     |-+-+-+-+-+-+-+-|-+ - - -
                 |1 1 1 0 1 0 1 0|1
                 |-+-+-+-+-+-+-+-|-+ - - -
                  ...................
    IEEE-802        ....not.........
    MAC-Layer         ..............
    Multicast           ....mapped..
     Address              ...........
                 |-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+ - -
                 |0 0 0 0 0 0 0 1|0 0 0 0 0 0 0 0|0 1 0 1 1 1 1 0|0
                 |-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+ - -
                 |_______ _______|_______ _______|_______ _______|______
                 |      0 1      |      0 0      |      5 E      |    0

[Address mapping below continued from half above]

           8 A      |      0 8      |      0 5      |
    _______ _______|_______ _______|_______ _______|      Class-D IP
 - -|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|      Address
    | 0 0 0 1 0 1 0|0 0 0 0 1 0 0 0|0 0 0 0 0 1 0 1|
 - -|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|
     \____________ ____________________/
                  \___ ___/
                    \ /
                     |
          23 low-order bits mapped
                     |
                     v
 - -|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|      IEEE-802
    | 0 0 0 1 0 1 0|0 0 0 0 1 0 0 0|0 0 0 0 0 1 0 1|      MAC-Layer
 - -|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|      Multicast
    |_______ _______|_______ _______|_______ _______|       Address
    |      0 A      |      0 8      |      0 5      |

    Figure 4: Mapping between Class D and IEEE-802 Multicast Addresses
========================================================================
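The 23-bit mapping described above is easy to express in code. The
following sketch (the helper name is ours) shows both the mapping and
the 32-to-1 ambiguity, using the addresses discussed in this section:

```python
def class_d_to_mac(addr: str) -> str:
    """Map a Class D IP address to its IEEE-802 multicast MAC address.

    The low-order 23 bits of the multicast group ID are placed in the
    low-order 23 bits of IANA's reserved 01-00-5E block; the high-order
    nine bits of the IP address (the "1110" prefix plus five more bits)
    are simply dropped.
    """
    o = [int(x) for x in addr.split(".")]
    return "01-00-5E-%02X-%02X-%02X" % (o[1] & 0x7F, o[2], o[3])

# 32 different Class D addresses collide onto each MAC address:
print(class_d_to_mac("224.10.8.5"))    # 01-00-5E-0A-08-05
print(class_d_to_mac("225.138.8.5"))   # 01-00-5E-0A-08-05
print(class_d_to_mac("234.138.8.5"))   # 01-00-5E-0A-08-05
```

Masking the second octet with 0x7F is what discards the ninth
high-order bit; the third and fourth octets pass through unchanged.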
4.3 Transmission and Delivery of Multicast Datagrams

When the sender and receivers are members of the same (LAN)
subnetwork, the transmission and reception of multicast frames is a
straightforward process. The source station simply addresses the IP
packet to the multicast group, the network interface card maps the
Class D address to the corresponding IEEE-802 multicast address, and
the frame is sent. Receivers that wish to capture the frame notify
their MAC and IP layers that they want to receive datagrams addressed
to the group.
forwarding. In addition, each router needs to implement a group
membership protocol that allows it to learn about the existence of
group members on its directly-attached subnetworks.
5. INTERNET GROUP MANAGEMENT PROTOCOL (IGMP)

The Internet Group Management Protocol (IGMP) runs between hosts and
their immediately-neighboring multicast routers. The mechanisms of the
protocol allow a host to inform its local router that it wishes to
receive transmissions addressed to a specific multicast group. Also,
routers periodically query the LAN to determine if any group members
are still active. If there is more than one IP multicast router on the
LAN, one of the routers is elected "querier" and assumes the
responsibility of querying the LAN for the presence of any group
members.

Based on the group membership information learned from IGMP, a router
is able to determine which (if any) multicast traffic needs to be
forwarded to each of its "leaf" subnetworks. Multicast routers use
this information, in conjunction with a multicast routing protocol, to
support IP multicasting across the Internet.
5.1 IGMP Version 1

IGMP version 1 was specified in RFC-1112. According to the
specification, multicast routers periodically transmit Host Membership
Query messages to determine which host groups have members on their
directly-attached networks. IGMP Query messages are addressed to the
all-hosts group (224.0.0.1) and have an IP TTL = 1. This means that
Query messages sourced from a router are transmitted onto the
directly-attached subnetwork but are not forwarded by any other
multicast routers.
========================================================================

      Group 1                        _____________________
  ____        ____                  |      multicast      |
 |    |      |    |                 |       router        |
 |_H2_|      |_H4_|                 |_____________________|
  ----        ----       +-----+       |
    |           |  <-----|Query|       |
    |           |        +-----+       |
    |           |                      |

                        . . .

  ____        ____        ____
 |    |      |    |      |    |
 |_H1_|      |_H3_|      |_H5_|
  ----        ----        ----
 Group 2     Group 1     Group 1
                         Group 2

      Figure 5: Internet Group Management Protocol-Query Message
========================================================================
When a host receives an IGMP Query message, it responds with a Host
Membership Report for each group to which it belongs, sent to each
group to which it belongs. (This is an important point: while IGMP
Queries are sent to the "all hosts on this subnet" Class D address
(224.0.0.1), IGMP Reports are sent to the group(s) to which the
host(s) belong. IGMP Reports, like Queries, are sent with the IP
TTL = 1, and thus are not forwarded beyond the local subnetwork.)

In order to avoid a flurry of Reports, each host starts a randomly-
chosen Report delay timer for each of its group memberships. If,
during the delay period, another Report is heard for the same group,
every other host in that group must reset its timer to a new random
value. This procedure spreads Reports out over a period of time and
thus minimizes Report traffic for each group that has at least one
member on a given subnetwork.
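The report-suppression rule just described can be sketched as a small
simulation (hypothetical helper, not real IGMP code): each member picks
a random delay, the first timer to expire sends the Report, and every
other member hears it and cancels its own pending Report.

```python
import random

def reports_sent(num_members: int, max_delay: float = 10.0) -> int:
    """Simulate IGMPv1 report suppression for one group on one subnet.

    Each member starts a random delay timer; the member whose timer
    expires first multicasts its Report to the group, and all other
    members hear it and suppress (cancel) their own pending Reports.
    Returns the number of Reports actually transmitted.
    """
    timers = [random.uniform(0, max_delay) for _ in range(num_members)]
    first = min(timers)
    # Only timers that reach zero before a Report is heard actually fire.
    return sum(1 for t in timers if t == first)

# A single Report answers the Query no matter how many members exist:
print(reports_sent(100))  # 1
```

This is why the router only learns "at least one member is present" per
interface: suppression deliberately hides the individual members.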
It should be noted that multicast routers do not need to be directly
addressed since their interfaces are required to promiscuously receive
all multicast IP traffic. Also, a router does not need to maintain a
detailed list of which hosts belong to each multicast group; the
router only needs to know that at least one group member is present on
a given network interface.
Multicast routers periodically transmit IGMP Queries to update their
knowledge of the group members present on each network interface. If
the router does not receive a Report from any members of a particular
group after a number of Queries, the router assumes that group members
are no longer present on that interface. Assuming this is a leaf
subnet (i.e., a subnet with group members but no multicast routers
connecting to additional group members further downstream), this
interface is removed from the delivery tree(s) for this group.
Multicasts will continue to be sent on this interface only if the
router can tell (via multicast routing protocols) that there are
additional group members further downstream reachable via this
interface.
When a host first joins a group, it immediately transmits an IGMP
Report for the group rather than waiting for a router's IGMP Query.
This reduces the "join latency" for the first host to join a given
group on a particular subnetwork. "Join latency" is measured from the
time when a host's first IGMP Report is sent, until the transmission
of the first packet for that group onto that host's subnetwork. Of
course, if the group is already active, the join latency is precisely
zero.
5.2 IGMP Version 2

IGMP version 2 was distributed as part of the Distance Vector
Multicast Routing Protocol (DVMRP) implementation ("mrouted") source
code, from version 3.3 through 3.8. Initially, there was no detailed
specification for IGMP version 2 other than this source code.
However, the complete specification has recently been published in
<draft-ietf-idmr-igmp-v2-06.txt>, which will update the specification
contained in Appendix I of RFC-1112. IGMP version 2 extends IGMP
version 1 while maintaining backward compatibility with version 1
hosts.

IGMP version 2 defines a procedure for the election of the multicast
querier for each LAN. In IGMP version 2, the multicast router with the
lowest IP address on the LAN is elected the multicast querier. In IGMP
version 1, the querier election was determined by the multicast
routing protocol; this could lead to problems, because each multicast
routing protocol might use its own method for determining the
multicast querier.
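The IGMPv2 querier election reduces to picking the lowest IP address
among the multicast routers on the LAN; a minimal sketch (helper name
is ours):

```python
import ipaddress

def elect_querier(router_addrs):
    """IGMPv2 querier election: the lowest IP address on the LAN wins.

    Comparing via IPv4Address gives true numeric ordering, which a
    plain string comparison would get wrong (e.g. "9" vs "33").
    """
    return min(router_addrs, key=ipaddress.IPv4Address)

routers = ["192.0.2.9", "192.0.2.33", "192.0.2.2"]
print(elect_querier(routers))  # 192.0.2.2
```

Because every router applies the same deterministic rule to the same
set of observed addresses, no extra negotiation protocol is needed.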
IGMP version 2 defines a new type of Query message: the Group-Specific
Query. Group-Specific Query messages allow a router to transmit a
Query to a specific multicast group rather than to all groups residing
on a directly-attached subnetwork.
Finally, IGMP version 2 defines a Leave Group message to lower IGMP's
"leave latency." When the last host to respond to a Query with a
Report wishes to leave that specific group, the host transmits a Leave
Group message to the all-routers group (224.0.0.2) with the group
field set to the group being left. In response to a Leave Group
message, the router begins the transmission of Group-Specific Query
messages on the interface that received the Leave Group message. If
there are no Reports in response to the Group-Specific Query messages,
then (if this is a leaf subnet) this interface is removed from the
delivery tree(s) for this group (as was the case with IGMP version 1).
Again, multicasts will continue to be sent on this interface if the
router can tell (via multicast routing protocols) that there are
additional group members further downstream reachable via this
interface.

"Leave latency" is measured from a router's perspective. In version 1
of IGMP, leave latency was the time from a router's hearing the last
Report for a given group, until the router aged that interface out of
the delivery tree for that group (assuming this is a leaf subnet, of
course). Note that the only way for the router to tell that this was
the last group member is that no Reports are heard within some
multiple of the Query Interval (on the order of minutes). IGMP
version 2, with the addition of the Leave Group message, allows a
group member to more quickly inform the router that it is done
receiving traffic for a group. The router then must determine whether
this host was the last member of this group on this subnetwork. To do
this, the router quickly queries the subnetwork for other group
members via the Group-Specific Query message. If no members send
Reports after several of these Group-Specific Queries, the router can
infer that the last member of that group has, indeed, left the
subnetwork. The benefit of lowering the leave latency is that prune
messages can be sent as soon as possible after the last member host
drops out of the group, instead of having to wait for several minutes'
worth of Query intervals to pass. If a group was experiencing high
traffic levels, it can be very beneficial to stop transmitting data
for the group as soon as possible.
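The router's response to a Leave Group message can be sketched as a
retry loop. The function and callback names below are illustrative,
and the retry count stands in for the protocol's configurable timers:

```python
def handle_leave(send_group_specific_query, hear_report, retries=2):
    """Process a Leave Group message on a leaf-subnet interface.

    send_group_specific_query(): transmit one Group-Specific Query.
    hear_report(): True if any member Reports before the timeout.
    Returns True if the interface can be pruned from the delivery tree.
    """
    for _ in range(retries):
        send_group_specific_query()
        if hear_report():
            return False   # other members remain; keep forwarding
    return True            # no Reports heard: the last member has left

# A subnet with no remaining members can be pruned quickly:
print(handle_leave(lambda: None, lambda: False))  # True
print(handle_leave(lambda: None, lambda: True))   # False
```

The loop takes seconds rather than the minutes' worth of Query
intervals that IGMP version 1 needed to reach the same conclusion.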
5.3 IGMP Version 3

IGMP version 3 is a preliminary draft specification published in
<draft-cain-igmp-00.txt>. IGMP version 3 introduces support for
Group-Source Report messages so that a host can elect to receive
traffic from specific sources of a multicast group. An Inclusion
Group-Source Report message allows a host to specify the IP addresses
of the specific sources it wants to receive. An Exclusion Group-Source
Report message allows a host to explicitly identify the sources that
it does not want to receive. With IGMP version 1 and version 2, if a
host wants to receive any traffic for a group, the traffic from all
sources for the group must be forwarded onto the host's subnetwork.

IGMP version 3 will help conserve bandwidth by allowing a host to
select the specific sources from which it wants to receive traffic.
Also, multicast routing protocols will be able to make use of this
information to conserve bandwidth when constructing the branches of
their multicast delivery trees.

Finally, support for Leave Group messages, first introduced in IGMP
version 2, has been enhanced to support Group-Source Leave messages.
This feature allows a host to leave an entire group or to specify the
specific IP address(es) of the (source, group) pair(s) that it wishes
to leave. Note that at this time, not all existing multicast routing
protocols have mechanisms to support such requests from group members.
This is one issue that will be addressed during the development of
IGMP version 3.
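The inclusion/exclusion semantics above can be modeled as a per-group
source filter. This is a sketch of the state a host would advertise
(the class and method names are ours), not of the on-wire messages:

```python
class GroupSourceFilter:
    """Per-group source filter, in the spirit of IGMPv3 Group-Source
    Reports: either an explicit allow-list ("include") or an explicit
    block-list ("exclude") of source IP addresses."""

    def __init__(self, mode, sources):
        assert mode in ("include", "exclude")
        self.mode = mode
        self.sources = set(sources)

    def wants(self, source: str) -> bool:
        """Does this host want traffic from the given source?"""
        if self.mode == "include":
            return source in self.sources
        return source not in self.sources

f = GroupSourceFilter("include", {"10.1.1.1"})
print(f.wants("10.1.1.1"))  # True
print(f.wants("10.2.2.2"))  # False
```

An empty "exclude" set reproduces the version 1/2 behavior of wanting
traffic from every source of the group.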
6. MULTICAST FORWARDING TECHNIQUES

IGMP provides the final step in a multicast packet delivery service
since it is only concerned with the forwarding of multicast traffic
from a router to group members on its directly-attached subnetworks.
IGMP is not concerned with the delivery of multicast packets between
neighboring routers or across an internetwork.

To provide an internetwork delivery service, it is necessary to define
senders, groups, or routers. Also, the ability to handle arbitrary
topologies may not be present or may only be present in limited ways.
6.1.1 Flooding
The simplest technique for delivering multicast datagrams to all routers
in an internetwork is to implement a flooding algorithm. The flooding
procedure begins when a router receives a packet that is addressed to a
multicast group. The router employs a protocol mechanism to determine
whether or not it has seen this particular packet before. If it is the
first reception of the packet, the packet is forwarded on all interfaces
(except the one on which it arrived), guaranteeing that the multicast
packet reaches all routers in the internetwork. If the router has seen
the packet before, then the packet is discarded.
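The flooding rule above can be sketched in a few lines of Python. This is
an illustrative sketch, not part of the draft; the class and interface
names are invented, and packet identity is assumed to be a hashable
(source, packet-id) key.

```python
# Hypothetical sketch of the flooding rule: forward a first-seen packet
# on every interface except the one it arrived on; discard duplicates.
class FloodingRouter:
    def __init__(self, interfaces):
        self.interfaces = interfaces   # e.g. ["if0", "if1", "if2"]
        self.seen = set()              # identifiers of recently seen packets

    def receive(self, packet_key, in_interface):
        """Return the list of interfaces the packet is forwarded on."""
        if packet_key in self.seen:
            return []                  # duplicate: discard
        self.seen.add(packet_key)      # remember this packet
        # Forward on all interfaces except the arrival interface.
        return [i for i in self.interfaces if i != in_interface]

router = FloodingRouter(["if0", "if1", "if2"])
print(router.receive(("S", 1), "if0"))   # -> ['if1', 'if2']
print(router.receive(("S", 1), "if2"))   # -> [] (duplicate)
```

The `seen` set is exactly the per-packet state the next paragraph
criticizes: it grows with every recently seen packet, which is why
flooding makes inefficient use of router memory.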
A flooding algorithm is very simple to implement since a router does not
have to maintain a routing table and only needs to keep track of the
most recently seen packets. However, flooding does not scale for
Internet-wide applications since it generates a large number of
duplicate packets and uses all available paths across the internetwork
instead of just a limited number. Also, the flooding algorithm makes
inefficient use of router memory resources since each router is required
to maintain a distinct table entry for each recently seen packet.
6.2 Source-Based Tree Techniques

The following techniques all generate a source-based tree by various
means. The techniques differ in the efficiency of the tree building
process, and the bandwidth and router resources (i.e., state tables)
used to build a source-based tree.
6.2.1 Reverse Path Broadcasting (RPB)
A more efficient solution than building a single spanning tree for the
entire internetwork would be to build a spanning tree for each potential
source [subnetwork]. These spanning trees would result in source-based
delivery trees emanating from the subnetworks directly connected to the
========================================================================

                         A Sample Internetwork

                     #----------------#
                    /|\              / \
                   | | \            /   \
                   | |  \          /     \
                   | |   \        /       \

               [middle of figure truncated in source]

                           |
                           #

                              LEGEND

                     #    Router
                     RR   Root Router

                       Figure 6: Spanning Tree
========================================================================
source stations. Since there are many potential sources for a group, a
different delivery tree is constructed rooted at each active source.

6.2.1.1 Reverse Path Broadcasting: Operation

The fundamental algorithm to construct these source-based trees is
referred to as Reverse Path Broadcasting (RPB). The RPB algorithm is
actually quite simple. For each source, if a packet arrives on a link
that the local router believes to be on the shortest path back toward
the packet's source, then the router forwards the packet on all
interfaces except the incoming interface. If the packet does not
arrive on the interface that is on the shortest path back toward the
source, then the packet is discarded. The interface over which the
router expects to receive multicast packets from a particular source is
referred to as the "parent" link. The outbound links over which the
router forwards the multicast packet are called "child" links for this
source.
This basic algorithm can be enhanced to reduce unnecessary packet
duplication. If the local router making the forwarding decision can
determine whether a neighboring router on a child link is "downstream,"
then the packet is multicast toward the neighbor. (A "downstream"
neighbor is a neighboring router which considers the local router to be
on the shortest path back toward a given source.) Otherwise, the packet
is not forwarded on the potential child link since the local router
knows that the neighboring router will just discard the packet (since it
will arrive on a non-parent link for the source, relative to that
downstream router).
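The RPB check, including the "downstream neighbor" enhancement, can be
sketched as follows. This is an illustrative sketch with invented names;
how `parent_link` and `downstream` are populated depends on the unicast
routing protocol in use, as discussed below.

```python
# Hypothetical sketch of the RPB forwarding rule: accept a packet only
# from the "parent" link (the shortest path back toward the source) and
# forward it on the remaining links. With the enhancement, a child link
# is skipped when the neighbor on it does not regard this router as its
# parent for the source (it would discard the packet anyway).
def rpb_forward(source, in_link, links, parent_link, downstream):
    """parent_link[source] -> this router's parent link for the source.
    downstream[(link, source)] -> True if the neighbor on `link` considers
    this router to be on its shortest path back toward `source`."""
    if in_link != parent_link[source]:
        return []                        # not the parent link: discard
    return [l for l in links
            if l != in_link and downstream.get((l, source), True)]

links = ["L1", "L2", "L3"]
parent_link = {"S": "L1"}
downstream = {("L2", "S"): True, ("L3", "S"): False}
print(rpb_forward("S", "L1", links, parent_link, downstream))  # -> ['L2']
print(rpb_forward("S", "L3", links, parent_link, downstream))  # -> []
```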
========================================================================

                            Source
                               ^
                             : |    shortest path back to the
                             : |    source for THIS router
                             : |
                         "parent link"
                                _
                          ______|!2|_____
                         |               |
             --"child --|!1|           |!3|-- "child --
                link"    |    ROUTER     |      link"
                         |_______________|

       Figure 7: Reverse Path Broadcasting - Forwarding Algorithm
========================================================================
The information to make this "downstream" decision is relatively easy to
derive from a link-state routing protocol since each router maintains a
topological database for the entire routing domain. If a distance-
vector routing protocol is employed, a neighbor can either advertise its
previous hop for the source as part of its routing update messages or
"poison reverse" the route toward a source if it is not on the
distribution tree for that source. Either of these techniques allows an
upstream router to determine if a downstream neighboring router is on an
active branch of the delivery tree for a certain source.
Please refer to Figure 8 for a discussion describing the basic operation
of the enhanced RPB algorithm.
========================================================================

                 Source Station------>O
                              A #
                               +|+
                              + | +
                             +  O  +
                            +       +
                           1         2
                          +           +
                         +             +

                [middle of figure truncated in source]

                      O         O         O

                               LEGEND

                     O    Leaf
                    + +   Shortest-path
                    - -   Branch
                     #    Router

             Figure 8: Reverse Path Broadcasting - Example
========================================================================
Note that the source station (S) is attached to a leaf subnetwork
directly connected to Router A. For this example, we will look at the
RPB algorithm from Router B's perspective. Router B receives the
multicast packet from Router A on link 1. Since Router B considers link
1 to be the parent link for the source, it forwards the packet on link
4, link 5, and the local leaf subnetworks if they contain group members.
Router B does not forward the packet on link 3 because it knows from
routing protocol exchanges that Router C considers link 2 as its parent
link for the source. Router B knows that if it were to forward the
packet on link 3, it would be discarded by Router C since the packet
would not be arriving on Router C's parent link for this source.
6.2.1.2 RPB: Benefits and Limitations

The key benefit to reverse path broadcasting is that it is reasonably
efficient and easy to implement. It does not require that the router
know about the entire spanning tree, nor does it require a special
mechanism to stop the forwarding process (as flooding does). In
addition, it guarantees efficient delivery since multicast packets
always follow the "shortest" path from the source station to the
destination group. Finally, the packets are distributed over multiple
links, resulting in better network utilization since a different tree is
computed for each source.

One of the major limitations of the RPB algorithm is that it does not
take into account multicast group membership when building the delivery
tree for a source. As a result, datagrams may be unnecessarily
forwarded onto subnetworks that have no members in a destination group.
6.2.2 Truncated Reverse Path Broadcasting (TRPB)

Truncated Reverse Path Broadcasting (TRPB) was developed to overcome the
limitations of Reverse Path Broadcasting. With information provided by
IGMP, multicast routers determine the group memberships on each leaf
subnetwork and avoid forwarding datagrams onto a leaf subnetwork if it
does not contain at least one member of a given destination group.
Thus, the delivery tree is "truncated" by the router if a leaf
subnetwork has no group members.
Figure 9 illustrates the operation of the TRPB algorithm. In this
example the router receives a multicast packet on its parent link for
the Source. The router forwards the datagram on interface 1 since that
interface has at least one member of G1. The router does not forward
the datagram to interface 3 since this interface has no members in the
destination group. The datagram is forwarded on interface 4 if and only
if a downstream router considers this subnetwork to be part of its
"parent link" for the Source.
======================================================================

                              Source
                                |
                                :
                                :  (Source, G1)
                                v
                                |
                          "parent link"
                                |
          "child link"         ___
             G1         _______|2|_____
               \       |               |
             G3\\ _____  ___          ___   ______  / G2
                \| hub |--|1|        |3|-----|switch|/
                /|_____|  ^--  ___  --  ^   |______|\
               /    ^     |____|4|____| ^           \
             G1     ^        .^---      ^  G3
                    ^       .^   |      ^
                    |       |  "child   |
                 Forward Forward link" Truncate

          Figure 9: Truncated Reverse Path Broadcasting - (TRPB)
======================================================================
TRPB removes some limitations of RPB, but it solves only part of the
problem. It eliminates unnecessary traffic on leaf subnetworks, but it
does not consider group memberships when building the branches of the
delivery tree.
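The "truncation" step TRPB adds on top of RPB can be sketched as a
membership filter on the child links. This is an illustrative sketch
with invented names; `leaf_members` stands in for whatever table a
router builds from IGMP reports.

```python
# Hypothetical sketch of TRPB truncation: a leaf subnetwork is kept in
# the forwarding set only if IGMP has reported at least one member of
# the destination group on it; links toward other routers are kept.
def trpb_children(group, child_links, leaf_members):
    """child_links: the links RPB would forward on.
    leaf_members[link] -> set of groups with members on that leaf
    subnetwork, or None if the link leads to downstream routers."""
    out = []
    for link in child_links:
        members = leaf_members.get(link)
        if members is None or group in members:
            out.append(link)   # router link, or leaf with group members
        # else: truncate -- leaf subnetwork with no members of `group`
    return out

leaf_members = {"if1": {"G1", "G3"}, "if3": {"G2"}, "if4": None}
print(trpb_children("G1", ["if1", "if3", "if4"], leaf_members))
# -> ['if1', 'if4']: forwards on if1 (has a G1 member) and if4
#    (leads to a downstream router); truncates if3
```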
6.2.3 Reverse Path Multicasting (RPM)

Reverse Path Multicasting (RPM) is an enhancement to Reverse Path
Broadcasting and Truncated Reverse Path Broadcasting.
RPM creates a delivery tree that spans only:

     o Subnetworks with group members, and

     o Routers and subnetworks along the shortest
       path to subnetworks with group members.

RPM allows the source-based "shortest-path" tree to be pruned so that
datagrams are only forwarded along branches that lead to active members
of the destination group.
6.2.3.1 Operation

When a multicast router receives a packet for a (source, group) pair,
the first packet is forwarded following the TRPB algorithm across all
routers in the internetwork. Routers on the edge of the network (which
have only leaf subnetworks) are called leaf routers. The TRPB algorithm
guarantees that each leaf router will receive at least the first
multicast packet. If there is a group member on one of its leaf
subnetworks, a leaf router forwards the packet based on this group
membership information.
========================================================================

                          Source
                            :
                            :  (Source, G)
                            |
                            v
                            |
                          o-#-G
                            |**********
                        ^   |         *
                        ,   |         *
                        ^   |         *    o
                        ,   |         *   /
                      o-#-o #***********
                        ^   |\  ^   |\  *
                        ^   | o ^   | G *
                        ,   |   ,   |   *

              [remainder of figure truncated in source]
========================================================================
If none of the subnetworks connected to the leaf router contain group
members, the leaf router may transmit a "prune" message on its parent
link, informing the upstream router that it should not forward packets
for this particular (source, group) pair on the child interface on which
it received the prune message. Prune messages are sent just one hop
back toward the source.
An upstream router receiving a prune message is required to store the
prune information in memory. If the upstream router has no recipients
on local leaf subnetworks and has received prune messages from each
downstream neighbor on each of the child interfaces for this (source,
group) pair, then the upstream router does not need to receive
additional packets for the (source, group) pair. This implies that the
upstream router can also generate a prune message of its own, one hop
further back toward the source. This cascade of prune messages results
in an active multicast delivery tree, consisting exclusively of "live"
branches (i.e., branches that lead to active receivers).
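The prune cascade can be sketched as follows. This is an illustrative
sketch, not any protocol's actual message format; the class and field
names are invented.

```python
# Hypothetical sketch of the prune cascade: an upstream router that has
# no local members and has received prunes from every downstream
# neighbor on every child interface for a (source, group) pair should
# send its own prune one hop further back toward the source.
class RpmRouter:
    def __init__(self, child_neighbors, local_members):
        # child_neighbors[(source, group)] -> all downstream neighbors
        self.child_neighbors = child_neighbors
        self.local_members = local_members   # groups with local receivers
        self.pruned_from = {}                # (source, group) -> prunes heard

    def receive_prune(self, sg, neighbor):
        """Record a prune; return True if this router should itself send
        a prune upstream for the (source, group) pair `sg`."""
        self.pruned_from.setdefault(sg, set()).add(neighbor)
        source, group = sg
        return (group not in self.local_members and
                self.pruned_from[sg] == self.child_neighbors[sg])

r = RpmRouter({("S", "G"): {"B", "C"}}, local_members=set())
print(r.receive_prune(("S", "G"), "B"))   # -> False: C has not pruned yet
print(r.receive_prune(("S", "G"), "C"))   # -> True: all children pruned
```

The periodic state refresh described next corresponds to clearing
`pruned_from` at regular intervals so that stale prunes expire.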
Since both the group membership and internetwork topology can change
dynamically, the pruned state of the multicast delivery tree must be
refreshed periodically. At regular intervals, the prune information
expires from the memory of all routers and the next packet for the
(source, group) pair is forwarded toward all downstream routers. This
allows "stale state" (prune state for groups that are no longer active)
to be reclaimed by the multicast routers.
6.2.3.2 Limitations

Despite the improvements offered by the RPM algorithm, there are still
several scaling issues that need to be addressed when attempting to
develop an Internet-wide delivery service. The first limitation is that
multicast packets must be periodically flooded across every router in
the internetwork, onto every leaf subnetwork. This flooding is wasteful
of bandwidth (until the updated prune state is constructed).
groups increase) to place such a burden on routers that are not on every
(or perhaps any) active delivery tree. Shared tree techniques are an
attempt to address these scaling issues, which become quite acute when
most groups' senders and receivers are sparsely distributed across the
internetwork.
6.3 Shared Tree Techniques

The most recent additions to the set of multicast forwarding techniques
are based on a shared delivery tree. Unlike shortest-path tree
algorithms which build a source-based tree for each source, or each
(source, group) pair, shared tree algorithms construct a single delivery
tree that is shared by all members of a group. The shared tree approach
is quite similar to the spanning tree algorithm except it allows the
definition of a different shared tree for each group. Stations wishing
to receive traffic for a multicast group must explicitly join the shared
delivery tree. Multicast traffic for each group is sent and received
over the same delivery tree, regardless of the source.
6.3.1 Operation

A shared tree may involve a single router, or a set of routers, which
together comprise the "core" of a multicast delivery tree. Figure 11
illustrates how a single multicast delivery tree is shared by all
sources and receivers for a multicast group.
========================================================================

           Source         Source          Source
             |              |               |
             |              |               |
             v              v               v
            [#] * * * * *  [#]  * * * * * [#]
                            *

               [middle of figure truncated in source]

             |              *               |
           join             *             join
             |             [#]              |
            [x]                            [x]
             :                              :
           member                        member
            host                          host

                              LEGEND

            [#]  Shared Tree "Core" Routers
            * *  Shared Tree Backbone
            [x]  Member-hosts' directly-attached routers

               Figure 11: Shared Multicast Delivery Tree
========================================================================
The directly attached router for each station wishing to belong to a
particular multicast group is required to send a "join" message toward
the shared tree of the particular multicast group. The directly
attached router only needs to know the address of one of the group's
core routers in order to transmit a join request (via unicast). The
join request is processed by all intermediate routers, each of which
identifies the interface on which the join was received as belonging to
the group's delivery tree. The intermediate routers continue to forward
the join message toward the core, marking local downstream interfaces
until the request reaches a core router (or a router that is already on
the active delivery tree). This procedure allows each member-host's
directly-attached router to define a branch providing the shortest path
between itself and a core router which is part of the group's shared
delivery tree.
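The join procedure above can be sketched as a hop-by-hop walk toward the
core. This is an illustrative sketch with invented names, not any
particular shared-tree protocol's join processing.

```python
# Hypothetical sketch of join processing: a join sent by a member's
# directly-attached router travels hop by hop toward a core router, and
# each router it crosses marks the arrival interface as a downstream
# branch, stopping at a core (or at a router already on the tree).
def send_join(path, on_tree, downstream_ifaces):
    """path: list of (router, arrival_interface) pairs toward the core.
    on_tree: set of routers already on the shared delivery tree
    (including the cores). downstream_ifaces[router] -> set of
    interfaces marked as branches of the group's delivery tree."""
    for router, iface in path:
        downstream_ifaces.setdefault(router, set()).add(iface)
        if router in on_tree:
            return router            # join terminates here
        on_tree.add(router)          # router is grafted onto the tree
    return None

on_tree = {"core"}
ifaces = {}
stop = send_join([("R2", "if0"), ("R1", "if2"), ("core", "if5")],
                 on_tree, ifaces)
print(stop)                          # -> core
print(sorted(ifaces))                # -> ['R1', 'R2', 'core']
```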
Similar to other multicast forwarding algorithms, shared tree algorithms
do not require that the source of a multicast packet be a member of a
destination group in order to send to a group.
6.3.2 Benefits
In terms of scalability, shared tree techniques have several advantages
over source-based trees. Shared tree algorithms make efficient use of
router resources since they only require a router to maintain state
information for each group, not for each source, or for each (source,
group) pair. (Remember that source-based tree techniques required all
routers in an internetwork to either a) be on the delivery tree for a
given source or (source, group) pair, or b) to have prune state for
that source or (source, group) pair: So the entire internetwork must
participate in the source-based tree protocol.) This improves the
scalability of applications with many active senders since the number
of source stations is no longer a scaling issue. Also, shared tree
algorithms conserve network bandwidth since they do not require that
multicast packets be periodically flooded across all multicast routers
in the internetwork onto every leaf subnetwork. This can offer
significant bandwidth savings, especially across low-bandwidth WAN
links, and when receivers sparsely populate the domain of operation.
Finally, since receivers are required to explicitly join the shared
delivery tree, data only ever flows over those links that lead to
active receivers.
6.3.3 Limitations
Despite these benefits, there are still several limitations to
protocols that are based on a shared tree algorithm. Shared trees may
result in traffic concentration and bottlenecks near core routers since
traffic from all sources traverses the same set of links as it
approaches the core. In addition, a single shared delivery tree may
create suboptimal routes (a shortest path between the source and the
shared tree, a suboptimal path across the shared tree, a shortest path
between the egress core router and the receiver's directly attached
router) resulting in increased delay which may be a critical issue for
some multimedia applications. (Simulations indicate that latency over a
shared tree may be approximately 10% larger than source-based trees in
many cases, but by the same token, this may be negligible for many
applications.) Finally, expanding-ring searches are not supported
inside shared-tree domains.
7. "DENSE MODE" ROUTING PROTOCOLS
Certain multicast routing protocols are designed to work well in
environments that have plentiful bandwidth and where it is reasonable
to assume that receivers are rather densely distributed. In such
scenarios, it is acceptable to use periodic flooding, or other
bandwidth-intensive techniques that would not necessarily be very
scalable over a wide-area network. In section 8, we will examine
different protocols that are specifically geared toward efficient WAN
operation, especially for groups that have widely dispersed (i.e.,
sparse) membership.
These routing protocols include:

   o Distance Vector Multicast Routing Protocol (DVMRP),
   o Multicast Extensions to Open Shortest Path First (MOSPF),
   o Protocol Independent Multicast - Dense Mode (PIM-DM).
These protocols' underlying designs assume that the amount of protocol
overhead (in terms of the amount of state that must be maintained by
each router, the number of router CPU cycles required, and the amount
of bandwidth consumed by protocol operation) is appropriate since
receivers densely populate the area of operation.
7.1. Distance Vector Multicast Routing Protocol (DVMRP)
The Distance Vector Multicast Routing Protocol (DVMRP) is a distance-
vector routing protocol designed to support the forwarding of multicast
datagrams through an internetwork. DVMRP constructs source-based
multicast delivery trees using the Reverse Path Multicasting (RPM)
algorithm. Originally, the entire MBone ran only DVMRP. Today, over
half of the MBone routers still run some version of DVMRP.
DVMRP was first defined in RFC-1075. The original specification was
derived from the Routing Information Protocol (RIP) and employed the
Truncated Reverse Path Broadcasting (TRPB) technique. The major
difference between RIP and DVMRP is that RIP calculates the next-hop
toward a destination, while DVMRP computes the previous-hop back toward
a source. Since mrouted 3.0, DVMRP has employed the Reverse Path
Multicasting (RPM) algorithm. Thus, the latest implementations of DVMRP
are quite different from the original RFC specification in many
regards. There is an active effort within the IETF Inter-Domain
Multicast Routing (IDMR) working group to specify DVMRP version 3 in a
standard form.
The current DVMRP v3 Internet-Draft is:

   <draft-ietf-idmr-dvmrp-v3-04.txt>, or
   <draft-ietf-idmr-dvmrp-v3-04.ps>
7.1.1 Physical and Tunnel Interfaces
The ports of a DVMRP router may be either a physical interface to a
directly-attached subnetwork or a tunnel interface to another
multicast-capable island. All interfaces are configured with a metric
specifying cost for the given port, and a TTL threshold that limits the
scope of a multicast transmission. In addition, each tunnel interface
must be explicitly configured with two additional parameters: The IP
address of the local router's tunnel interface and the IP address of
the remote router's interface.
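The interface parameters described above can be summarized in a small
sketch; the field names below are invented for illustration and do not
come from any DVMRP implementation:

```python
# Illustrative representation of DVMRP interface configuration: every
# interface carries a metric and a TTL threshold; tunnel interfaces
# additionally carry local and remote endpoint addresses.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interface:
    metric: int
    ttl_threshold: int
    local_endpoint: Optional[str] = None    # tunnels only
    remote_endpoint: Optional[str] = None   # tunnels only

    @property
    def is_tunnel(self):
        return self.remote_endpoint is not None

phys = Interface(metric=1, ttl_threshold=1)
tun = Interface(metric=1, ttl_threshold=64,
                local_endpoint="128.7.5.2", remote_endpoint="128.6.3.1")
```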
========================================================================
   TTL       Scope
 Threshold
________________________________________________________________________
     0       Restricted to the same host
     1       Restricted to the same subnetwork
    15       Restricted to the same site
    63       Restricted to the same region
   127       Worldwide
   191       Worldwide; limited bandwidth
   255       Unrestricted in scope

               Table 1: TTL Scope Control Values
========================================================================
A multicast router will only forward a multicast datagram across an
interface if the TTL field in the IP header is greater than the TTL
threshold assigned to the interface. Table 1 lists the conventional
TTL values that are used to restrict the scope of an IP multicast. For
example, a multicast datagram with a TTL of less than 16 is restricted
to the same site and should not be forwarded across an interface to
other sites in the same region.
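The forwarding rule in the previous paragraph can be expressed as a
one-line check; this is an illustrative sketch using the conventional
threshold values from Table 1, not code from any router implementation:

```python
# TTL-threshold scoping: a multicast router forwards a datagram out an
# interface only if the packet's remaining TTL is strictly greater than
# the threshold configured on that interface.

SITE, REGION, WORLD = 15, 63, 127   # conventional thresholds (Table 1)

def forwards(packet_ttl, iface_threshold):
    # Forward only when TTL strictly exceeds the interface threshold.
    return packet_ttl > iface_threshold

# A datagram sent with TTL 16 can cross a site-threshold interface, but
# it cannot cross a region-threshold interface.
assert forwards(16, SITE)
assert not forwards(16, REGION)
```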
TTL-based scoping is not sufficient for all applications. Conflicts
arise when trying to simultaneously enforce limits on topology,
geography, and bandwidth. In particular, TTL-based scoping cannot
handle overlapping regions, which is a necessary characteristic of
administrative regions. In light of these issues, "administrative"
scoping was created in 1994 to provide a way to scope multicast traffic
based on the group address. Certain addresses would be usable within a
given administrative scope (e.g., a corporate internetwork) but would
not be forwarded onto the global MBone. This allows for privacy and
address reuse within the class D address space. The range from
239.0.0.0 to 239.255.255.255 has been reserved for administrative
scoping. While administrative scoping has been in limited use since
1994 or so, it has yet to be widely deployed. The IETF MBoneD working
group is working on the deployment of administrative scoping. For
additional information, please see
<draft-ietf-mboned-admin-ip-space-01.txt> or its successor, entitled
"Administratively Scoped IP Multicast."
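A hypothetical border-router check for administratively scoped groups
might look like the following sketch (the 239.0.0.0/8 range is from the
text above; the function name is invented):

```python
# Classify an IPv4 group address as administratively scoped if it falls
# in 239.0.0.0 - 239.255.255.255; a boundary router would refuse to
# forward such groups onto the global MBone.
import ipaddress

ADMIN_SCOPE = ipaddress.ip_network("239.0.0.0/8")

def is_admin_scoped(group):
    return ipaddress.ip_address(group) in ADMIN_SCOPE

assert is_admin_scoped("239.1.2.3")
assert not is_admin_scoped("224.2.2.2")
```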
7.1.2 Basic Operation
DVMRP implements the Reverse Path Multicasting (RPM) algorithm.
According to RPM, the first datagram for any (source, group) pair is
forwarded across the entire internetwork (providing the packet's TTL
and router interface thresholds permit this). Upon receiving this
traffic, leaf routers may transmit prune messages back toward the
source if there are no group members on their directly-attached leaf
subnetworks. The prune messages remove all branches that do not lead to
group members from the tree, leaving a source-based shortest path tree.

After a period of time, the prune state for each (source, group) pair
expires to reclaim stale prune state (from groups that are no longer in
use). If those groups are actually still in use, a subsequent datagram
for the (source, group) pair will be flooded across all downstream
routers. This flooding will result in a new set of prune messages,
serving to regenerate the source-based shortest-path tree for this
(source, group) pair. In current implementations of RPM (notably
DVMRP), prune messages are not reliably transmitted, so the prune
lifetime must be kept short to compensate for lost prune messages.
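The flood-and-prune cycle described above can be sketched with a toy
model for a single (source, group) pair; the data structures and names
here are illustrative, not DVMRP's actual state:

```python
# Toy flood-and-prune sketch of RPM for one (source, group) pair: the
# first datagram is flooded to every leaf router; leaves without group
# members return prunes, and pruned branches are skipped until the
# prune state expires.

def deliver(children, members, pruned):
    """Flood from the root; collect prunes from member-less leaves."""
    delivered = []
    def flood(node):
        if node in pruned:
            return
        kids = children.get(node, [])
        if not kids:                       # leaf router
            if members.get(node):
                delivered.append(node)
            else:
                pruned.add(node)           # prune back toward the source
        for kid in kids:
            flood(kid)
    flood("source")
    return delivered

children = {"source": ["A"], "A": ["B", "C"]}
members = {"B": True, "C": False}
pruned = set()
assert deliver(children, members, pruned) == ["B"]  # first datagram floods
assert "C" in pruned                                # C pruned itself
pruned.clear()                                      # prune lifetime expires
assert deliver(children, members, pruned) == ["B"]  # next datagram refloods
```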
DVMRP also implements a mechanism to quickly "graft" back a previously
pruned branch of a group's delivery tree. If a router that had sent a
prune message for a (source, group) pair discovers new group members on
a leaf network, it sends a graft message to the previous-hop router for
this source. When an upstream router receives a graft message, it
cancels out the previously-received prune message. Graft messages
cascade (reliably) hop-by-hop back toward the source until they reach
the nearest "live" branch point on the delivery tree. In this way,
previously-pruned branches are quickly restored to a given delivery
tree.
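The graft cascade can be sketched as a walk up the previous-hop chain;
again, the names and structures are illustrative:

```python
# Sketch of graft propagation: a router that had pruned a (source,
# group) pair sends a graft to its previous-hop router; grafts cascade
# upstream until reaching a router still on the live tree.

def graft(start, upstream_of, pruned):
    router = start
    while router in pruned:
        pruned.discard(router)            # cancel the earlier prune
        router = upstream_of.get(router)  # forward graft toward source
        if router is None:
            break
    return pruned

upstream_of = {"C": "B", "B": "A"}        # A is nearest live branch point
pruned = {"B", "C"}
assert graft("C", upstream_of, pruned) == set()
```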
7.1.3 DVMRP Router Functions
In Figure 12, Router C is downstream and may potentially receive
datagrams from the source subnetwork from Router A or Router B. If
Router A's metric to the source subnetwork is less than Router B's
metric, then Router A is dominant over Router B for this source.

This means that Router A will forward any traffic from the source
subnetwork and Router B will discard traffic received from that source.
However, if Router A's metric is equal to Router B's metric, then the
router with the lower IP address on its downstream interface (child
link) becomes the Dominant Router for this source. Note that on a
subnetwork with multiple routers forwarding to groups with multiple
sources, different routers may be dominant for each source.
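The dominance rules above (lower metric wins; on a tie, the lower IP
address on the child link wins) can be sketched as a simple election;
this is an illustration, not DVMRP's wire protocol:

```python
# Per-source dominance tie-break on a shared subnetwork: the router
# with the lower metric to the source wins; on equal metrics, the lower
# IP address on the downstream (child) interface wins.
import ipaddress

def dominant(routers):
    """routers: list of (metric_to_source, child_iface_ip) tuples."""
    return min(routers, key=lambda r: (r[0], ipaddress.ip_address(r[1])))

router_a = (3, "128.2.3.4")
router_b = (3, "128.2.1.1")
assert dominant([router_a, router_b]) == router_b   # tie: lower IP wins
assert dominant([(2, "128.2.3.4"), router_b]) == (2, "128.2.3.4")
```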
========================================================================
              To                                To
    .-<-<-<-<-<-<-Source Subnetwork->->->->->->->->--.
    v                                                v
    |                                                |
parent link                                    parent link
    |                                                |
_____________                                  _____________
| Router A  |                                  | Router B  |
|           |                                  |           |
-------------                                  -------------
    |                                                |
child link                                     child link
    |                                                |
------------------------------------------------------------
                         |
                    parent link
                         |
                   _____________
                   | Router C  |
                   |           |
                   -------------
                         |
                    child link
                         |

       Figure 12. DVMRP Dominant Router in a Redundant Topology
========================================================================
7.1.4 DVMRP Routing Table

The DVMRP process periodically exchanges routing table updates with its
DVMRP neighbors. These updates are logically independent of those
generated by any unicast Interior Gateway Protocol.

Since the DVMRP was developed to route multicast and not unicast
traffic, a router will probably run multiple routing processes in
practice: One to support the forwarding of unicast traffic and another
to support the forwarding of multicast traffic. (This can be
convenient: A router can be configured to only route multicast IP, with
no unicast IP routing. This may be a useful capability in firewalled
environments.)
Again, consider Figure 12: There are two types of routers in this
figure: dominant and subordinate; assume in this example that Router B
is dominant, Router A is subordinate, and Router C is part of the
downstream distribution tree. In general, which routers are dominant
or subordinate may be different for each source! A subordinate router
is one that is NOT on the shortest path tree back toward a source. The
dominant router can tell this because the subordinate router will
'poison-reverse' the route for this source in its routing updates which
are sent on the common LAN (i.e., Router A sets the metric for this
source to 'infinity'). The dominant router keeps track of subordinate
routers on a per-source basis...it never needs or expects to receive a
prune message from a subordinate router. Only routers that are truly on
the downstream distribution tree will ever need to send prunes to the
dominant router. If a dominant router on a LAN has received either a
poison-reversed route for a source, or prunes for all groups emanating
from that source subnetwork, then it may itself send a prune upstream
toward the source (assuming also that IGMP has told it that there are
no local receivers for any group from this source).
A sample routing table for a DVMRP router is shown in Figure 13. Unlike
========================================================================
 Source Subnet                From        Metric   Status   TTL
 Prefix         Mask          Gateway

 128.1.0.0    255.255.0.0   128.7.5.2       3        Up     200
 128.2.0.0    255.255.0.0   128.7.5.2       5        Up     150
 128.3.0.0    255.255.0.0   128.6.3.1       2        Up     150
 128.3.0.0    255.255.0.0   128.6.3.1       4        Up     200

                  Figure 13: DVMRP Routing Table
========================================================================
the table that would be created by a unicast routing protocol such as
RIP, OSPF, or BGP, the DVMRP routing table contains Source Prefixes and
From-Gateways instead of Destination Prefixes and Next-Hop Gateways.
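As an illustration of this reverse-path orientation, the following
sketch (with prefixes and From-Gateways borrowed from the sample
routing table above, and a deliberately simplified linear lookup)
accepts a datagram only if it arrived from the gateway on the shortest
path back toward the source:

```python
# Reverse-path check against a DVMRP-style routing table: the table is
# keyed by source prefix and records the From-Gateway, i.e. the
# previous hop back toward that source.
import ipaddress

routing_table = {
    ipaddress.ip_network("128.1.0.0/16"): "128.7.5.2",
    ipaddress.ip_network("128.2.0.0/16"): "128.7.5.2",
    ipaddress.ip_network("128.3.0.0/16"): "128.6.3.1",
}

def rpf_accept(source, arrival_gateway):
    """Accept only if the packet came from the gateway on the shortest
    path back toward the source subnetwork."""
    src = ipaddress.ip_address(source)
    for prefix, from_gw in routing_table.items():
        if src in prefix:
            return from_gw == arrival_gateway
    return False   # unknown source: discard

assert rpf_accept("128.1.9.9", "128.7.5.2")       # on the reverse path
assert not rpf_accept("128.1.9.9", "128.6.3.1")   # off the reverse path
```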
The routing table represents the shortest path (source-based) spanning
tree to every possible source prefix in the internetwork--the Reverse
Path Broadcasting (RPB) tree. The DVMRP routing table does not
represent group membership or received prune messages.
Since the DVMRP routing table is not aware of group membership, the
DVMRP process builds a forwarding table based on a combination of the
information contained in the multicast routing table, known groups, and
received prune messages. The forwarding table represents the local
router's understanding of the shortest path source-based delivery tree
for each (source, group) pair--the Reverse Path Multicasting (RPM)
tree.
========================================================================
 Source        Multicast     TTL    InIntf    OutIntf(s)
 Prefix        Group

 128.1.0.0     224.1.1.1     200     1 Pr      2p3p
               224.2.2.2     100     1         2p3
               224.3.3.3     250     1         2
 128.2.0.0     224.1.1.1     150     2         2p3

                 Figure 14: DVMRP Forwarding Table
========================================================================
The forwarding table for a sample DVMRP router is shown in Figure 15.
The elements in this display include the following items:
    Source Prefix     The subnetwork sending multicast datagrams
                      to the specified groups (one group per row).

    Multicast Group   The Class D IP address to which multicast
                      datagrams are addressed. Note that a given
                      Source Prefix may contain sources for several
                      Multicast Groups.

    InIntf            The parent interface for the (source, group)
                      pair. A 'Pr' in this column indicates that a
                      prune message has been sent to the upstream
                      router (the From-Gateway for this Source Prefix
                      in the DVMRP routing table).
    OutIntf(s)        The child interfaces over which multicast
                      datagrams for this (source, group) pair are
                      forwarded. A 'p' in this column indicates that
                      the router has received prune messages from all
                      downstream routers on this interface.
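To make the table's semantics concrete, the following sketch shows how a
router might consult such a forwarding table. The entry layout mirrors
Figure 15, but the class, field, and function names are invented for
this illustration and are not taken from any actual DVMRP
implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ForwardingEntry:
    source_prefix: str        # e.g., "128.1.0.0"
    group: str                # Class D address, e.g., "224.1.1.1"
    min_ttl: int              # minimum TTL needed to reach any member
    in_intf: int              # parent interface for this (source, group)
    upstream_pruned: bool     # 'Pr': a prune was sent to the From-Gateway
    out_intfs: dict = field(default_factory=dict)  # child intf -> pruned?

def forward(entry, arrival_intf, ttl):
    """Return the child interfaces a datagram should be copied onto."""
    if arrival_intf != entry.in_intf:
        return []             # failed the parent-interface check
    if ttl < entry.min_ttl:
        return []             # cannot reach any group member anyway
    # Copy only onto child interfaces that have not been fully pruned.
    return [i for i, pruned in entry.out_intfs.items() if not pruned]

# Rows 1 and 2 of Figure 15: "2p3p" means both children are pruned;
# "2p3" means interface 2 is pruned but interface 3 is still active.
row1 = ForwardingEntry("128.1.0.0", "224.1.1.1", 200, 1, True,
                       {2: True, 3: True})
row2 = ForwardingEntry("128.1.0.0", "224.2.2.2", 100, 1, False,
                       {2: True, 3: False})
```

A datagram from 128.1.0.0 to 224.2.2.2 arriving on interface 1 would be
copied only onto interface 3, since interface 2 has been pruned.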
7.1.6 Hierarchical DVMRP (DVMRP v4.0)

The rapid growth of the MBone is placing ever-increasing demands on its
routers. Essentially, today's MBone is deployed as a single, "flat"
routing domain where each router is required to maintain detailed
routing information to every possible subnetwork on the MBone. As the
number of subnetworks continues to increase, the size of the routing
tables and of the periodic update messages will continue to grow. If
nothing is done about these issues, the processing and memory
capabilities of the MBone routers will eventually be depleted and
routing on the MBone will be degraded, or fail.

To overcome these potential scaling issues, a hierarchical version of
the DVMRP is under development. In hierarchical routing, the MBone
would be divided into a number of individual routing domains. Each
routing domain would execute its own instance of an "intra-domain"
multicast routing protocol. Another protocol, or another instance of
the same protocol, would be used for routing between the individual
domains.
7.1.6.1 Benefits of Hierarchical Multicast Routing

Hierarchical routing reduces the demand for router resources because
each router only needs to know the explicit details about routing
packets to destinations within its own domain, but needs to know little
or nothing about the detailed topological structure of any of the other
domains. The protocol running between the domains is envisioned to
maintain information about the interconnection of the domains, but not
about the internal topology of each domain.
========================================================================

                                              _________
                  ________       _________   /         \
                 /        \     /         \  | Region D |
     ___________ |Region B|-L2--|         |-L2-\_________/
    /           \-L2-\____/     |         |     ___________
    |           |               |         |    /           \
    | Region A  |------L2-------| Region C|-L2-| Region E  |
    |           |               |         |    \___________/
    \___________/     ________  |         |
                \-L2--|Region F|-L2-\_____/
                      \________/

                   Figure 16. Hierarchical DVMRP
========================================================================
In addition to reducing the amount of routing information, there are
several other benefits to be gained from the development and deployment
of a hierarchical version of the DVMRP:

    o   Different multicast routing protocols may be deployed
        in each region of the MBone. This permits the testing
        and deployment of new protocols on a domain-by-domain
        basis.

    o   The effects of an individual link or router failure are
        limited to only those routers operating within a single
        domain. Likewise, the effects of any change to the
        topological interconnection of regions are limited to
        only the inter-domain routers. These properties are
        especially important when deploying a distance-vector
        routing protocol, which can suffer from relatively long
        convergence times.

    o   The count-to-infinity problem associated with distance-
        vector routing protocols places limitations on the
        maximum diameter of the MBone topology. Hierarchical
        routing limits these diameter constraints to a single
        domain, instead of applying them to the entire MBone.
7.1.6.2 Hierarchical Architecture

Hierarchical DVMRP proposes the creation of non-intersecting regions
where each region has a unique Region-ID. The routers internal to a
region execute any multicast routing protocol, such as DVMRP, MOSPF,
PIM, or CBT, as a "Level 1" (L1) protocol. Each region is required to
have at least one "boundary router" which is responsible for providing
inter-regional connectivity. The boundary routers execute DVMRP as a
"Level 2" (L2) protocol to forward traffic between regions.

The L2 routers exchange routing information in the form of Region-IDs
instead of the individual subnetwork prefixes contained within each
region. With DVMRP as the L2 protocol, the inter-regional multicast
delivery tree is constructed based on the (region_ID, group) pair rather
than the usual (source, group) pair.
When a multicast packet originates within a region, it is forwarded
according to the L1 protocol to all subnetworks containing group
members. In addition, the datagram is forwarded to each of the
boundary routers (L2) configured for the source region. The L2 routers
tag the packet with the Region-ID and place it inside an encapsulation
header for delivery to other regions. When the packet arrives at a
remote region, the encapsulation header is removed before delivery to
group members by the L1 routers.
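The encapsulation step described above can be sketched as follows. This
is only an illustration of the idea: the dictionary-based "header" and
the field names (region_id, inner) are invented for this example and do
not reflect an actual Hierarchical DVMRP packet format.

```python
def l2_encapsulate(packet: dict, source_region_id: int) -> dict:
    """Boundary (L2) router: tag the datagram with its source Region-ID
    and wrap it in an encapsulation header, so that the inter-regional
    tree can be built on (region_ID, group) rather than (source, group).
    """
    return {"region_id": source_region_id,
            "group": packet["group"],
            "inner": packet}

def l2_decapsulate(encapsulated: dict) -> dict:
    """Remote boundary router: strip the encapsulation header before
    handing the datagram back to the L1 protocol for local delivery."""
    return encapsulated["inner"]

original = {"source": "128.1.0.2", "group": "224.1.1.1", "payload": b"x"}
wrapped = l2_encapsulate(original, source_region_id=7)
unwrapped = l2_decapsulate(wrapped)
```

Inside the remote region, the L1 routers see exactly the original
datagram, so the two levels of routing stay independent.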
7.2. Multicast Extensions to OSPF (MOSPF)

Version 2 of the Open Shortest Path First (OSPF) routing protocol is
defined in RFC-1583. OSPF is an Interior Gateway Protocol (IGP) that
distributes unicast topology information among routers belonging to a
single OSPF Autonomous System. OSPF is based on link-state algorithms
which permit rapid route calculation with a minimum of routing protocol
traffic. In addition to efficient route calculation, OSPF is an open
standard that supports hierarchical routing, load balancing, and the
import of external routing information.
The Multicast Extensions to OSPF (MOSPF) are defined in RFC-1584. MOSPF
routers maintain a current image of the network topology through the
unicast OSPF link-state routing protocol. The multicast extensions are
built on top of OSPF Version 2 so that a multicast routing capability
can be incrementally introduced into an OSPF Version 2 routing domain:
routers running MOSPF will still interoperate with non-MOSPF routers
when forwarding unicast IP data traffic. Note that MOSPF, unlike the
DVMRP, does not support tunnels.
7.2.1 Intra-Area Routing with MOSPF

Intra-Area Routing describes the basic routing algorithm employed by
MOSPF. This elementary algorithm runs inside a single OSPF area and
supports multicast forwarding when a source and all destination group
members reside in the same OSPF area, or when the entire OSPF
Autonomous System is a single area (and the source is inside that
area). The following discussion assumes that the reader is familiar
with the basic operation of the OSPF routing protocol.
7.2.1.1 Local Group Database

Like all other multicast routing protocols, MOSPF uses the Internet
Group Management Protocol (IGMP) to monitor multicast group membership
on directly-attached subnetworks. MOSPF routers maintain a "local
group database" which lists the directly-attached groups and determines
the local router's responsibility for delivering multicast datagrams to
these groups.
On any given subnetwork, the transmission of IGMP Host Membership
Queries is performed solely by the Designated Router (DR). However,
the responsibility of listening to IGMP Host Membership Reports is
shared by the Designated Router (DR) and the Backup Designated Router
(BDR). Therefore, in a mixed LAN containing both MOSPF and OSPF
routers, an MOSPF router must be elected the DR for the subnetwork if
IGMP Queries are to be generated. This can be achieved by setting the
OSPF RouterPriority to zero in all non-MOSPF routers, preventing them
from becoming the DR or BDR.
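The effect of the RouterPriority setting can be illustrated with a
simplified election. The sketch below deliberately ignores OSPF's rule
that an established DR keeps its role, and the router names and record
layout are invented for this example.

```python
def elect_dr(routers):
    """Pick the DR: highest RouterPriority wins, with the Router ID
    breaking ties. Routers with priority 0 are ineligible (this is
    what excludes the non-MOSPF routers below)."""
    eligible = [r for r in routers if r["priority"] > 0]
    return max(eligible,
               key=lambda r: (r["priority"], r["router_id"]))["name"]

# A mixed LAN: the non-MOSPF routers are configured with priority 0,
# so only the MOSPF router can become (B)DR.
lan = [
    {"name": "ospf-only-1", "router_id": "10.0.0.9", "priority": 0},
    {"name": "ospf-only-2", "router_id": "10.0.0.8", "priority": 0},
    {"name": "mospf-1",     "router_id": "10.0.0.1", "priority": 1},
]
```

With this configuration, mospf-1 is elected DR even though both
non-MOSPF routers have numerically higher Router IDs.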
The DR is responsible for communicating group membership information to
all other routers in the OSPF area by flooding Group-Membership LSAs.
The DR originates a separate Group-Membership LSA for each multicast
group having one or more entries in the DR's local group database.
Like Router-LSAs and Network-LSAs, Group-Membership LSAs are only
flooded within a single area. This ensures that remotely-originated
multicast datagrams are forwarded to the specified subnetwork for
distribution to its local group members.
7.2.1.2 Datagram's Shortest Path Tree

The datagram's shortest path tree describes the path taken by a
multicast datagram as it travels through the area from the source
subnetwork to each of the group members' subnetworks. The shortest
path tree for each (source, group) pair is built "on demand" when a
router receives the first multicast datagram for a particular (source,
group) pair.
When the initial datagram arrives, the source subnetwork is located in
the MOSPF link state database. The MOSPF link state database is simply
the standard OSPF link state database with the addition of Group-
Membership LSAs. Based on the Router- and Network-LSAs in the OSPF
link state database, a source-based shortest-path tree is constructed
using Dijkstra's algorithm. After the tree is built, Group-Membership
LSAs are used to prune the tree so that the only remaining branches
lead to subnetworks containing members of this group. The output of
these algorithms is a pruned source-based tree rooted at the datagram's
source.
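These two steps (Dijkstra over the link-state database, then pruning
against group membership) can be sketched as follows. The graph, link
costs, and node names are invented for illustration; real MOSPF of
course operates on Router-, Network-, and Group-Membership LSAs rather
than Python dictionaries.

```python
import heapq

def dijkstra_parents(graph, source):
    """graph: {node: {neighbor: cost}}. Returns {node: parent} along
    the shortest paths from source (the source-based tree)."""
    dist, parent = {source: 0}, {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, cost in graph[u].items():
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))
    return parent

def pruned_tree(parent, members):
    """Keep only the edges on paths from the source to group members --
    the role the Group-Membership LSAs play in MOSPF."""
    kept = set()
    for m in members:
        while parent.get(m) is not None:
            kept.add((parent[m], m))
            m = parent[m]
    return kept

graph = {"S": {"A": 1}, "A": {"S": 1, "B": 1, "C": 2},
         "B": {"A": 1, "E": 1}, "C": {"A": 2, "F": 1},
         "E": {"B": 1}, "F": {"C": 1}}
parent = dijkstra_parents(graph, "S")
# Only E has group members, so the C/F branch is pruned away.
tree = pruned_tree(parent, {"E"})
```

The result is a pruned source-based tree rooted at S whose only
remaining branch runs S -> A -> B -> E.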
To forward multicast datagrams to downstream members of a group, each
router must determine its position in the datagram's shortest path
tree. Assume that Figure 17 illustrates the shortest path tree for a
given (source, group) pair.
========================================================================
                          S
                          |
                          |
                        A #
                         / \
                        /   \
                       1     2
                      /       \
                   B #         # C
                    / \         \
                   /   \         \
                  3     4         5
                 /       \         \
              D #         # E       # F
                         / \         \
                        /   \         \
                       6     7         8
                      /       \         \
                   G #         # H       # I

           LEGEND

           #  Router

           Figure 17. Shortest Path Tree for a (S, G) pair
========================================================================
In Figure 17, Router E's upstream node is Router B, and there are two
downstream interfaces: one connecting to Subnetwork 6 and another
connecting to Subnetwork 7.
Note the following properties of the basic MOSPF routing algorithm:

    o   For a given multicast datagram, all routers within an OSPF
        area calculate the same source-based shortest path delivery
        tree. Tie-breakers have been defined to guarantee that if
        several equal-cost paths exist, all routers agree on a single
        path through the area. Unlike unicast OSPF, MOSPF does not
        support the concept of equal-cost multipath routing.

    o   Synchronized link state databases containing Group-Membership
        LSAs allow an MOSPF router to build a source-based shortest-
        path tree in memory, working forward from the source to the
        group member(s). Unlike the DVMRP, this means that the first
        datagram of a new transmission does not have to be flooded to
        all routers in an area.

    o   The "on demand" construction of the source-based delivery tree
        has the benefit of spreading calculations over time, resulting
        in a lesser impact on participating routers. Of course, this
        may strain the CPU(s) in a router if many new (source, group)
        pairs appear at about the same time, or if there are a lot of
        events which force the MOSPF process to flush and rebuild its
        forwarding cache. In a stable topology with long-lived
        multicast sessions, these effects should be minimal.
7.2.1.3 Forwarding Cache

Each MOSPF router makes its forwarding decision based on the contents
of its forwarding cache. Contrary to the DVMRP, MOSPF forwarding is
not RPF-based. The forwarding cache is built from the source-based
shortest-path tree for each (source, group) pair and the router's local
group database. After the router discovers its position in the
shortest path tree, a forwarding cache entry is created containing the
(source, group) pair, the expected upstream interface, and the
necessary downstream interface(s). At this point, all resources
associated with the creation of the tree are deleted. The forwarding
cache entry is then used to quickly forward all subsequent datagrams
from this source to this group. If a new source begins sending to a
new group, MOSPF must first calculate the distribution tree so that it
can create a cache entry to be used in forwarding the packet.
Figure 18 displays the forwarding cache for an example MOSPF router.
The elements in the display include the following items:

    Dest. Group     A known destination group address to which
                    datagrams are currently being forwarded, or to
                    which traffic was sent "recently" (i.e., since the
                    last topology change, group membership change, or
                    other event that (re-)initialized MOSPF's
                    forwarding cache).

    Source          The datagram's source host address. Each (Dest.
                    Group, Source) pair uniquely identifies a separate
                    forwarding cache entry.
========================================================================

   Dest. Group    Source       Upstream   Downstream    TTL

   224.1.1.1      128.1.0.2       11        12 13        5
   224.1.1.1      128.4.1.2       11        12 13        2
   224.1.1.1      128.5.2.2       11        12 13        3
   224.2.2.2      128.2.0.3       12        11           7

               Figure 18: MOSPF Forwarding Cache
========================================================================
    Upstream        Datagrams matching this entry's Dest. Group and
                    Source must be received on this interface.

    Downstream      If a datagram matching this entry's Dest. Group
                    and Source is received on the correct Upstream
                    interface, it is forwarded across the listed
                    Downstream interface(s).

    TTL             The minimum number of hops a datagram must cross
                    to reach the nearest of the Dest. Group's members.
                    A router may discard a datagram whose TTL is not
                    large enough to reach any group member.
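The forwarding decision against a cache like Figure 18 can be sketched
as follows. The keys, field names, and function are illustrative only,
not MOSPF's actual data structures.

```python
# Cache modeled on Figure 18:
# (dest_group, source) -> (upstream_intf, downstream_intfs, min_ttl)
cache = {
    ("224.1.1.1", "128.1.0.2"): (11, [12, 13], 5),
    ("224.2.2.2", "128.2.0.3"): (12, [11],     7),
}

def fwd_lookup(dest_group, source, arrival_intf, ttl):
    """Return the interfaces to copy the datagram onto (possibly none),
    or None on a cache miss (which would trigger the on-demand tree
    calculation described above)."""
    entry = cache.get((dest_group, source))
    if entry is None:
        return None          # no entry: build the tree, then the entry
    upstream, downstream, min_ttl = entry
    if arrival_intf != upstream:
        return []            # arrived on the wrong interface: drop
    if ttl < min_ttl:
        return []            # cannot reach even the nearest member
    return downstream
```

For instance, a datagram for (224.1.1.1, 128.1.0.2) arriving on
interface 11 with a sufficient TTL is copied onto interfaces 12 and 13;
the same datagram arriving on interface 12 is dropped.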
The information in the forwarding cache is not aged or periodically
refreshed. It is maintained as long as there are system resources
available (e.g., memory) or until the next topology change. In
general, the contents of the forwarding cache will change when:

    o   The topology of the OSPF internetwork changes, forcing all of
        the shortest path trees to be recalculated. (Once the cache
        has been flushed, entries are not rebuilt until another packet
        for one of the previous (Dest. Group, Source) pairs is
        received.)

    o   There is a change in the Group-Membership LSAs indicating that
        the distribution of individual group members has changed.
7.2.2 Mixing MOSPF and OSPF Routers

MOSPF routers can be combined with non-multicast OSPF routers. This
permits the gradual deployment of MOSPF and allows experimentation with
multicast routing on a limited scale. When MOSPF and non-MOSPF routers
are mixed within an Autonomous System, all routers will interoperate in
the forwarding of unicast datagrams.

It is important to note that an MOSPF router is required to eliminate
all non-multicast OSPF routers when it builds its source-based
shortest-path delivery tree. An MOSPF router can easily determine the
multicast capability of any other router based on the setting of the
multicast-capable bit (MC-bit) in the Options field of each router's
link state advertisements. The omission of non-multicast routers can
create a number of potential problems when forwarding multicast
traffic:
    o   The Designated Router for a multi-access network must be an
        MOSPF router. If a non-multicast OSPF router is elected the
        DR, the subnetwork will not be selected to forward multicast
        datagrams, since a non-multicast DR cannot generate Group-
        Membership LSAs for its subnetwork (because it is not running
        IGMP, it won't hear IGMP Host Membership Reports). To use
        MOSPF, it is a good idea to ensure that at least two of the
        MOSPF routers on each LAN have higher RouterPriority values
        than any non-MOSPF routers. A simpler strategy is to
        configure all non-MOSPF routers with a RouterPriority of
        zero, so that they cannot become the DR or BDR.

    o   Multicast datagrams may be forwarded along suboptimal routes,
        since the shortest path between two points may require
        traversal of a non-multicast OSPF router.
    o   Even though there is unicast connectivity to a destination,
        there may not be multicast connectivity. For example, the
        network may partition with respect to multicast connectivity,
        since the only path between two points could require traversal
        of a non-multicast-capable OSPF router.

    o   The forwarding of multicast and unicast datagrams between two
        points may follow entirely different paths through the
        internetwork. This may make some routing problems a bit more
        challenging to debug.
7.2.3 Inter-Area Routing with MOSPF

Inter-area routing involves the case where a datagram's source and some
of its destination group members reside in different OSPF areas. It
should be noted that the forwarding of multicast datagrams continues to
be determined by the contents of the forwarding cache, which is still
built from the local group database and the datagram source-based
trees. The major differences are related to the way that group
membership information is propagated and the way that the inter-area
source-based tree is constructed.
7.2.3.1 Inter-Area Multicast Forwarders
In MOSPF, a subset of an area's Area Border Routers (ABRs) function as
"inter-area multicast forwarders." An inter-area multicast forwarder
is responsible for the forwarding of group membership information and
multicast datagrams between areas. Configuration parameters determine
whether or not a particular ABR also functions as an inter-area
multicast forwarder.
Inter-area multicast forwarders summarize their attached areas' group
membership information to the backbone by originating new Group-
Membership LSAs into the backbone area. It is important to note that
the summarization of group membership in MOSPF is asymmetric: group
membership information from the non-backbone areas is flooded into the
backbone, but group membership from the backbone (or from any other
non-backbone area) is not flooded into any non-backbone area(s).
To permit the forwarding of multicast traffic between areas, MOSPF
introduces the concept of a "wild-card multicast receiver." A wild-
card multicast receiver is a router that receives all multicast
traffic generated in an area, regardless of the multicast group
membership. In non-backbone areas, all inter-area multicast forwarders
operate as wild-card multicast receivers. This guarantees that all
multicast traffic originating in a non-backbone area is delivered to
its inter-area multicast forwarder, and then if necessary into the
backbone area.
========================================================================

              -------------------------
             /      Backbone Area      \
            |                           |
            |      ^            ^       |
            |   ___|___      ___|___    |
             \_|       |____|       |__/
               |---*---|    |---*---|
                   |            |
                _______      _______
               /       \    /       \
              |  Area   |  |  Area   |
              |    1    |  |    2    |
              |---------|  |---------|

            LEGEND

              ^
              |   Group Membership LSAs
             _____
            |_____|  Area Border Router and
                     Inter-Area Multicast Forwarder

              *   Wild-Card Multicast
                  Receiver Interface

            Figure 19. Inter-Area Routing Architecture
========================================================================
Since the backbone has group membership knowledge for all areas, the
datagram can then be forwarded to group members residing in the
backbone and other non-backbone areas. The backbone area does not
require wild-card multicast receivers because the routers in the
backbone area have complete knowledge of group membership information
for the entire OSPF system.

7.2.3.2 Inter-Area Datagram Shortest-Path Tree

In the case of inter-area multicast routing, it is often impossible to
build a complete datagram shortest-path delivery tree. Incomplete trees
are created because detailed topological and group membership
information for each OSPF area is not distributed between OSPF areas.
To overcome these limitations, topological estimates are made through
the use of wild-card receivers and OSPF Summary-Links LSAs.

There are two cases that need to be considered when constructing an
inter-area shortest-path delivery tree. The first involves the
condition when the source subnetwork is located in the same area as the
router performing the calculation. The second situation occurs when the
========================================================================
        ----------------------------------
        |        S                       |
        |        |               Area 1  |
        |        |                       |
        |        #                       |
        |       / \                      |
        |      /   \                     |
        |     /     \                    |
        |    /       \                   |
        |  O-#        #-O                |
        |    / \         \               |
        |   /   \         \              |
        |  /     \         \             |
        | O-#     #         #-O         |
        |        / \         \           |
        |       /   \         \          |
        |      /     \         \         |
        |   O-#       #-O       ---      |
        ---------------------------| ? |--
                                    ---
                                     To
                                  Backbone

     LEGEND

        S   Source Subnetwork
        O   Subnet Containing Group Members
        #   Intra-Area MOSPF Router
        ?   Wild-Card Multicast Receiver

      Figure 20. Datagram Shortest Path Tree (Source in Same Area)
========================================================================
source subnetwork is located in a different area than the router
performing the calculation.

If the source of a multicast datagram resides in the same area as the
router performing the calculation, the pruning process must be careful
to ensure that branches leading to other areas are not removed from the
tree. Only those branches with neither group members nor wild-card
multicast receivers are pruned. Branches containing wild-card multicast
receivers must be retained since the local routers do not know whether
there are any group members residing in other areas.
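The pruning rule can be sketched as a bottom-up walk over the source-based tree: a branch is removed only when it leads to neither group members nor wild-card multicast receivers. This is an illustrative model with hypothetical node names, not MOSPF code:

```python
# Illustrative sketch: prune a source-based delivery tree. A subtree is
# kept if it contains a subnet with group members OR a wild-card
# multicast receiver (whose downstream membership is unknown locally).

def prune(tree, node, has_members, is_wildcard):
    """Return the pruned child map, or None if this subtree is removable."""
    kept = {}
    for child in tree.get(node, ()):
        sub = prune(tree, child, has_members, is_wildcard)
        if sub is not None:
            kept[child] = sub
    if kept or node in has_members or node in is_wildcard:
        return kept
    return None  # no members, no wild-card receivers: prune this branch

tree = {"S": ["A", "B"], "A": ["m1"], "B": ["C"], "C": []}
print(prune(tree, "S", has_members={"m1"}, is_wildcard=set()))
# {'A': {'m1': {}}}  (the B/C branch is pruned)
print(prune(tree, "S", has_members={"m1"}, is_wildcard={"C"}))
# {'A': {'m1': {}}, 'B': {'C': {}}}  (wild-card receiver C keeps B/C)
```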
========================================================================
                        S
                        |
                        #
                        |
                Summary-Links LSA
                        |
                       ---
        ---------------| ? |--------------
        |               ---              |
        |                |       Area 1  |
        |                #               |
        |               / \              |
        |              /   \             |
        |             /     \            |
        |            /       \           |
        |         O-#         #-O        |
        |           / \          \       |
        |          /   \          \      |
        |         /     \          \     |
        |        O-#     #          #-O  |
        |               / \          \   |
        |              /   \          \  |
        |             /     \          \ |
        |          O-#       #-O      #-O|
        ----------------------------------

     LEGEND

        S   Source Subnetwork
        O   Subnet Containing Group Members
        #   Intra-Area MOSPF Router
        ?   Inter-Area Multicast Forwarder

         Figure 21. Shortest Path Tree (Source in Different Area)
========================================================================
If the source of a multicast datagram resides in a different area than
the router performing the calculation, the details describing the local
topology surrounding the source station are not known. However, this
information can be estimated using information provided by Summary-Links
LSAs for the source subnetwork. In this case, the base of the tree
begins with branches directly connecting the source subnetwork to each
of the local area's inter-area multicast forwarders. The inter-area
multicast forwarders must be included in the tree since any multicast
datagrams originating outside the local area will enter the area via an
inter-area multicast forwarder.

Since each inter-area multicast forwarder is also an ABR, it must
maintain a separate link state database for each attached area. This
means that each inter-area multicast forwarder is required to calculate
a separate forwarding tree for each of its attached areas. After the
individual trees are calculated, they are merged into a single
forwarding cache entry for the (source, group) pair and then the
individual trees are discarded.
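The merge step can be sketched as follows. This is a simplified illustration of the idea (the interface names and the data model are hypothetical, and a real implementation tracks considerably more state):

```python
# Illustrative sketch (hypothetical data model): merge the per-area
# forwarding trees computed by an inter-area multicast forwarder into a
# single (source, group) forwarding cache entry. Each per-area
# calculation yields this router's upstream interface (assumed non-None
# in at most one area, the one on the path back toward the source) and
# a set of downstream interfaces.

def merge_area_trees(per_area):
    upstream = None
    downstream = set()
    for area, (up, down) in per_area.items():
        if up is not None:
            upstream = up
        downstream |= down
    return {"upstream": upstream, "downstream": sorted(downstream)}

entry = merge_area_trees({
    "1":       ("if0", {"if1"}),         # area containing the source
    "0.0.0.0": (None, {"if2", "if3"}),   # backbone area
})
print(entry)  # {'upstream': 'if0', 'downstream': ['if1', 'if2', 'if3']}
```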
7.2.4 Inter-Autonomous System Multicasting with MOSPF

Inter-Autonomous System multicasting involves the situation where a
datagram's source and at least some of its destination group members
reside in different OSPF Autonomous Systems. It should be emphasized
that in OSPF terminology "inter-AS" communication also refers to
connectivity between an OSPF domain and another routing domain which
could be within the same Autonomous System from the perspective of an
Exterior Gateway Protocol.

To facilitate inter-AS multicast routing, selected Autonomous System
Boundary Routers (ASBRs) are configured as "inter-AS multicast
forwarders." MOSPF makes the assumption that each inter-AS multicast
forwarder executes an inter-AS multicast routing protocol (e.g., DVMRP)
which forwards multicast datagrams in a reverse path forwarding (RPF)
manner. Each inter-AS multicast forwarder functions as a wild-card
multicast receiver in each of its attached areas. This guarantees that
each inter-AS multicast forwarder remains on all pruned shortest-path
trees and receives all multicast datagrams, regardless of the multicast
group membership.
Three cases need to be considered when describing the construction of an
inter-AS shortest-path delivery tree. The first occurs when the source
subnetwork is located in the same area as the router performing the
calculation. For the second case, the source subnetwork resides in a
different area than the router performing the calculation. The final
case occurs when the source subnetwork is located in a different AS
than the router performing the calculation.

The first two cases are similar to the inter-area examples described in
the previous section. The only enhancement is that inter-AS multicast
forwarders must also be included on the pruned shortest-path delivery
tree. Branches containing inter-AS multicast forwarders must be
retained since the local routers do not know whether there are group
members residing in other Autonomous Systems. When a multicast datagram
arrives at an inter-AS multicast forwarder, it is the responsibility of
the ASBR to determine whether the datagram should be forwarded outside
of the local Autonomous System.

Figure 22 illustrates a sample inter-AS shortest-path delivery tree when
the source subnetwork resides in the same area as the router performing
the calculation.
========================================================================
        ----------------------------------
        |        S                Area 1 |
        |        |                       |
        |        #                       |
        |       / \                      |
        |      /   \                     |
        |     /     \                    |
        |    /       \                   |
        |  O-#        #-O                |
        |    / \         \               |
        |   /   \         \              |
        |  /     \         \             |
        | O-#     #         #-O         |
        |        / \         \           |
        |       /   \         \          |
        |      /     \         \         |
        |     /      #-O        \        |
        |   ---                 ---      |
        ----| & |-------------------| ? |-
             ---                     ---
       To other Autonomous       To Backbone
            Systems

     LEGEND

        S   Source Subnetwork
        O   Subnet Containing Group Members
        #   Intra-Area MOSPF Router
        ?   Inter-Area Multicast Forwarder
        &   Inter-AS Multicast Forwarder

   Figure 22. Inter-AS Datagram Shortest Path Tree (Source in Same Area)
========================================================================
If the source of a multicast datagram resides in a different Autonomous
System than the router performing the calculation, the details
describing the local topology surrounding the source station are not
known. However, this information can be estimated using the multicast-
capable AS-External Links describing the source subnetwork. In this
case, the base of the tree begins with branches directly connecting the
source subnetwork to each of the local area's inter-AS multicast
forwarders.
========================================================================
                        S
                        |
                        :
                        |
                AS-External Links
                        |
                       ---
        ---------------| & |--------------
        |               ---              |
        |               / \      Area 1  |
        |              /   \             |
        |             /     \            |
        |            /       \           |
        |         O-#         #-O        |
        |           / \          \       |
        |          /   \          \      |
        |         /     \          \     |
        |        O-#     #          #-O  |
        |               / \          \   |
        |              /   \          \  |
        |             /     \          \ |
        |            /      #-O       #-O|
        |          ---                   |
        -----------| ? |------------------
                    ---
                     To
                  Backbone

     LEGEND

        S   Source Subnetwork
        O   Subnet Containing Group Members
        #   Intra-Area MOSPF Router
        ?   Inter-Area Multicast Forwarder
        &   Inter-AS Multicast Forwarder

 Figure 23. Inter-AS Datagram Shortest Path Tree (Source in Different AS)
========================================================================
Figure 23 shows a sample inter-AS shortest-path delivery tree when the
inter-AS multicast forwarder resides in the same area as the router
performing the calculation. If the inter-AS multicast forwarder is
located in a different area than the router performing the calculation,
the topology surrounding the source is approximated by combining the
Summary-ASBR Link with the multicast-capable AS-External Link.

As a final point, it is important to note that AS-External Links are not
imported into stub areas. If the source is located outside of the stub
area, the topology surrounding the source is estimated by the Default
Summary Links originated by the stub area's intra-area multicast
forwarder rather than the AS-External Links.
7.3 Protocol-Independent Multicast (PIM)

The Protocol Independent Multicast (PIM) routing protocols have been
developed by the Inter-Domain Multicast Routing (IDMR) working group of
the IETF. The objective of the IDMR working group is to develop one--or
possibly more than one--standards-track multicast routing protocol(s)
that can provide scaleable inter-domain multicast routing across the
Internet.

PIM is actually two protocols: PIM - Dense Mode (PIM-DM) and PIM -
Sparse Mode (PIM-SM). While PIM-DM and PIM-SM share part of their
names, and they do have related control messages, they are two
completely independent protocols. In the remainder of this
introduction, any reference to "PIM" applies equally to either of the
two protocols.

PIM receives its name because it is not dependent on the mechanisms
provided by any particular unicast routing protocol. However, any
implementation supporting PIM requires the presence of a unicast routing
protocol to provide routing table information and to adapt to topology
changes.
PIM makes a clear distinction between a multicast routing protocol that
is designed for dense environments and one that is designed for sparse
environments. Dense-mode refers to a protocol that is designed to
operate in an environment where group members are relatively densely
packed and bandwidth is plentiful. Sparse-mode refers to a protocol
that is optimized for environments where group members are distributed
across many regions of the Internet and bandwidth is not necessarily
widely available. It is important to note that sparse-mode does not
imply that the group has a few members, just that they are widely
dispersed across the Internet.
The designers of PIM-SM argue that DVMRP and MOSPF were developed for
environments where group members are densely distributed and bandwidth
is relatively plentiful. They emphasize that when group members and
senders are sparsely distributed across a wide area, DVMRP and MOSPF
do not provide the most efficient multicast delivery service. DVMRP
periodically sends multicast packets over many links that do not lead
to group members, while MOSPF can send group membership information
over links that do not lead to senders or receivers.
7.3.1 PIM-Dense Mode (PIM-DM)

While the PIM architecture was driven by the need to provide scaleable
sparse-mode delivery trees, PIM also defines a new dense-mode protocol
instead of relying on existing dense-mode protocols such as DVMRP and
MOSPF. It is envisioned that PIM-DM would be deployed in resource-rich
environments, such as a campus LAN where group membership is relatively
dense and bandwidth is likely to be readily available. PIM-DM's control
messages are similar to PIM-SM's by design.
PIM - Dense Mode (PIM-DM) is similar to DVMRP in that it employs the
Reverse Path Multicasting (RPM) algorithm. However, there are several
important differences between PIM-DM and DVMRP:

     o  To find routes back to sources, PIM-DM relies on the presence
        of an existing unicast routing table. PIM-DM is independent of
        the mechanisms of any specific unicast routing protocol. In
        contrast, DVMRP contains an integrated routing protocol that
        makes use of its own RIP-like exchanges to build its own unicast
        routing table (so a router may orient itself with respect to
        active source(s)). MOSPF augments the information in the OSPF
        link state database, thus MOSPF must run in conjunction with
        OSPF.
     o  Unlike DVMRP, which calculates a set of child interfaces for
        each (source, group) pair, PIM-DM simply forwards multicast
        traffic on all downstream interfaces until explicit prune
        messages are received. PIM-DM is willing to accept packet
        duplication to eliminate routing protocol dependencies and
        to avoid the overhead inherent in determining the parent/child
        relationships.

For those cases where group members suddenly appear on a pruned branch
of the delivery tree, PIM-DM, like DVMRP, employs graft messages to
re-attach the previously pruned branch to the delivery tree.
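PIM-DM's forwarding behavior (an RPF check against an externally supplied unicast routing table, flooding on all non-pruned downstream interfaces, and prune/graft state) can be sketched as follows. This is an illustrative model with hypothetical interface and source names, not a PIM-DM implementation:

```python
# Illustrative sketch only; not a PIM-DM implementation. The unicast
# routing table is consulted solely for the RPF check, which is what
# makes the protocol independent of any particular unicast protocol.

class DenseModeRouter:
    def __init__(self, unicast_table, interfaces):
        self.unicast_table = unicast_table   # source -> iface toward it
        self.interfaces = set(interfaces)
        self.pruned = {}                     # (source, group) -> ifaces

    def forward(self, source, group, arrival_iface):
        # RPF check: accept only packets arriving on the interface that
        # leads back toward the source; otherwise discard.
        if arrival_iface != self.unicast_table.get(source):
            return set()
        # Flood on every other interface not yet pruned for this (S, G).
        pruned = self.pruned.get((source, group), set())
        return self.interfaces - {arrival_iface} - pruned

    def on_prune(self, source, group, iface):
        self.pruned.setdefault((source, group), set()).add(iface)

    def on_graft(self, source, group, iface):
        # A graft re-attaches a previously pruned branch.
        self.pruned.get((source, group), set()).discard(iface)

r = DenseModeRouter({"S1": "if0"}, ["if0", "if1", "if2"])
print(sorted(r.forward("S1", "G1", "if0")))   # ['if1', 'if2']
r.on_prune("S1", "G1", "if2")
print(sorted(r.forward("S1", "G1", "if0")))   # ['if1']
r.on_graft("S1", "G1", "if2")
print(sorted(r.forward("S1", "G1", "if0")))   # ['if1', 'if2']
print(r.forward("S1", "G1", "if1"))           # set() (fails RPF check)
```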
8. SHARED TREE ("SPARSE MODE") ROUTING PROTOCOLS

The most recent additions to the set of multicast routing protocols are
based on a shared delivery tree.

These emerging routing protocols include:

     o  Protocol Independent Multicast - Sparse Mode (PIM-SM), and

     o  Core-Based Trees (CBT).

Each of these routing protocols is designed to operate efficiently over
a wide area network where bandwidth is scarce and group members may be
sparsely distributed. Their ultimate goal is to provide scaleable
inter-domain multicast routing across the Internet.
8.1 Protocol-Independent Multicast - Sparse Mode (PIM-SM)

As described previously, PIM also defines a "dense-mode" or source-based
tree variant. The two protocols are quite distinct, and other than
control messages, they have very little in common. While PIM integrates
control message processing and data packet forwarding among PIM-Sparse
and -Dense Modes, PIM-SM and PIM-DM must run in separate regions; all
groups in a region are either sparse-mode or dense-mode.

PIM-Sparse Mode (PIM-SM) is designed to provide a multicast routing
protocol for efficient communication between members of sparsely
distributed groups--the type of groups that are likely to be common in
wide-area internetworks. PIM's designers observed that several hosts
wishing to participate in a multicast conference do not justify flooding
the entire internetwork periodically with the group's multicast traffic.
Noting today's existing MBone scaling problems, and extrapolating to a
future of ubiquitous multicast (overlaid with perhaps thousands of
small, widely dispersed groups), it is not hard to imagine that existing
multicast routing protocols will experience scaling problems. To
eliminate these potential scaling issues, PIM-SM is designed to limit
multicast traffic so that only those routers interested in receiving
        In contrast, dense-mode protocols assume downstream group
        membership and forward multicast traffic on downstream links
        until explicit prune messages are received. Thus, the default
        forwarding action of dense-mode routing protocols is to forward
        all traffic, while the default action of a sparse-mode protocol
        is to block traffic unless it has been explicitly requested.
     o  PIM-SM evolved from the Core-Based Trees (CBT) approach in that
        it employs the concept of a "core" (or rendezvous point (RP) in
        PIM-SM terminology) where receivers "meet" sources.
========================================================================

           S1                            S2
        ___|___                       ___|___
           |                             |
           |                             |
           #                             #
            \                           /
             \                         /
              \___________RP_________/
                         /|\
         ________________/|\_______________
        /        _______/ | \______        \
       #        #         #        #        #
    ___|___  ___|___   ___|___  ___|___  ___|___
       |        |         |        |        |
       R        R         R        R        R

     LEGEND

        #   PIM Router
        R   Multicast Receiver

             Figure 24. Rendezvous Point
========================================================================
When joining a group, each receiver uses IGMP to notify its directly- | When joining a group, each receiver uses IGMP to notify its directly- | |||
attached router, which in turn joins the multicast delivery tree by | attached router, which in turn joins the multicast delivery tree by | |||
sending an explicit PIM-Join message hop-by-hop toward the group's | sending an explicit PIM-Join message hop-by-hop toward the group's | |||
primary RP. A source uses the RP to announce its presence, and to act | RP. A source uses the RP to announce its presence, and to act as a | |||
as a conduit to members that have joined the group. This model requires | conduit to members that have joined the group. This model requires sparse-mode | |||
sparse-mode routers to maintain a bit of state (i.e., the RP-set for | routers to maintain a bit of state (the RP-set for the sparse-mode | |||
each defined sparse-mode group) prior to the arrival of data. In | region) prior to the arrival of data. In contrast, because dense-mode | |||
contrast, dense mode protocols are data-driven, since they do not store | protocols are data-driven, they do not store any state for a group until | |||
any state for a group until the arrival of the first data packet. | the arrival of its first data packet. | |||
There is only one RP-set per sparse-mode domain, not per group. | ||||
Moreover, the creator of a group is not involved in RP selection. Also, | ||||
there is no such concept as a "primary" RP. Each group has precisely | ||||
one RP at any given time. In the event of the failure of an RP, a new | ||||
RP-set is distributed which does not include the failed RP. | ||||
8.1.1 Directly Attached Host Joins a Group | 8.1.1 Directly Attached Host Joins a Group | |||
When there is more than one PIM router connected to a multi-access LAN, | When there is more than one PIM router connected to a multi-access LAN, | |||
the router with the highest IP address is selected to function as the | the router with the highest IP address is selected to function as the | |||
Designated Router (DR) for the LAN. The DR may or may not be | Designated Router (DR) for the LAN. The DR may or may not be | |||
responsible for the transmission of IGMP Host Membership Query messages, | responsible for the transmission of IGMP Host Membership Query messages, | |||
but does send Join/Prune messages toward the RP, and maintains the | but does send Join/Prune messages toward the RP, and maintains the | |||
status of the active RP for local senders to multicast groups. | status of the active RP for local senders to multicast groups. | |||
When the DR receives an IGMP Report message for a new group, the DR | When the DR receives an IGMP Report message for a new group, the DR | |||
determines if the group is RP-based or not by examining the group | determines if the group is RP-based or not by examining the group | |||
address. If the address indicates a SM group (by virtue of the group- | address. If the address indicates a SM group (by virtue of the group- | |||
specific state that even inactive groups have stored in all PIM | specific state that even inactive groups have stored in all PIM | |||
routers), the DR performs a deterministic hash function over the | routers), the DR performs a deterministic hash function over the | |||
sparse-mode region's RP-set to uniquely determine the RP for the | ||||
group. | ||||
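The deterministic hash mapping from group address to RP can be sketched as follows. This is a minimal illustration, not the hash function from the PIM-SM specification: the mixing constants, the `HASH_MASK` value, and all names are assumptions. The property that matters is that every router hashing the same group over the same RP-set independently selects the same single RP.

```python
# Sketch of deterministic hash-based RP selection (illustrative only;
# the constants and mask are NOT the PIM-SM specification's values).
HASH_MASK = 0xFFFFFF00  # assumed: hash on the high-order group bits

def _mix(group: int, candidate_rp: int) -> int:
    # Linear-congruential style mixing of the masked group address
    # with one candidate RP address; the result fits in 31 bits.
    h = (1103515245 * (group & HASH_MASK) + 12345) & 0x7FFFFFFF
    return (1103515245 * (h ^ candidate_rp) + 12345) & 0x7FFFFFFF

def select_rp(group: int, rp_set: list) -> int:
    # Highest hash value wins; ties broken by highest RP address,
    # so the choice is independent of the RP-set's ordering.
    return max(rp_set, key=lambda rp: (_mix(group, rp), rp))
```

Because the function is deterministic, no protocol exchange is needed to agree on the active RP; distributing a new RP-set (for example, after an RP failure) implicitly remaps the affected groups.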
======================================================================== | ======================================================================== | |||
Source (S) | Source (S) | |||
_|____ | _|____ | |||
| | | | |||
| | | | |||
# | # | |||
/ \ | / \ | |||
/ \ | / \ | |||
Designated / \ | Designated / \ | |||
Host | Router / \ Rendezvous Point | Host | Router / \ Rendezvous Point | |||
-----|- # - - - - - -#- - - - - - - -RP for group G | -----|- # - - - - - -#- - - - - - - -RP for group G | |||
(receiver) | ----Join--> ----Join--> | (receiver) | ----Join--> ----Join--> | |||
| | | | |||
LEGEND | LEGEND | |||
# PIM Router RP Rendezvous Point | # PIM Router RP Rendezvous Point | |||
Figure 25: Host Joins a Multicast Group | Figure 18: Host Joins a Multicast Group | |||
======================================================================== | ======================================================================== | |||
group's RP-set to uniquely determine the primary RP for the group. | After performing the lookup, the DR creates a multicast forwarding entry | |||
(Otherwise, this is a dense-mode group and dense-mode forwarding rules | for the (*, group) pair and transmits a unicast PIM-Join message toward | |||
apply.) | the primary RP for this specific group. The (*, group) notation | |||
After performing the lookup, the DR creates a multicast forwarding cache | ||||
entry for the (*, group) pair and transmits a unicast PIM-Join message | ||||
toward the primary RP for this specific group. The (*, group) notation | ||||
indicates an (any source, group) pair. The intermediate routers forward | indicates an (any source, group) pair. The intermediate routers forward | |||
the unicast PIM-Join message, creating a forwarding cache entry for the | the unicast PIM-Join message, creating a forwarding entry for the | |||
(*, group) pair only if such a forwarding entry does not yet exist. | (*, group) pair only if such a forwarding entry does not yet exist. | |||
Intermediate routers must create a forwarding cache entry so that they | Intermediate routers must create a forwarding entry so that they will be | |||
will be able to forward future traffic downstream toward the DR which | able to forward future traffic downstream toward the DR which originated | |||
originated the PIM-Join message. | the PIM-Join message. | |||
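The hop-by-hop creation of (*, group) state can be sketched with a simplified router model. The class and attribute names below are illustrative assumptions, not PIM-SM terms; the point is that a join creates state only where none yet exists and is absorbed by the first router already on the tree.

```python
class Router:
    """Illustrative sketch of (*, G) join processing; not a PIM-SM
    implementation."""

    def __init__(self, name, next_hop_to_rp=None):
        self.name = name
        self.next_hop_to_rp = next_hop_to_rp  # None: this router is the RP
        self.forwarding = {}  # ("*", group) -> set of downstream neighbors

    def receive_join(self, group, downstream):
        entry = ("*", group)
        is_new = entry not in self.forwarding
        self.forwarding.setdefault(entry, set()).add(downstream)
        # Propagate toward the RP only when state was just created;
        # otherwise the delivery tree already extends upstream.
        if is_new and self.next_hop_to_rp is not None:
            self.next_hop_to_rp.receive_join(group, downstream=self)
```

Joining through a chain DR -> intermediate -> RP leaves a (*, group) entry at every hop, which is exactly the state later used to forward traffic back down toward the DR.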
8.1.2 Directly Attached Source Sends to a Group | 8.1.2 Directly Attached Source Sends to a Group | |||
When a source first transmits a multicast packet to a group, its DR | When a source first transmits a multicast packet to a group, its DR | |||
forwards the datagram to the primary RP for subsequent distribution | forwards the datagram to the primary RP for subsequent distribution | |||
along the group's delivery tree. The DR encapsulates the initial | along the group's delivery tree. The DR encapsulates the initial | |||
multicast packets in a PIM-SM-Register packet and unicasts them toward | multicast packets in a PIM-SM-Register packet and unicasts them toward | |||
the primary RP for the group. The PIM-SM-Register packet informs the | the primary RP for the group. The PIM-SM-Register packet informs the | |||
RP of a new source which causes the active RP to transmit PIM-Join | RP of a new source which causes the active RP to transmit PIM-Join | |||
messages back toward the source's DR. The routers between the RP and | messages back toward the source's DR. The routers between the RP and | |||
the source's DR use the re- ceived PIM-Join messages (from the RP) to | the source's DR use the received PIM-Join messages (from the RP) to | |||
create forwarding state for the new (source, group) pair. Now all | create forwarding state for the new (source, group) pair. Now all | |||
routers from the active RP for this sparse-mode group to the source's DR | routers from the active RP for this sparse-mode group to the source's DR | |||
will be able to forward future unencapsulated multicast packets from | will be able to forward future unencapsulated multicast packets from | |||
this source subnetwork to the RP. Until the (source, group) state has | this source subnetwork to the RP. Until the (source, group) state has | |||
been created in all the routers between the RP and source's DR, the DR | been created in all the routers between the RP and source's DR, the DR | |||
must continue to send the source's multicast IP packets to the RP as | must continue to send the source's multicast IP packets to the RP as | |||
unicast packets encapsulated within unicast PIM-Register packets. The | unicast packets encapsulated within unicast PIM-Register packets. The | |||
DR may stop forwarding multicast packets encapsulated in this manner | DR may stop forwarding multicast packets encapsulated in this manner | |||
once it has received a PIM-Register-Stop message from the active RP for | once it has received a PIM-Register-Stop message from the active RP for | |||
this group. The RP may send PIM-Register-Stop messages if there are no | this group. The RP may send PIM-Register-Stop messages if there are no | |||
downstream receivers for a group, or if the RP has successfully joined | downstream receivers for a group, or if the RP has successfully joined | |||
the (source, group) tree (which originates at the source's DR). | the (source, group) tree (which originates at the source's DR). | |||
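The DR's side of the register exchange reduces to a two-phase state machine: encapsulate everything toward the RP until a PIM-Register-Stop arrives, then revert to ordinary multicast forwarding. The sketch below uses assumed names (`SourceDR`, `RegPhase`) and omits timers and re-registration.

```python
from enum import Enum

class RegPhase(Enum):
    REGISTERING = 1  # tunnel data to the RP inside Register messages
    NATIVE = 2       # (S, G) state exists upstream; send unencapsulated

class SourceDR:
    def __init__(self, rp_address):
        self.rp_address = rp_address
        self.phase = RegPhase.REGISTERING

    def forward(self, packet):
        if self.phase is RegPhase.REGISTERING:
            # Encapsulate in a PIM-Register and unicast toward the RP.
            return ("PIM-Register", self.rp_address, packet)
        # (S, G) forwarding state is in place along the path to the RP.
        return ("multicast", packet)

    def on_register_stop(self):
        # The RP has joined the (S, G) tree, or has no receivers.
        self.phase = RegPhase.NATIVE
```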
======================================================================== | ======================================================================== | |||
Source (S) | Source (S) | |||
_|____ | _|____ | |||
| | | | |||
| | | | |||
# | # v | |||
/ \ | /.\ , | |||
/ ^\ | / ^\ v | |||
/ .\ | / .\ , | |||
# ^# | # ^# v | |||
/ .\ | / .\ , | |||
Designated / ^\ | Designated / ^\ v | |||
Host | Router / .\ v | Host | Host | Router / .\ , | Host | |||
-----|-#- - - - - - -#- - - - - - - -RP- - - # - - -|----- | -----|-#- - - - - - -#- - - - - - - -RP- - - # - - -|----- | |||
(receiver) | <~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~> | (receiver) | (receiver) | <~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~> | (receiver) | |||
LEGEND | LEGEND | |||
# PIM Router | # PIM Router | |||
RP Rendezvous Point | RP Rendezvous Point | |||
PIM-Register | > , > PIM-Register | |||
< . < PIM-Join | < . < PIM-Join | |||
~ ~ ~ Resend to group members | ~ ~ ~ Resend to group members | |||
Figure 26: Source sends to a Multicast Group | Figure 19: Source sends to a Multicast Group | |||
======================================================================== | ======================================================================== | |||
8.1.3 Shared Tree (RP-Tree) or Shortest Path Tree (SPT)? | 8.1.3 Shared Tree (RP-Tree) or Shortest Path Tree (SPT)? | |||
The RP-tree provides connectivity for group members but does not | The RP-tree provides connectivity for group members but does not | |||
optimize the delivery path through the internetwork. PIM-SM allows | optimize the delivery path through the internetwork. PIM-SM allows | |||
receivers to either continue to receive multicast traffic over the | routers to either a) continue to receive multicast traffic over the | |||
shared RP-tree or over a source-based shortest-path tree that a receiver | shared RP-tree, or b) subsequently create a source-based shortest-path | |||
subsequently creates. The shortest-path tree allows a group member to | tree on behalf of their attached receiver(s). Besides reducing the | |||
reduce the delay between itself and a particular source. | delay between this router and the source (beneficial to its attached | |||
receivers), the shortest-path tree also reduces traffic concentration effects | |||
on the RP-tree. | ||||
A PIM router with local receivers has the option of switching to the | A PIM-SM router with local receivers has the option of switching to the | |||
source's shortest-path tree (i.e., source-based tree) once it starts | source's shortest-path tree (i.e., source-based tree) once it starts | |||
receiving data packets from the source. The changeover may be | receiving data packets from the source. The changeover may be | |||
triggered if the data rate from the source exceeds a predefined | triggered if the data rate from the source exceeds a predefined | |||
threshold. The local receiver's DR does this by sending a Join | threshold. The local receiver's last-hop router does this by sending a | |||
message toward the active source. After the source-based SPT is | Join message toward the active source. After the source-based SPT is | |||
active, protocol mechanisms allow a Prune message for the same source | active, protocol mechanisms allow a Prune message for the same source | |||
to be transmitted to the active RP, thus removing this router from the | to be transmitted to the active RP, thus removing this router from the | |||
shared RP-tree. Alternatively, the DR may be configured to continue | shared RP-tree. Alternatively, the DR may be configured to continue | |||
using the shared RP-tree and never switch over to the source-based SPT, | using the shared RP-tree and never switch over to the source-based SPT, | |||
or a router could perhaps use a different administrative metric to | or a router could perhaps use a different administrative metric to | |||
decide if and when to switch to a source-based tree. | decide if and when to switch to a source-based tree. | |||
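The data-rate trigger can be sketched as a per-source counter compared against a configured threshold. The threshold value and measurement window below are assumptions; the text leaves both implementation-defined.

```python
SPT_THRESHOLD_BPS = 4000  # assumed rate threshold in bytes/sec

class SourceMonitor:
    """Illustrative per-(source, group) rate monitor at a last-hop
    router; exceeding the threshold suggests joining the SPT."""

    def __init__(self, threshold_bps=SPT_THRESHOLD_BPS):
        self.threshold_bps = threshold_bps
        self.bytes_seen = 0

    def count(self, packet_len):
        self.bytes_seen += packet_len

    def should_switch(self, window_secs):
        # True once the measured rate over the window exceeds the
        # threshold, i.e. when a Join toward the source is warranted.
        return (self.bytes_seen / window_secs) > self.threshold_bps
```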
======================================================================== | ======================================================================== | |||
Source (S) | Source (S) | |||
% / \* | % / \* | |||
% / \* | % / \* | |||
% / \* | % / \* | |||
Designated % # #* | Designated % # #* | |||
Router % / \* | Router % / \* | |||
% / \* | % / \* | |||
Host | <-% % % % % % / \v | Host | <-% % % % % % / \v | |||
-----|-#- - - - - - -#- - - - - - - -RP | -----|-#- - - - - - -#- - - - - - - -RP | |||
(receiver) | <* * * * * * * * * * * * * * * | (receiver) | <* * * * * * * * * * * * * * * | |||
| | | | |||
LEGEND | LEGEND | |||
# PIM Router | # PIM Router | |||
RP Rendezvous Point | RP Rendezvous Point | |||
* * RP Tree | * * RP-Tree (Shared) | |||
% % SPT Tree | % % Shortest-Path Tree (Source-based) | |||
Figure 27: Shared RP-Tree and Shortest Path Tree (SPT) | Figure 20: Shared RP-Tree and Shortest Path Tree (SPT) | |||
======================================================================== | ======================================================================== | |||
8.1.4 Unresolved Issues | Besides a last-hop router being able to switch to a source-based tree, | |||
there is also the capability of the RP for a group to transition to a | ||||
It is important to note that PIM is an Internet draft. This means that | source's shortest-path tree. Similar controls (bandwidth threshhold, | |||
it is still early in its development cycle and clearly a "work in | administrative weights, etc.) can be used at an RP to influence these | |||
decisions. | ||||
progress." There are several important issues that require further | ||||
research, engineering, and/or experimentation: | ||||
o PIM-SM requires routers to maintain a non-trivial | ||||
amount of state information to describe sources | ||||
and groups. | ||||
o Some multicast routers will be required to have | ||||
both PIM interfaces and non-PIM interfaces. The | ||||
interaction and sharing of multicast routing | ||||
information between PIM and other multicast | ||||
routing protocols is still being defined. | ||||
Due to these reasons, especially the need to get operational experience | ||||
with the protocol, when PIM is finally published as an RFC, it will not | ||||
immediately be placed on the standards-track; rather it will be | ||||
classified as experimental. After sufficient operational experience | ||||
has been obtained, presumably a slightly altered specification will be | ||||
defined that incorporates lessons learned during the experimentation | ||||
phase, and that new specification will then be placed on the standards | ||||
track. | ||||
8.2 Core-Based Trees (CBT) | 8.2 Core Based Trees (CBT) | |||
Core Based Trees is another multicast architecture that is based on a | Core Based Trees is another multicast architecture that is based on a | |||
shared delivery tree. It is specifically intended to address the | shared delivery tree. It is specifically intended to address the | |||
important issue of scalability when supporting multicast applications | important issue of scalability when supporting multicast applications | |||
across the public Internet. CBT is also designed to enable | across the public Internet. | |||
interoperability between distinct "clouds" on the Internet, each | ||||
executing a different multicast routing protocol. | ||||
Similar to PIM, CBT is protocol-independent. CBT employs the | Similar to PIM-SM, CBT is protocol-independent. CBT employs the | |||
information contained in the unicast routing table to build its shared | information contained in the unicast routing table to build its shared | |||
delivery tree. It does not care how the unicast routing table is | delivery tree. It does not care how the unicast routing table is | |||
derived, only that a unicast routing table is present. This feature | derived, only that a unicast routing table is present. This feature | |||
allows CBT to be deployed without requiring the presence of any specific | allows CBT to be deployed without requiring the presence of any specific | |||
unicast routing protocol. | unicast routing protocol. | |||
8.2.1 Joining a Group's Shared Tree | Another similarity to PIM-SM is that CBT has adopted the core discovery | |||
mechanism ("bootstrap") defined in the PIM-SM specification. For | |||
inter-domain discovery, efforts are underway to standardize (or at least | ||||
separately specify) a common RP/Core discovery mechanism. The intent is | ||||
that any shared tree protocol could implement this common discovery | ||||
mechanism using its own protocol message types. | ||||
When a multi-access network has more than one CBT router, one of the | In a significant departure from PIM-SM, CBT has decided to maintain its | |||
routers is elected the designated router (DR) for the subnetwork. The | scaling characteristics by not offering the option of shifting from a | |||
DR is responsible for transmitting IGMP Queries and for initiating the | Shared Tree (e.g., PIM-SM's RP-Tree) to a Shortest Path Tree (SPT) to | |||
construction of a branch that links directly-attached group members to | optimize delay. The designers of CBT believe that this is a critical | |||
the shared distribution tree for the group. The router on the subnetwork | decision since when multicasting becomes widely deployed, the need for | |||
with the lowest IP address is elected the IGMP Querier and also serves | routers to maintain large amounts of state information will become the | |||
as the CBT DR. | overpowering scaling factor. | |||
When the DR receives an IGMP Host Membership Report for a new group, it | Finally, unlike PIM-SM's shared tree state, CBT state is bi-directional. | |||
transmits a CBT Join-Request to the next-hop router on the unicast path | Data may therefore flow in either direction along a branch. Thus, data | |||
from a source which is directly attached to an existing tree branch need | ||||
not be encapsulated. | ||||
to the "target core" for the multicast group. The identification of the | 8.2.1 Joining a Group's Shared Tree | |||
"target core" is based on static configuration. | ||||
The Join-Request is processed by all intermediate CBT routers, each of | A host that wants to join a multicast group issues an IGMP host | |||
which identifies the interface on which the Join-Request was received as | membership report. This message informs its local CBT-aware router(s) | |||
part of this group's delivery tree. The intermediate routers continue | that it wishes to receive traffic addressed to the multicast group. | |||
to forward the Join-Request toward the target core and to mark local | Upon receipt of an IGMP host membership report for a new group, the | |||
interfaces until the request reaches either 1) a core router, or 2) a | local CBT router issues a JOIN_REQUEST hop-by-hop toward the group's | |||
router that is already on the distribution tree for this group. | core router. | |||
In either case, this router stops forwarding the Join-Request and | If the JOIN_REQUEST encounters a router that is already on the group's | |||
responds with a Join-Ack which follows the path back to the DR which | shared tree before it reaches the core router, then that router issues a | |||
initiated the Join-Request. The Join-Ack fixes the state in each of the | JOIN_ACK hop-by-hop back toward the sending router. If the JOIN_REQUEST | |||
intermediate routers causing the interfaces to become part of the | does not encounter an on-tree CBT router along its path towards the | |||
distribution tree for the multicast group. The newly constructed branch | core, then the core router is responsible for responding with a | |||
is made up of non-core (i.e., "on-tree") routers providing the shortest | JOIN_ACK. In either case, each intermediate router that forwards the | |||
path between a member's directly attached DR and a core. | JOIN_REQUEST towards the core is required to create a transient "join | |||
state." This transient "join state" includes the multicast group, and | ||||
the JOIN_REQUEST's incoming and outgoing interfaces. This information | ||||
allows an intermediate router to forward returning JOIN_ACKs along the | ||||
exact reverse path to the CBT router which initiated the JOIN_REQUEST. | ||||
Once a branch is created, each child router monitors the status of its | As the JOIN_ACK travels towards the CBT router that issued the | |||
parent router with a keepalive mechanism. A child router periodically | JOIN_REQUEST, each intermediate router creates new "active state" for | |||
unicasts a CBT-Echo-Request to its parent router which is then required | this group. New branches are established by having the intermediate | |||
to respond with a unicast CBT-Echo-Reply message. | routers remember which interface is upstream, and which interface(s) | |||
is(are) downstream. Once a new branch is created, each child router | ||||
monitors the status of its parent router with a keepalive mechanism, | ||||
the CBT "Echo" protocol. A child router periodically unicasts a | ||||
CBT_ECHO_REQUEST to its parent router, which is then required to respond | ||||
with a unicast CBT_ECHO_REPLY message. | ||||
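The two-phase exchange can be sketched as follows: forwarding a JOIN_REQUEST leaves transient per-group state recording which child sent it, and the returning JOIN_ACK pops that state hop by hop, converting it into an active branch. All names are illustrative; the real protocol tracks interfaces rather than router objects.

```python
class CBTRouter:
    """Illustrative sketch of CBT JOIN_REQUEST / JOIN_ACK handling."""

    def __init__(self, name, toward_core=None, is_core=False):
        self.name = name
        self.toward_core = toward_core  # next-hop router toward the core
        self.is_core = is_core
        self.transient = {}  # group -> child awaiting a JOIN_ACK
        self.children = {}   # group -> set of active downstream children

    def join_request(self, group, child):
        if self.is_core or group in self.children:
            # The core, or a router already on-tree, answers the join.
            self.join_ack(group, child)
        else:
            self.transient[group] = child
            self.toward_core.join_request(group, child=self)

    def join_ack(self, group, child):
        # Activate the branch toward this child, then let the ACK
        # retrace the transient state back to the join's originator.
        self.children.setdefault(group, set()).add(child)
        if isinstance(child, CBTRouter) and group in child.transient:
            child.join_ack(group, child.transient.pop(group))
```

After the ACK has retraced the path, no transient state remains and every hop holds active child state for the group's bi-directional branch.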
======================================================================== | ======================================================================== | |||
#- - - -#- - - - -# | #- - - -#- - - - -# | |||
| \ | | \ | |||
| # | | # | |||
| | | | |||
# - - - - # | # - - - - # | |||
member | | | member | | | |||
host --| | | host --| | | |||
| <--ACK-- <--ACK-- <--ACK-- | | <--ACK-- <--ACK-- <--ACK-- | |||
| | | | |||
LEGEND | LEGEND | |||
[DR] CBT Designated Router | [DR] CBT Designated Router | |||
[:] CBT Router | [:] CBT Router | |||
[@] Target Core Router | [@] Target Core Router | |||
# CBT Router that is already on the shared tree | # CBT Router that is already on the shared tree | |||
Figure 28: CBT Tree Joining Process | Figure 21: CBT Tree Joining Process | |||
======================================================================== | ======================================================================== | |||
It is only necessary to implement a single "keepalive" mechanism on each | If, for any reason, the link between an on-tree router and its parent | |||
link regardless of the number of multicast groups that are sharing the | should fail, or if the parent router is otherwise unreachable, the | |||
link. If for any reason the link between the child and parent should | on-tree router transmits a FLUSH_TREE message on its child interface(s) | |||
fail, the child is responsible for re-attaching itself and its | which initiates the tearing down of all downstream branches for the | |||
downstream children to the shared delivery tree. | multicast group. Each downstream router is then responsible for | |||
re-attaching itself (provided it has a directly attached group member) | ||||
to the group's shared delivery tree. | ||||
8.2.2 Primary and Secondary Cores | The Designated Router (DR) is elected by CBT's "Hello" protocol and | |||
functions as THE single upstream router for all groups using that link. | ||||
The DR is not necessarily the best next-hop router to every core for | ||||
every multicast group. The implication is that it is possible for a | ||||
JOIN_REQUEST to be redirected by the DR across a link to the best | ||||
next-hop router providing access to a given group's core. Note that data | |||
traffic is never duplicated across a link, only JOIN_REQUESTs, and the | ||||
volume of this JOIN_REQUEST traffic should be negligible. | ||||
Instead of a single active "core" or "rendezvous point," CBT may have | 8.2.2 Data Packet Forwarding | |||
multiple active cores to increase robustness. The initiator of a | ||||
multicast group elects one of these routers as the Primary Core, while | ||||
all other cores are classified as Secondary Cores. The Primary Core must | ||||
be uniquely identified for the entire multi- cast group. | ||||
Whenever a group member joins to a secondary core, the secondary core | When a JOIN_ACK is received by an intermediate router, it either adds | |||
router ACKs the Join-Request and then joins toward the Primary Core. | the interface over which the JOIN_ACK was received to an existing | |||
Since each Join-Request contains the identity of the Primary Core for | forwarding cache entry, or creates a new entry if one does not already | |||
the group, the secondary core can easily determine the identity of the | exist for the multicast group. When a CBT router receives a data packet | |||
Primary Core for the group. This simple process allows the CBT tree | addressed to the multicast group, it simply forwards the packet over all | |||
to become fully connected as individual members join the multicast | outgoing interfaces as specified by the forwarding cache entry for the | |||
group. | group. | |||
======================================================================== | 8.2.3 Non-Member Sending | |||
+----> [PC] <-----------+ | ||||
| ^ | | ||||
Join | | Join | Join | ||||
| | | | ||||
| | | | ||||
[SC] [SC] [SC] [SC] [SC] <-----+ | ||||
^ ^ ^ | | ||||
| | | | | ||||
Join | | Join Join | Join | | ||||
| | | | | ||||
| | | | | ||||
[x] [x] [x] [x] | ||||
: : : : | ||||
member member member member | ||||
host host host host | ||||
LEGEND | ||||
[PC] Primary Core Router | ||||
[SC] Secondary Core Router | ||||
[x] Member-hosts' directly-attached routers | ||||
Figure 29: Primary and Secondary Core Routers | ||||
======================================================================== | ||||
8.2.3 Data Packet Forwarding | ||||
After a Join-Ack is received by an intermediate router, it creates a CBT | ||||
forwarding information base (FIB) entry listing all interfaces that | ||||
are part of the specified group's delivery tree. When a CBT router | ||||
receives a packet addressed to the multicast group, it simply forwards | ||||
the packet over all outgoing interfaces as specified by the FIB entry | ||||
for the group. | ||||
A CBT router may forward a multicast data packet in either "CBT Mode" or | ||||
"Native Mode." | ||||
o CBT Mode is designed for operation in heterogeneous | ||||
environments that may include non-multicast capable | ||||
routers or mrouters that do not implement (or are not | ||||
configured for) CBT. Under these conditions, CBT Mode | ||||
is used to encapsulate the data packet in a CBT header | ||||
and "tunnel" it between CBT-capable routers (or islands). | ||||
o Native Mode is designed for operation in a homogeneous | ||||
environment where all routers implement the CBT routing | ||||
protocol and no specialized encapsulation is required. | ||||
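Since CBT tree state is bi-directional, a data packet arriving on any on-tree interface is sent out every other on-tree interface for the group. A minimal sketch, assuming a forwarding cache keyed by group address (names are illustrative):

```python
def cbt_forward(fib, group, arrival_iface, packet):
    """Return (interface, packet) pairs for every on-tree interface
    except the one the packet arrived on; empty if group is unknown."""
    tree_ifaces = fib.get(group, set())
    return [(iface, packet)
            for iface in sorted(tree_ifaces)
            if iface != arrival_iface]
```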
8.2.4 Non-Member Sending | ||||
Similar to other multicast routing protocols, CBT does not require that | Similar to other multicast routing protocols, CBT does not require that | |||
the source of a multicast packet be a member of the multicast group. | the source of a multicast packet be a member of the multicast group. | |||
However, for a multicast data packet to reach the core tree for the | However, for a multicast data packet to reach the active core for the | |||
group, at least one CBT-capable router must be present on the non-member | group, at least one CBT-capable router must be present on the non-member | |||
source station's subnetwork. The local CBT-capable router employs CBT | source station's subnetwork. The local CBT-capable router employs | |||
Mode encapsulation and unicasts the data packet toward a core for the | IP-in-IP encapsulation and unicasts the data packet to the active core | |||
multicast group. When the encapsulated packet encounters an on-tree | for delivery to the rest of the multicast group. | |||
router (or the target core), the packet is forwarded as required by the | ||||
CBT specification. | ||||
8.2.5 Emulating Shortest-Path Trees | ||||
The most common criticism of shared tree protocols is that they offer | ||||
sub-optimal routes and that they create high levels of traffic | ||||
concentration at the core routers. One recent proposal in CBT | ||||
technology is a mechanism to dynamically reconfigure the core-based tree | ||||
so that it becomes rooted at the source station's local CBT router. In | ||||
effect, the CBT becomes a source-based tree but still remains a CBT (one | ||||
with a core that now happens to be adjacent to the source). If | ||||
successfully tested and demonstrated, this technique could allow CBT to | ||||
emulate a shortest-path tree, providing more-optimal routes and reducing | ||||
traffic concentration among the cores. These new mechanisms are being | ||||
designed with an eye toward preserving CBT's simplicity and scalability, | ||||
while addressing key perceived weaknesses of the CBT protocol. Note | ||||
that PIM-SM also has a similar technique whereby a source-based delivery | ||||
tree can be selected by certain receivers. | ||||
For this mechanism, every CBT router is responsible for monitoring the | 8.2.4 CBT Multicast Interoperability | |||
transmission rate and duration of each source station on a directly | ||||
attached subnetwork. If a pre-defined threshold is exceeded, the local | ||||
CBT router may initiate steps to transition the CBT tree so that the | ||||
group's receivers become joined to a "core" that is local to the source | ||||
station's subnetwork. This is accomplished by having the local router | ||||
encapsulate traffic in CBT Mode and place its own IP address in the | ||||
"first-hop router" field. All routers on the CBT tree examine the | ||||
"first-hop router" field in every CBT Mode data packet. If this field | ||||
contains a non-NULL value, each router transmits a Join-Request toward | ||||
the address specified in the "first-hop router" field. It is important | ||||
to note that on the publication date of this "Introduction to IP | ||||
Multicast Routing" RFC, these proposed mechanisms to support dynamic | ||||
source-migration of cores have not yet been tested, simulated, or | ||||
demonstrated. | ||||
8.2.4 CBT Multicast Interoperability

Multicast interoperability is being defined in several stages. Stage 1
is concerned with the attachment of non-DVMRP stub domains to a DVMRP
backbone (e.g., the MBone). Work is currently underway in the IDMR
working group to describe the attachment of stub-CBT and stub-PIM
domains to a DVMRP backbone. Future work will focus on developing
methods of connecting non-DVMRP transit domains to a DVMRP backbone.
========================================================================

        /---------------\        /---------------\
        |               |        |               |
        |               |        |               |
        |     DVMRP     |--[BR]--|  CBT Domain   |
        |    Backbone   |        |               |
        |               |        |               |
        \---------------/        \---------------/

               Figure 22: Domain Border Routers (BRs)
========================================================================

CBT interoperability will be achieved through the deployment of domain
border routers (BRs), which enable the forwarding of multicast traffic
between the CBT and DVMRP domains. The BR implements DVMRP and CBT on
different interfaces and is responsible for forwarding data across the
domain boundary.
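As a sketch of this arrangement (the interface and domain names are
invented, and a real BR would also handle encapsulation and per-group
state, which is omitted here):

```python
class BorderRouter:
    """Toy model of a BR: each interface is bound to one routing domain,
    and group traffic is relayed to the interfaces of the other domain."""

    def __init__(self, iface_domain):
        # e.g. {"eth0": "DVMRP", "eth1": "CBT"} -- names are illustrative
        self.iface_domain = iface_domain

    def forward(self, arrival_iface):
        """Relay a multicast packet across the domain boundary: send it
        out of every interface belonging to the *other* domain."""
        src_domain = self.iface_domain[arrival_iface]
        return sorted(i for i, d in self.iface_domain.items()
                      if d != src_domain)


br = BorderRouter({"eth0": "DVMRP", "eth1": "CBT", "eth2": "CBT"})
br.forward("eth0")  # DVMRP side in -> both CBT interfaces out
br.forward("eth1")  # CBT side in -> the DVMRP interface out
```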
The BR is also responsible for exporting selected routes out of the
CBT domain into the DVMRP domain. While the CBT stub domain never
needs to import routes, the DVMRP backbone needs to import routes to
any sources of traffic inside the CBT domain. These routes must be
imported so that DVMRP can perform its Reverse Path Forwarding (RPF)
check, which is required for the construction of its forwarding table.
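The RPF check that motivates this route import can be illustrated with
a deliberately simplified model, in which the routing table is a plain
mapping from a source prefix to the interface used to reach it (the
names and prefixes below are illustrative only):

```python
def rpf_check(route_table, source_prefix, arrival_iface):
    """Accept a multicast packet only if it arrived on the interface
    this router would itself use to reach the packet's source;
    otherwise the packet is discarded."""
    expected_iface = route_table.get(source_prefix)
    return expected_iface is not None and expected_iface == arrival_iface


# Routes to CBT-internal sources, as exported by the border router.
routes = {"192.0.2.0/24": "eth1"}

rpf_check(routes, "192.0.2.0/24", "eth1")     # accepted: correct interface
rpf_check(routes, "192.0.2.0/24", "eth2")     # dropped: wrong interface
rpf_check(routes, "198.51.100.0/24", "eth0")  # dropped: no route imported
```

Without the routes exported by the BR, the lookup for a CBT-internal
source would always fail and the DVMRP routers would discard that
group's traffic, which is exactly why the import is required.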
CBT has adopted the core discovery mechanism ("bootstrap") defined in
the PIM-SM specification. For inter-domain discovery, efforts are
underway to standardize (or at least separately specify) a common
RP/Core discovery mechanism. The intent is that any shared tree
protocol could implement this common discovery mechanism using its own
protocol message types.
In a significant departure from PIM-SM, CBT has decided to maintain
its scaling characteristics by not offering the option of shifting
from a Shared Tree (e.g., PIM-SM's RP-Tree) to a Shortest Path Tree
(SPT) to optimize delay. The designers of CBT believe this is a
critical decision: once multicasting becomes widely deployed, the need
for routers to maintain large amounts of state information will become
the dominant scaling factor.

Finally, unlike PIM-SM's shared tree state, CBT state is
bi-directional. Data may therefore flow in either direction along a
branch. Thus, data from a source which is directly attached to an
existing tree branch need not be encapsulated.
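The forwarding consequence of bi-directional state can be sketched as
follows; both functions are illustrative simplifications rather than
either protocol's actual forwarding rules:

```python
def forward_bidir(tree_ifaces, arrival_iface):
    """Bi-directional (CBT-style) state: a packet arriving on ANY
    on-tree interface is forwarded out of all the other on-tree
    interfaces, so an on-tree source needs no encapsulation."""
    return [i for i in tree_ifaces if i != arrival_iface]


def forward_unidir(parent_iface, child_ifaces, arrival_iface):
    """Unidirectional shared-tree state: only traffic arriving from the
    parent (the root/RP direction) flows down to the children; traffic
    from a local source must instead be sent to the root some other
    way (e.g., encapsulated)."""
    return list(child_ifaces) if arrival_iface == parent_iface else []


tree = ["eth0", "eth1", "eth2"]  # eth0 is toward the core/parent
forward_bidir(tree, "eth2")                       # ["eth0", "eth1"]
forward_unidir("eth0", ["eth1", "eth2"], "eth2")  # [] -- not from parent
```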
8.2.1 Joining a Group's Shared Tree

A host that wants to join a multicast group issues an IGMP host
membership report. This message informs its local CBT-aware router(s)
that it wishes to receive traffic addressed to the multicast group.
Upon receipt of an IGMP host membership report for a new group, the
local CBT router issues a JOIN_REQUEST hop-by-hop toward the group's
core router.

If the JOIN_REQUEST encounters a router that is already on the group's
shared tree before it reaches the core router, then that router issues
a JOIN_ACK hop-by-hop back toward the sending router. If the
JOIN_REQUEST does not encounter an on-tree CBT router along its path
toward the core, then the core router is responsible for responding
with a JOIN_ACK. In either case, each intermediate router that
forwards the JOIN_REQUEST toward the core is required to create
transient "join state," which includes the multicast group and the
JOIN_REQUEST's incoming and outgoing interfaces. This information
allows an intermediate router to forward returning JOIN_ACKs along the
exact reverse path to the CBT router which initiated the JOIN_REQUEST.

As the JOIN_ACK travels toward the CBT router that issued the
JOIN_REQUEST, each intermediate router creates new "active state" for
this group. New branches are established by having the intermediate
routers remember which interface is upstream and which interface(s)
is (are) downstream. Once a new branch is created, each child router
monitors the status of its parent router with a keepalive mechanism,
the CBT "Echo" protocol: a child router periodically unicasts a
CBT_ECHO_REQUEST to its parent router, which is then required to
respond with a unicast CBT_ECHO_REPLY message.
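Under deliberate simplifications (a static router path and sets in
place of real per-interface state; all router names are invented), the
join handshake above can be sketched as:

```python
def join(path, on_tree):
    """Walk a JOIN_REQUEST along `path` (DR first, core last). A router
    that is already on the shared tree answers with a JOIN_ACK at once;
    the ACK then retraces the reverse path, converting each hop's
    transient join state into active on-tree state. Returns the set of
    on-tree routers afterwards and the router that sent the ACK."""
    transient = []        # hops that created transient "join state"
    acker = path[-1]      # by default the core itself sends the JOIN_ACK
    for router in path:
        if router in on_tree:   # reached an on-tree router before the core
            acker = router
            break
        transient.append(router)
    # The JOIN_ACK travels back along the recorded reverse path; every
    # router that held transient state now installs active state.
    return on_tree | set(transient), acker


existing = {"core", "R2"}   # R2 is already on the shared tree
new_tree, acker = join(["DR", "R1", "R2", "core"], existing)
# The request is acknowledged by R2 and never has to reach the core.
```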
========================================================================

                                             #- - - -#- - - - -#
                                             |                 \
                                             |                  #
                                             |
                                             # - - - - #
                                             |
   member                                    |
   host --|                                  |
          |   --Join-->    --Join-->   --Join-->
          |- [DR] - - - [:] - - - - [:] - - [@]
          |   <--ACK--     <--ACK--    <--ACK--
          |

   LEGEND

      [DR]  CBT Designated Router
      [:]   CBT Router
      [@]   Target Core Router
      #     CBT Router that is already on the shared tree

                Figure 21: CBT Tree Joining Process
========================================================================

9. INTEROPERABILITY FRAMEWORK FOR MULTICAST BORDER ROUTERS

In late 1996, the IETF IDMR working group began discussing a formal
structure to describe the way different multicast routing protocols
should interact inside a multicast border router (MBR). This work can
be found in the internet draft "Interoperability Rules for Multicast
Routing Protocols," <draft-thaler-interop-00.txt>.

10. REFERENCES

10.1 Requests for Comments (RFCs)

    1075  "Distance Vector Multicast Routing Protocol," D. Waitzman,
          C. Partridge, and S. Deering, November 1988.

    1112  "Host Extensions for IP Multicasting," Steve Deering,
          August 1989.

    1583  "OSPF Version 2," John Moy, March 1994.

    1584  "Multicast Extensions to OSPF," John Moy, March 1994.

    1585  "MOSPF: Analysis and Experience," John Moy, March 1994.

    1700  "Assigned Numbers," J. Reynolds and J. Postel, October 1994.
          (STD 2)

    1800  "Internet Official Protocol Standards," Jon Postel, Editor,
          July 1995.

    1812  "Requirements for IP version 4 Routers," Fred Baker, Editor,
          June 1995.
10.2 Internet Drafts

    "Core Based Trees (CBT) Multicast: Architectural Overview,"
    <draft-ietf-idmr-cbt-arch-03.txt>, A. J. Ballardie,
    September 19, 1996.

    "Core Based Trees (CBT) Multicast: Protocol Specification,"
    <draft-ietf-idmr-cbt-spec-07.txt>, A. J. Ballardie, March 1997.

    "Core Based Tree (CBT) Multicast Border Router Specification
    for Connecting a CBT Stub Region to a DVMRP Backbone,"
    <draft-ietf-idmr-cbt-dvmrp-00.txt>, A. J. Ballardie, March 1997.

    "Distance Vector Multicast Routing Protocol,"
    <draft-ietf-idmr-dvmrp-v3-04.ps>, T. Pusateri,
    February 19, 1997.

    "Internet Group Management Protocol, Version 2,"
    <draft-ietf-idmr-igmp-v2-06.txt>, William Fenner,
    January 22, 1997.

    "Internet Group Management Protocol, Version 3,"
    <draft-cain-igmp-00.txt>, Brad Cain, Ajit Thyagarajan, and
    Steve Deering, Expired.

    "Protocol Independent Multicast-Dense Mode (PIM-DM): Protocol
    Specification," <draft-ietf-idmr-pim-dm-spec-04.ps>, D. Estrin,
    D. Farinacci, A. Helmy, V. Jacobson, and L. Wei,
    September 12, 1996.

    "Protocol Independent Multicast-Sparse Mode (PIM-SM):
    Motivation and Architecture," <draft-ietf-idmr-pim-arch-04.ps>,
    S. Deering, D. Estrin, D. Farinacci, V. Jacobson, C. Liu, and
    L. Wei, November 19, 1996.

    "Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol
    Specification," <draft-ietf-idmr-pim-sm-spec-09.ps>, D. Estrin,
    D. Farinacci, A. Helmy, D. Thaler, S. Deering, M. Handley,
    V. Jacobson, C. Liu, P. Sharma, and L. Wei, October 9, 1996.
    (Note: The results of the IESG review were announced on
    December 23, 1996: this internet draft is to be published as an
    Experimental RFC.)

    "PIM Multicast Border Router (PMBR) Specification for
    Connecting PIM-SM Domains to a DVMRP Backbone,"
    <draft-ietf-mboned-pmbr-spec-00.txt>, D. Estrin, A. Helmy, and
    D. Thaler, February 3, 1997.

    "Administratively Scoped IP Multicast,"
    <draft-ietf-mboned-admin-ip-space-01.txt>, D. Meyer,
    December 23, 1996.

    "Interoperability Rules for Multicast Routing Protocols,"
    <draft-thaler-interop-00.txt>, D. Thaler, November 7, 1996.

    See the IDMR home pages for an archive of specifications:

    <URL:http://www.cs.ucl.ac.uk/ietf/public_idmr/>
    <URL:http://www.ietf.org/html.charters/idmr-charter.html>
10.3 Textbooks

    Comer, Douglas E. Internetworking with TCP/IP, Volume 1:
    Principles, Protocols, and Architecture, Second Edition.
    Prentice Hall, Inc., Englewood Cliffs, New Jersey, 1991.

    Huitema, Christian. Routing in the Internet. Prentice Hall,
    Inc., Englewood Cliffs, New Jersey, 1995.

    Stevens, W. Richard. TCP/IP Illustrated, Volume 1: The
    Protocols. Addison-Wesley Publishing Company, Reading, MA, 1994.

    Wright, Gary and W. Richard Stevens. TCP/IP Illustrated,
    Volume 2: The Implementation. Addison-Wesley Publishing Company,
    Reading, MA, 1995.
10.4 Other

    Deering, Steven E. "Multicast Routing in a Datagram
    Internetwork," Ph.D. Thesis, Stanford University, December 1991.

    Ballardie, Anthony J. "A New Approach to Multicast Communication
    in a Datagram Internetwork," Ph.D. Thesis, University of London,
    May 1995.

    Thyagarajan, Ajit and Steve Deering. "Hierarchical Distance
    Vector Multicast Routing for the MBone," July 1995.
11. SECURITY CONSIDERATIONS

Security issues are not discussed in this memo.
12. ACKNOWLEDGEMENTS

This RFC would not have been possible without the encouragement of
Mike O'Dell and the support of Joel Halpern and David Meyer. Also
invaluable were the feedback and comments of the IETF MBoneD and IDMR
working groups. Certain people spent considerable time commenting on
and discussing this paper with the authors, and deserve to be
mentioned by name: Tony Ballardie, Steve Casner, Jon Crowcroft, Steve
Deering, Bill Fenner, Hugh Holbrook, Cyndi Jung, Shuching Shieh, Dave
Thaler, and Nair Venugopal. Our apologies to anyone we unintentionally
neglected to list here.
13. AUTHORS' ADDRESSES

    Tom Maufer
    3Com Corporation
    5400 Bayfront Plaza
    P.O. Box 58145
    Santa Clara, CA 95052-8145
    Phone: +1 408 764-8814
    Email: <maufer@3Com.com>

    Chuck Semeria
    3Com Corporation
    5400 Bayfront Plaza
    P.O. Box 58145
    Santa Clara, CA 95052-8145
    Phone: +1 408 764-7201
    Email: <semeria@3Com.com>