--- 1/draft-ietf-rtcweb-rtp-usage-17.txt 2014-10-21 16:14:43.160363472 -0700 +++ 2/draft-ietf-rtcweb-rtp-usage-18.txt 2014-10-21 16:14:43.252365695 -0700 @@ -1,21 +1,21 @@ RTCWEB Working Group C. S. Perkins Internet-Draft University of Glasgow Intended status: Standards Track M. Westerlund -Expires: February 26, 2015 Ericsson +Expires: April 24, 2015 Ericsson J. Ott Aalto University - August 25, 2014 + October 21, 2014 Web Real-Time Communication (WebRTC): Media Transport and Use of RTP - draft-ietf-rtcweb-rtp-usage-17 + draft-ietf-rtcweb-rtp-usage-18 Abstract The Web Real-Time Communication (WebRTC) framework provides support for direct interactive rich communication using audio, video, text, collaboration, games, etc. between two peers' web-browsers. This memo describes the media transport aspects of the WebRTC framework. It specifies how the Real-time Transport Protocol (RTP) is used in the WebRTC context, and gives requirements for which RTP features, profiles, and extensions need to be supported. @@ -28,21 +28,21 @@ Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at http://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." - This Internet-Draft will expire on February 26, 2015. + This Internet-Draft will expire on April 24, 2015. Copyright Notice Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. 
Please review these documents @@ -92,21 +92,21 @@ 10. Signalling Considerations . . . . . . . . . . . . . . . . . . 22 11. WebRTC API Considerations . . . . . . . . . . . . . . . . . . 24 12. RTP Implementation Considerations . . . . . . . . . . . . . . 26 12.1. Configuration and Use of RTP Sessions . . . . . . . . . 26 12.1.1. Use of Multiple Media Sources Within an RTP Session 26 12.1.2. Use of Multiple RTP Sessions . . . . . . . . . . . . 28 12.1.3. Differentiated Treatment of RTP Packet Streams . . . 32 12.2. Media Source, RTP Packet Streams, and Participant Identification . . . . . . . . . . . . . . . . . . . . . 34 12.2.1. Media Source Identification . . . . . . . . . . . . 34 - 12.2.2. SSRC Collision Detection . . . . . . . . . . . . . . 35 + 12.2.2. SSRC Collision Detection . . . . . . . . . . . . . . 34 12.2.3. Media Synchronisation Context . . . . . . . . . . . 36 13. Security Considerations . . . . . . . . . . . . . . . . . . . 36 14. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 38 15. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 38 16. References . . . . . . . . . . . . . . . . . . . . . . . . . 38 16.1. Normative References . . . . . . . . . . . . . . . . . . 38 16.2. Informative References . . . . . . . . . . . . . . . . . 41 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 43 1. Introduction @@ -116,64 +116,62 @@ time media applications. Previous work has defined the RTP protocol, along with numerous profiles, payload formats, and other extensions. When combined with appropriate signalling, these form the basis for many teleconferencing systems. The Web Real-Time communication (WebRTC) framework provides the protocol building blocks to support direct, interactive, real-time communication using audio, video, collaboration, games, etc., between two peers' web-browsers. This memo describes how the RTP framework is to be used in the WebRTC context. 
It proposes a baseline set of - RTP features that are to be implemented by all WebRTC-aware end- - points, along with suggested extensions for enhanced functionality. + RTP features that are to be implemented by all WebRTC Endpoints, + along with suggested extensions for enhanced functionality. This memo specifies a protocol intended for use within the WebRTC framework, but is not restricted to that context. An overview of the WebRTC framework is given in [I-D.ietf-rtcweb-overview]. The structure of this memo is as follows. Section 2 outlines our rationale in preparing this memo and choosing these RTP features. Section 3 defines terminology. Requirements for core RTP protocols are described in Section 4 and suggested RTP extensions are described in Section 5. Section 6 outlines mechanisms that can increase robustness to network problems, while Section 7 describes congestion control and rate adaptation mechanisms. The discussion of mandated RTP mechanisms concludes in Section 8 with a review of performance - monitoring and network management tools that can be used in the - WebRTC context. Section 9 gives some guidelines for future - incorporation of other RTP and RTP Control Protocol (RTCP) extensions - into this framework. Section 10 describes requirements placed on the - signalling channel. Section 11 discusses the relationship between - features of the RTP framework and the WebRTC application programming - interface (API), and Section 12 discusses RTP implementation - considerations. The memo concludes with security considerations - (Section 13) and IANA considerations (Section 14). + monitoring and network management tools. Section 9 gives some + guidelines for future incorporation of other RTP and RTP Control + Protocol (RTCP) extensions into this framework. Section 10 describes + requirements placed on the signalling channel. 
Section 11 discusses + the relationship between features of the RTP framework and the WebRTC + application programming interface (API), and Section 12 discusses RTP + implementation considerations. The memo concludes with security + considerations (Section 13) and IANA considerations (Section 14). 2. Rationale The RTP framework comprises the RTP data transfer protocol, the RTP control protocol, and numerous RTP payload formats, profiles, and extensions. This range of add-ons has allowed RTP to meet various needs that were not envisaged by the original protocol designers, and to support many new media encodings, but raises the question of what extensions are to be supported by new implementations. The development of the WebRTC framework provides an opportunity to review the available RTP features and extensions, and to define a common - baseline feature set for all WebRTC implementations of RTP. This - builds on the past 20 years development of RTP to mandate the use of - extensions that have shown widespread utility, while still remaining - compatible with the wide installed base of RTP implementations where - possible. + baseline RTP feature set for all WebRTC Endpoints. This builds on + the past 20 years development of RTP to mandate the use of extensions + that have shown widespread utility, while still remaining compatible + with the wide installed base of RTP implementations where possible. RTP and RTCP extensions that are not discussed in this document can - be implemented by WebRTC end-points if they are beneficial for new - use cases. However, they are not necessary to address the WebRTC use + be implemented by WebRTC Endpoints if they are beneficial for new use + cases. However, they are not necessary to address the WebRTC use cases and requirements identified in [I-D.ietf-rtcweb-use-cases-and-requirements]. 
While the baseline set of RTP features and extensions defined in this memo is targeted at the requirements of the WebRTC framework, it is expected to be broadly useful for other conferencing-related uses of RTP. In particular, it is likely that this set of RTP features and extensions will be appropriate for other desktop or mobile video conferencing systems, or for room-based high-quality telepresence applications. @@ -198,44 +196,45 @@ and transport protocol used. Bi-directional Transport-layer Flow: A bi-directional transport- layer flow is a transport-layer flow that is symmetric. That is, the transport-layer flow in the reverse direction has a 5-tuple where the source and destination address and ports are swapped compared to the forward path transport-layer flow, and the transport protocol is the same. This document uses the terminology from - [I-D.ietf-avtext-rtp-grouping-taxonomy]. Other terms are used - according to their definitions from the RTP Specification [RFC3550]. - Especially note the following frequently used terms: RTP Packet - Stream, RTP Session, and End-point. + [I-D.ietf-avtext-rtp-grouping-taxonomy] and + [I-D.ietf-rtcweb-overview]. Other terms are used according to their + definitions from the RTP Specification [RFC3550]. Especially note + the following frequently used terms: RTP Packet Stream, RTP Session, + and End-point. 4. WebRTC Use of RTP: Core Protocols The following sections describe the core features of RTP and RTCP that need to be implemented, along with the mandated RTP profiles. Also described are the core extensions providing essential features - that all WebRTC implementations need to implement to function - effectively on today's networks. + that all WebRTC Endpoints need to implement to function effectively + on today's networks. 4.1. RTP and RTCP The Real-time Transport Protocol (RTP) [RFC3550] is REQUIRED to be implemented as the media transport protocol for WebRTC. 
RTP itself comprises two parts: the RTP data transfer protocol, and the RTP control protocol (RTCP). RTCP is a fundamental and integral part of - RTP, and MUST be implemented and used in all WebRTC applications. + RTP, and MUST be implemented and used in all WebRTC Endpoints. The following RTP and RTCP features are sometimes omitted in limited functionality implementations of RTP, but are REQUIRED in all WebRTC - implementations: + Endpoints: o Support for use of multiple simultaneous SSRC values in a single RTP session, including support for RTP end-points that send many SSRC values simultaneously, following [RFC3550] and [I-D.ietf-avtcore-rtp-multi-stream]. The RTCP optimisations for multi-SSRC sessions defined in [I-D.ietf-avtcore-rtp-multi-stream-optimisation] MAY be supported; if supported the usage MUST be signalled. o Random choice of SSRC on joining a session; collision detection @@ -337,27 +336,27 @@ profile is backwards compatible with legacy systems that implement only the RTP/AVP or RTP/SAVP profile, given some constraints on parameter configuration such as the RTCP bandwidth value and "trr- int" (the most important factor for interworking with RTP/(S)AVP end-points via a gateway is to set the trr-int parameter to a value representing 4 seconds, see Section 6.1 in [I-D.ietf-avtcore-rtp-multi-stream]). The secure RTP (SRTP) profile extensions [RFC3711] are needed to provide media encryption, integrity protection, replay protection and - a limited form of source authentication. WebRTC implementations MUST - NOT send packets using the basic RTP/AVP profile or the RTP/AVPF - profile; they MUST employ the full RTP/SAVPF profile to protect all - RTP and RTCP packets that are generated (i.e., implementations MUST - use SRTP and SRTCP). The RTP/SAVPF profile MUST be configured using - the cipher suites, DTLS-SRTP protection profiles, keying mechanisms, - and other parameters described in [I-D.ietf-rtcweb-security-arch]. 
+ a limited form of source authentication. WebRTC Endpoints MUST NOT + send packets using the basic RTP/AVP profile or the RTP/AVPF profile; + they MUST employ the full RTP/SAVPF profile to protect all RTP and + RTCP packets that are generated (i.e., implementations MUST use SRTP + and SRTCP). The RTP/SAVPF profile MUST be configured using the + cipher suites, DTLS-SRTP protection profiles, keying mechanisms, and + other parameters described in [I-D.ietf-rtcweb-security-arch]. 4.3. Choice of RTP Payload Formats The set of mandatory to implement codecs and RTP payload formats for WebRTC is not specified in this memo; instead, they are defined in separate specifications, such as [I-D.ietf-rtcweb-audio]. Implementations can support any codec for which an RTP payload format and associated signalling is defined. Implementations cannot assume that the other participants in an RTP session understand any RTP payload format, no matter how common; the mapping between RTP payload @@ -419,24 +418,23 @@ the case where senders use only a single clock rate). 4.4. Use of RTP Sessions An association amongst a set of end-points communicating using RTP is known as an RTP session [RFC3550]. An end-point can be involved in several RTP sessions at the same time. In a multimedia session, each type of media has typically been carried in a separate RTP session (e.g., using one RTP session for the audio, and a separate RTP session using a different transport-layer flow for the video). - WebRTC implementations of RTP are REQUIRED to implement support for - multimedia sessions in this way, separating each session using - different transport-layer flows for compatibility with legacy - systems. + WebRTC Endpoints are REQUIRED to implement support for multimedia + sessions in this way, separating each RTP session using different + transport-layer flows for compatibility with legacy systems.
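[Editor's note, not part of the draft diff: whichever way RTP sessions are mapped onto transport-layer flows, a receiver associates each incoming packet with an RTP packet stream via the SSRC field in the fixed RTP header of RFC 3550. A minimal, non-normative sketch of that parsing and demultiplexing step; the helper names are invented for illustration.]

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550, Section 5.1)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than the fixed RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,             # always 2 for RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

def demux_by_ssrc(packets) -> dict:
    """Route packets arriving on one transport-layer flow to
    per-stream sequence-number lists, keyed by SSRC."""
    streams = {}
    for pkt in packets:
        hdr = parse_rtp_header(pkt)
        streams.setdefault(hdr["ssrc"], []).append(hdr["sequence_number"])
    return streams
```

The same demultiplexing logic applies whether the streams share a single RTP session or are split across several; only the transport 5-tuples differ.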
In modern day networks, however, with the widespread use of network address/port translators (NAT/NAPT) and firewalls, it is desirable to reduce the number of transport-layer flows used by RTP applications. This can be done by sending all the RTP packet streams in a single RTP session, which will comprise a single transport-layer flow (this will prevent the use of some quality-of-service mechanisms, as discussed in Section 12.1.3). Implementations are therefore also REQUIRED to support transport of all RTP packet streams, independent of media type, in a single RTP session using a single transport layer @@ -555,92 +553,93 @@ Each RTP end-point MUST have at least one RTCP CNAME, and that RTCP CNAME MUST be unique within the RTCPeerConnection. RTCP CNAMEs identify a particular synchronisation context, i.e., all SSRCs associated with a single RTCP CNAME share a common reference clock. If an end-point has SSRCs that are associated with several unsynchronised reference clocks, and hence different synchronisation contexts, it will need to use multiple RTCP CNAMEs, one for each synchronisation context. - Taking the discussion in Section 11 into account, a WebRTC end-point + Taking the discussion in Section 11 into account, a WebRTC Endpoint MUST NOT use more than one RTCP CNAME in the RTP sessions belonging to single RTCPeerConnection (that is, an RTCPeerConnection forms a synchronisation context). RTP middleboxes MAY generate RTP packet streams associated with more than one RTCP CNAME, to allow them to avoid having to resynchronize media from multiple different end- points part of a multi-party RTP session. The RTP specification [RFC3550] includes guidelines for choosing a unique RTP CNAME, but these are not sufficient in the presence of NAT devices. In addition, long-term persistent identifiers can be problematic from a privacy viewpoint (Section 13). 
Accordingly, a - WebRTC endpoint MUST generate a new, unique, short-term persistent + WebRTC Endpoint MUST generate a new, unique, short-term persistent RTCP CNAME for each RTCPeerConnection, following [RFC7022], with a single exception; if explicitly requested at creation an RTCPeerConnection MAY use the same CNAME as as an existing RTCPeerConnection within their common same-origin context. - An WebRTC end-point MUST support reception of any CNAME that matches + An WebRTC Endpoint MUST support reception of any CNAME that matches the syntax limitations specified by the RTP specification [RFC3550] and cannot assume that any CNAME will be chosen according to the form suggested above. 4.10. Handling of Leap Seconds The guidelines regarding handling of leap seconds to limit their impact on RTP media play-out and synchronization given in [RFC7164] SHOULD be followed. 5. WebRTC Use of RTP: Extensions There are a number of RTP extensions that are either needed to obtain full functionality, or extremely useful to improve on the baseline - performance, in the WebRTC application context. One set of these - extensions is related to conferencing, while others are more generic - in nature. The following subsections describe the various RTP - extensions mandated or suggested for use within the WebRTC context. + performance, in the WebRTC context. One set of these extensions is + related to conferencing, while others are more generic in nature. + The following subsections describe the various RTP extensions + mandated or suggested for use within WebRTC. 5.1. Conferencing Extensions and Topologies RTP is a protocol that inherently supports group communication. Groups can be implemented by having each endpoint send its RTP packet streams to an RTP middlebox that redistributes the traffic, by using a mesh of unicast RTP packet streams between endpoints, or by using an IP multicast group to distribute the RTP packet streams. 
These topologies can be implemented in a number of ways as discussed in [I-D.ietf-avtcore-rtp-topologies-update]. While the use of IP multicast groups is popular in IPTV systems, the topologies based on RTP middleboxes are dominant in interactive video conferencing environments. Topologies based on a mesh of unicast transport-layer flows to create a common RTP session have not seen - widespread deployment to date. Accordingly, WebRTC implementations - are not expected to support topologies based on IP multicast groups - or to support mesh-based topologies, such as a point-to-multipoint - mesh configured as a single RTP session (Topo-Mesh in the terminology - of [I-D.ietf-avtcore-rtp-topologies-update]). However, a point-to- + widespread deployment to date. Accordingly, WebRTC Endpoints are not + expected to support topologies based on IP multicast groups or to + support mesh-based topologies, such as a point-to-multipoint mesh + configured as a single RTP session (Topo-Mesh in the terminology of + + [I-D.ietf-avtcore-rtp-topologies-update]). However, a point-to- multipoint mesh constructed using several RTP sessions, implemented - in the WebRTC context using independent RTCPeerConnections - [W3C.WD-webrtc-20130910], can be expected to be utilised by WebRTC - applications and needs to be supported. + in WebRTC using independent RTCPeerConnections + [W3C.WD-webrtc-20130910], can be expected to be used in WebRTC, and + needs to be supported. - WebRTC implementations of RTP endpoints implemented according to this - memo are expected to support all the topologies described in + WebRTC Endpoints implemented according to this memo are expected to + support all the topologies described in [I-D.ietf-avtcore-rtp-topologies-update] where the RTP endpoints send and receive unicast RTP packet streams to and from some peer device, provided that peer can participate in performing congestion control on the RTP packet streams. 
The peer device could be another RTP endpoint, or it could be an RTP middlebox that redistributes the RTP packet streams to other RTP endpoints. This limitation means that some of the RTP middlebox-based topologies are not suitable for use - in the WebRTC environment. Specifically: + in WebRTC. Specifically: o Video switching MCUs (Topo-Video-switch-MCU) SHOULD NOT be used, since they make the use of RTCP for congestion control and quality of service reports problematic (see Section 3.8 of [I-D.ietf-avtcore-rtp-topologies-update]). o The Relay-Transport Translator (Topo-PtM-Trn-Translator) topology SHOULD NOT be used because its safe use requires a congestion control algorithm or RTP circuit breaker that handles point to multipoint, which has not yet been standardised. @@ -658,50 +657,51 @@ The RTP extensions described in Section 5.1.1 to Section 5.1.6 are designed to be used with centralised conferencing, where an RTP middlebox (e.g., a conference bridge) receives a participant's RTP packet streams and distributes them to the other participants. These extensions are not necessary for interoperability; an RTP end-point that does not implement these extensions will work correctly, but might offer poor performance. Support for the listed extensions will greatly improve the quality of experience and, to provide a reasonable baseline quality, some of these extensions are mandatory - to be supported by WebRTC end-points. + to be supported by WebRTC Endpoints. The RTCP conferencing extensions are defined in Extended RTP Profile for Real-time Transport Control Protocol (RTCP)-Based Feedback (RTP/ AVPF) [RFC4585] and the memo on Codec Control Messages (CCM) in RTP/ AVPF [RFC5104]; they are fully usable by the Secure variant of this profile (RTP/SAVPF) [RFC5124]. 5.1.1. Full Intra Request (FIR) The Full Intra Request message is defined in Sections 3.5.1 and 4.3.1 of the Codec Control Messages [RFC5104]. 
It is used to make the mixer request a new Intra picture from a participant in the session. This is used when switching between sources to ensure that the receivers can decode the video or other predictive media encoding - with long prediction chains. WebRTC senders MUST understand and - react to FIR feedback messages they receive, since this greatly - improves the user experience when using centralised mixer-based - conferencing. Support for sending FIR messages is OPTIONAL. + with long prediction chains. WebRTC Endpoints that are sending media + MUST understand and react to FIR feedback messages they receive, + since this greatly improves the user experience when using + centralised mixer-based conferencing. Support for sending FIR + messages is OPTIONAL. 5.1.2. Picture Loss Indication (PLI) The Picture Loss Indication message is defined in Section 6.3.1 of the RTP/AVPF profile [RFC4585]. It is used by a receiver to tell the sending encoder that it lost the decoder context and would like to have it repaired somehow. This is semantically different from the Full Intra Request above as there could be multiple ways to fulfil - the request. WebRTC senders MUST understand and react to PLI - feedback messages as a loss tolerance mechanism. Receivers MAY send - PLI messages. + the request. WebRTC Endpoints that are sending media MUST understand + and react to PLI feedback messages as a loss tolerance mechanism. + Receivers MAY send PLI messages. 5.1.3. Slice Loss Indication (SLI) The Slice Loss Indication message is defined in Section 6.3.2 of the RTP/AVPF profile [RFC4585]. It is used by a receiver to tell the encoder that it has detected the loss or corruption of one or more consecutive macro blocks, and would like to have these repaired somehow. It is RECOMMENDED that receivers generate SLI feedback messages if slices are lost when using a codec that supports the concept of macro blocks. A sender that receives an SLI feedback @@ -735,34 +735,34 @@ 5.1.6. 
Temporary Maximum Media Stream Bit Rate Request (TMMBR) The TMMBR feedback message is defined in Sections 3.5.4 and 4.2.1 of the Codec Control Messages [RFC5104]. This request and its notification message are used by a media receiver to inform the sending party that there is a current limitation on the amount of bandwidth available to this receiver. There can be various reasons for this: for example, an RTP mixer can use this message to limit the media rate of the sender being forwarded by the mixer (without doing media transcoding) to fit the bottlenecks existing towards the other - session participants. WebRTC senders are REQUIRED to implement - support for TMMBR messages, and MUST follow bandwidth limitations set - by a TMMBR message received for their SSRC. The sending of TMMBR - requests is OPTIONAL. + session participants. WebRTC Endpoints that are sending media are + REQUIRED to implement support for TMMBR messages, and MUST follow + bandwidth limitations set by a TMMBR message received for their SSRC. + The sending of TMMBR requests is OPTIONAL. 5.2. Header Extensions The RTP specification [RFC3550] provides the capability to include RTP header extensions containing in-band data, but the format and semantics of the extensions are poorly specified. The use of header - extensions is OPTIONAL in the WebRTC context, but if they are used, - they MUST be formatted and signalled following the general mechanism - for RTP header extensions defined in [RFC5285], since this gives - well-defined semantics to RTP header extensions. + extensions is OPTIONAL in WebRTC, but if they are used, they MUST be + formatted and signalled following the general mechanism for RTP + header extensions defined in [RFC5285], since this gives well-defined + semantics to RTP header extensions. As noted in [RFC5285], the requirement from the RTP specification that header extensions are "designed so that the header extension may be ignored" [RFC3550] stands.
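[Editor's note, not part of the draft diff: the "may be ignored" property is mechanical in the RFC 5285 one-byte-header format, where a receiver can walk the extension elements and skip any identifier it does not recognise. An illustrative, non-normative sketch; it assumes the 0xBEDE profile word and the length field have already been consumed.]

```python
def parse_one_byte_extensions(ext_body: bytes) -> dict:
    """Walk an RFC 5285 one-byte-header extension block body,
    returning {id: data}. Each element starts with one byte:
    ID in the upper 4 bits, (length - 1) in the lower 4 bits."""
    elements = {}
    i = 0
    while i < len(ext_body):
        b = ext_body[i]
        if b == 0:                  # a zero byte is padding
            i += 1
            continue
        ext_id = b >> 4
        length = (b & 0x0F) + 1     # stored value is length - 1
        if ext_id == 15:            # ID 15 is reserved: stop processing
            break
        elements[ext_id] = ext_body[i + 1 : i + 1 + length]
        i += 1 + length
    return elements
```

A receiver that does not understand a given ID simply does not look it up in the result, which is exactly the "safe to ignore" behaviour the text requires.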
To be specific, header extensions MUST only be used for data that can safely be ignored by the recipient without affecting interoperability, and MUST NOT be used when the presence of the extension has changed the form or nature of the rest of the packet in a way that is not compatible with the way the stream is signalled (e.g., as defined by the payload type). Valid examples of RTP header extensions might include metadata that is additional to @@ -886,100 +886,98 @@ respective parameters. If an RTP payload format negotiated for use in an RTCPeerConnection supports redundant transmission or FEC as a standard feature of that payload format, then that support MAY be used in the RTCPeerConnection, subject to any appropriate signalling. There are several block-based FEC schemes that are designed for use with RTP independent of the chosen RTP payload format. At the time of this writing there is no consensus on which, if any, of these FEC - schemes is appropriate for use in the WebRTC context. Accordingly, - this memo makes no recommendation on the choice of block-based FEC - for WebRTC use. + schemes is appropriate for use in WebRTC. Accordingly, this memo + makes no recommendation on the choice of block-based FEC for WebRTC + use. 7. WebRTC Use of RTP: Rate Control and Media Adaptation WebRTC will be used in heterogeneous network environments using a variety of link technologies, including both wired and wireless links, to interconnect potentially large groups of users around the world. As a result, the network paths between users can have widely varying one-way delays, available bit-rates, load levels, and traffic mixtures. Individual end-points can send one or more RTP packet - streams to each participant in a WebRTC conference, and there can be - several participants.
Each of these RTP packet streams can contain - different types of media, and the type of media, bit rate, and number - of RTP packet streams as well as transport-layer flows can be highly - asymmetric. Non-RTP traffic can share the network paths with RTP - transport-layer flows. Since the network environment is not - predictable or stable, WebRTC end-points MUST ensure that the RTP - traffic they generate can adapt to match changes in the available - network capacity. + streams to each participant, and there can be several participants. - The quality of experience for users of WebRTC implementation is very - dependent on effective adaptation of the media to the limitations of - the network. End-points have to be designed so they do not transmit - significantly more data than the network path can support, except for - very short time periods, otherwise high levels of network packet loss - or delay spikes will occur, causing media quality degradation. The - limiting factor on the capacity of the network path might be the link + Each of these RTP packet streams can contain different types of + media, and the type of media, bit rate, and number of RTP packet + streams as well as transport-layer flows can be highly asymmetric. + Non-RTP traffic can share the network paths with RTP transport-layer + flows. Since the network environment is not predictable or stable, + WebRTC Endpoints MUST ensure that the RTP traffic they generate can + adapt to match changes in the available network capacity. + + The quality of experience for users of WebRTC is very dependent on + effective adaptation of the media to the limitations of the network. + End-points have to be designed so they do not transmit significantly + more data than the network path can support, except for very short + time periods, otherwise high levels of network packet loss or delay + spikes will occur, causing media quality degradation. 
The limiting + factor on the capacity of the network path might be the link bandwidth, or it might be competition with other traffic on the link (this can be non-WebRTC traffic, traffic due to other WebRTC flows, or even competition with other WebRTC flows in the same session). An effective media congestion control algorithm is therefore an essential part of the WebRTC framework. However, at the time of this writing, there is no standard congestion control algorithm that can be used for interactive media applications such as WebRTC's flows. Some requirements for congestion control algorithms for RTCPeerConnections are discussed in [I-D.ietf-rmcat-cc-requirements]. A future version of this memo will mandate the use of a congestion control algorithm that satisfies these requirements. 7.1. Boundary Conditions and Circuit Breakers - WebRTC implementations MUST implement the RTP circuit breaker - algorithm that is described in - [I-D.ietf-avtcore-rtp-circuit-breakers]. The RTP circuit breaker is - designed to enable applications to recognise and react to situations - of extreme network congestion. However, since the RTP circuit - breaker might not be triggered until congestion becomes extreme, it - cannot be considered a substitute for congestion control, and - applications MUST also implement congestion control to allow them to - adapt to changes in network capacity. Any future RTP congestion - control algorithms are expected to operate within the envelope - allowed by the circuit breaker. + WebRTC Endpoints MUST implement the RTP circuit breaker algorithm + that is described in [I-D.ietf-avtcore-rtp-circuit-breakers]. The + RTP circuit breaker is designed to enable applications to recognise + and react to situations of extreme network congestion. 
However, + since the RTP circuit breaker might not be triggered until congestion + becomes extreme, it cannot be considered a substitute for congestion + control, and applications MUST also implement congestion control to + allow them to adapt to changes in network capacity. Any future RTP + congestion control algorithms are expected to operate within the + envelope allowed by the circuit breaker. The session establishment signalling will also necessarily establish boundaries to which the media bit-rate will conform. The choice of media codecs provides upper- and lower-bounds on the supported bit- rates that the application can utilise to provide useful quality, and the packetisation choices that exist. In addition, the signalling channel can establish maximum media bit-rate boundaries using, for example, the SDP "b=AS:" or "b=CT:" lines and the RTP/AVPF Temporary Maximum Media Stream Bit Rate (TMMBR) Requests (see Section 5.1.6 of this memo). Signalled bandwidth limitations, such as SDP "b=AS:" or "b=CT:" lines received from the peer, MUST be followed when sending - RTP packet streams. A WebRTC endpoint receiving media SHOULD signal + RTP packet streams. A WebRTC Endpoint receiving media SHOULD signal its bandwidth limitations; these limitations have to be based on known bandwidth limitations, for example the capacity of the edge links. 7.2. Congestion Control Interoperability and Legacy Systems There are legacy RTP implementations that do not implement RTCP, and hence do not provide any congestion feedback. Congestion control - cannot be performed with these end-points.
WebRTC Endpoints that need to interwork with such end-points MUST limit their transmission to a low rate, equivalent to a VoIP call using a low bandwidth codec, that is unlikely to cause any significant congestion.

When interworking with legacy implementations that support RTCP using the RTP/AVP profile [RFC3551], congestion feedback is provided in RTCP RR packets every few seconds. Implementations that have to interwork with such end-points MUST ensure that they keep within the RTP circuit breaker [I-D.ietf-avtcore-rtp-circuit-breakers] constraints to limit the congestion they can cause.

If a legacy end-point supports RTP/AVPF, this enables negotiation of important parameters for frequent reporting, such as the "trr-int"

As described in Section 4.1, implementations are REQUIRED to generate RTCP Sender Report (SR) and Reception Report (RR) packets relating to the RTP packet streams they send and receive. These RTCP reports can be used for performance monitoring purposes, since they include basic packet loss and jitter statistics.

A large number of additional performance metrics are supported by the RTCP Extended Reports (XR) framework [RFC3611][RFC6792]. At the time of this writing, it is not clear what extended metrics are suitable for use in WebRTC, so there is no requirement that implementations generate RTCP XR packets. However, implementations that can use detailed performance monitoring data MAY generate RTCP XR packets as appropriate; the use of such packets SHOULD be signalled in advance.

9.
WebRTC Use of RTP: Future Extensions

It is possible that the core set of RTP protocols and RTP extensions specified in this memo will prove insufficient for the future needs of WebRTC. In this case, future updates to this memo MUST be made following the Guidelines for Writers of RTP Payload Format Specifications [RFC2736], How to Write an RTP Payload Format [I-D.ietf-payload-rtp-howto], and Guidelines for Extending the RTP Control Protocol [RFC5968], and SHOULD take into account any future guidelines for extending RTP and related protocols that have been developed.

Authors of future extensions are urged to consider the wide range of environments in which RTP is used when recommending extensions, since extensions that are applicable in some scenarios can be problematic in others. Where possible, the WebRTC framework will adopt RTP extensions that are of general utility, to enable easy implementation

RTP Profile: The name of the RTP profile to be used in the session. The RTP/AVP [RFC3551] and RTP/AVPF [RFC4585] profiles can interoperate at a basic level, as can their secure variants RTP/SAVP [RFC3711] and RTP/SAVPF [RFC5124]. The secure variants of the profiles do not directly interoperate with the non-secure variants, due to the presence of additional header fields for authentication in SRTP packets and cryptographic transformation of the payload. WebRTC requires the use of the RTP/SAVPF profile, and this MUST be signalled. Interworking functions might transform this into the RTP/SAVP profile for a legacy use case, by indicating to the WebRTC Endpoint that RTP/SAVPF is used and configuring a trr-int value of 4 seconds.
Transport Information: Source and destination IP address(es) and ports for RTP and RTCP MUST be signalled for each RTP session. In WebRTC these transport addresses will be provided by ICE [RFC5245], which signals candidates and arrives at nominated candidate address pairs. If RTP and RTCP multiplexing [RFC5761] is to be used, such that a single port, i.e. transport-layer flow, is used for RTP and RTCP flows, this MUST be signalled (see Section 4.5).

RTP Payload Types, media formats, and format parameters: The mapping between media type names (and hence the RTP payload formats to be used) and the RTP payload type numbers MUST be signalled. Each media type MAY also have a number of media type parameters that MUST also be signalled to configure the codec and RTP payload format (the "a=fmtp:" line from SDP). Section 4.3 of this memo discusses requirements for uniqueness of payload types.

RTP Extensions: The use of any additional RTP header extensions and RTCP packet types, including any necessary parameters, MUST be signalled. This signalling is to ensure that a WebRTC Endpoint's behaviour with respect to any extensions, especially when sending, is predictable and consistent. For robustness, and for compatibility with non-WebRTC systems that might be connected to a WebRTC session via a gateway, implementations are REQUIRED to ignore unknown RTCP packets and RTP header extensions (see also Section 4.1).

RTCP Bandwidth: Support for exchanging RTCP Bandwidth values to the end-points will be necessary. This SHALL be done as described in "Session Description Protocol (SDP) Bandwidth Modifiers for RTP Control Protocol (RTCP) Bandwidth" [RFC3556] if using SDP, or something semantically equivalent. This also ensures that the end-points have a common view of the RTCP bandwidth. A common RTCP bandwidth is important, as too different a view of the bandwidths can lead to failure to interoperate.
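As a concrete illustration of the payload type mapping described above, the following Python sketch builds a table from SDP "a=rtpmap:" and "a=fmtp:" lines, keyed by payload type number. The function and field names are illustrative assumptions, not normative.

```python
def payload_type_map(sdp):
    """Sketch: map RTP payload type numbers to their signalled media
    format.  "a=rtpmap:<pt> <encoding>/<clock>[/<channels>]" gives the
    codec and clock rate; "a=fmtp:<pt> <params>" gives codec-specific
    configuration parameters."""
    table = {}
    for line in sdp.splitlines():
        line = line.strip()
        if line.startswith("a=rtpmap:"):
            pt, desc = line[len("a=rtpmap:"):].split(" ", 1)
            fields = desc.split("/")
            entry = table.setdefault(int(pt), {})
            entry["encoding"] = fields[0]
            entry["clock_rate"] = int(fields[1])
        elif line.startswith("a=fmtp:"):
            pt, params = line[len("a=fmtp:"):].split(" ", 1)
            table.setdefault(int(pt), {})["params"] = params
    return table
```

For example, the lines "a=rtpmap:111 opus/48000/2" and "a=fmtp:111 minptime=10;useinbandfec=1" map payload type 111 to the Opus codec at a 48000 Hz RTP clock rate with the given format parameters.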
These parameters are often expressed in SDP messages conveyed within an offer/answer exchange. RTP does not depend on SDP or on the offer/answer model, but does require all the necessary parameters to be agreed upon, and provided to the RTP implementation. Note that in WebRTC it will depend on the signalling model and API how these parameters need to be configured, but they will need either to be set via the API or to be explicitly signalled between the peers.

11. WebRTC API Considerations

The WebRTC API [W3C.WD-webrtc-20130910] and the Media Capture and Streams API [W3C.WD-mediacapture-streams-20130903] define and use the concept of a MediaStream that consists of zero or more MediaStreamTracks. A MediaStreamTrack is an individual stream of media from any type of media source, like a microphone or a camera; conceptual sources, like an audio mix or a video composition, are also possible. The MediaStreamTracks within a MediaStream need to be

user or device across different services (see Section 4.4.1 of [I-D.ietf-rtcweb-security] for details). A web application can request that the CNAMEs used in different RTCPeerConnections (within a same-origin context) be the same; this allows for synchronization of the endpoint's RTP packet streams across the different RTCPeerConnections.

Note: this doesn't result in a tracking issue, since the creation of matching CNAMEs depends on existing tracking.

The above will currently force a WebRTC Endpoint that receives a MediaStreamTrack on one RTCPeerConnection and adds it as an outgoing track on any RTCPeerConnection to perform resynchronisation of the stream.
This is because the sending party needs to change the CNAME to the one it uses, which implies that the sender has to use a local system clock as the timebase for the synchronisation. Thus, the relative relation between the timebase of the incoming stream and the system sending out needs to be defined. This relation also needs monitoring for clock drift and likely adjustments of the synchronisation. The sending entity is also responsible for congestion control for its sent streams. In cases of packet loss, the loss of incoming data also needs to be handled. This leads to the observation that the method that is least likely to cause issues or interruptions in the outgoing source packet stream is a model of full decoding, including repair etc., followed by encoding of the media again into the outgoing packet stream. Optimisations of this method are clearly possible and implementation specific.

A WebRTC Endpoint MUST support receiving multiple MediaStreamTracks, where each of the different MediaStreamTracks (and their sets of associated packet streams) uses different CNAMEs. However, MediaStreamTracks that are received with different CNAMEs have no defined synchronisation.

Note: The motivation for supporting reception of multiple CNAMEs is to allow for forward compatibility with any future changes that enable more efficient stream handling when end-points relay/forward streams. It also ensures that end-points can interoperate with certain types of multi-stream middleboxes or end-points that

Finally, this specification puts a requirement on the WebRTC API to realize a method for determining the CSRC list (Section 4.1) as well as the Mixer-to-Client audio levels (Section 5.2.3) (when supported); the basic requirements for this are further discussed in Section 12.2.1.

12.
RTP Implementation Considerations

The following discussion provides some guidance on the implementation of the RTP features described in this memo. The focus is on a WebRTC Endpoint implementation perspective, and while some mention is made of the behaviour of middleboxes, that is not the focus of this memo.

12.1. Configuration and Use of RTP Sessions

A WebRTC Endpoint will be a simultaneous participant in one or more RTP sessions. Each RTP session can convey multiple media sources, and can include media data from multiple end-points. In the following, some ways in which WebRTC Endpoints can configure and use RTP sessions are outlined.

12.1.1. Use of Multiple Media Sources Within an RTP Session

RTP is a group communication protocol, and every RTP session can potentially contain multiple RTP packet streams. There are several reasons why this might be desirable:

Multiple media types: Outside of WebRTC, it is common to use one RTP session for each type of media source (e.g., one RTP session for audio sources and one for video sources, each sent over a different transport layer flow). However, to reduce the number of UDP ports used, the default in WebRTC is to send all types of media in a single RTP session, as described in Section 4.4, using RTP and RTCP multiplexing (Section 4.5) to further reduce the number of UDP ports needed. This RTP session then uses only one bi-directional transport-layer flow, but will contain multiple RTP packet streams, each containing a different type of media.
A common example might be an end-point with a camera and microphone that sends two RTP packet streams, one video and one audio, into a single RTP session.

Multiple Capture Devices: A WebRTC Endpoint might have multiple cameras, microphones, or other media capture devices, and so might want to generate several RTP packet streams of the same media type. Alternatively, it might want to send media from a single capture device in several different formats or quality settings at once. Both can result in a single end-point sending multiple RTP packet streams of the same media type into a single RTP session at the same time.

Associated Repair Data: An end-point might send an RTP packet stream that is somehow associated with another stream. For example, it

stream.

Layered or Multiple Description Coding: An end-point can use a layered media codec, for example H.264 SVC, or a multiple description codec, that generates multiple RTP packet streams, each with a distinct RTP SSRC, within a single RTP session.

RTP Mixers, Translators, and Other Middleboxes: An RTP session, in the WebRTC context, is a point-to-point association between an end-point and some other peer device, where those devices share a
common SSRC space. The peer device might be another WebRTC Endpoint, or it might be an RTP mixer, translator, or some other form of media processing middlebox. In the latter cases, the middlebox might send mixed or relayed RTP streams from several participants, which the WebRTC Endpoint will need to render. Thus, even though a WebRTC Endpoint might only be a member of a single RTP session, the peer device might be extending that RTP session to incorporate other end-points. WebRTC is a group communication environment and end-points need to be capable of receiving, decoding, and playing out multiple RTP packet streams at once, even in a single RTP session.

12.1.2. Use of Multiple RTP Sessions

In addition to sending and receiving multiple RTP packet streams within a single RTP session, a WebRTC Endpoint might participate in multiple RTP sessions. There are several reasons why a WebRTC Endpoint might choose to do this:

To interoperate with legacy devices: The common practice in the non-WebRTC world is to send different types of media in separate RTP sessions, for example using one RTP session for audio and another RTP session, on a separate transport layer flow, for video. All WebRTC Endpoints need to support the option of sending different types of media on different RTP sessions, so they can interwork with such legacy devices. This is discussed further in Section 4.4.

To provide enhanced quality of service: Some network-based quality of service mechanisms operate on the granularity of transport layer flows.
If it is desired to use these mechanisms to provide differentiated quality of service for some RTP packet streams, then those RTP packet streams need to be sent in a separate RTP session using a different transport-layer flow, and with

Figure 2: RTP mixer with only unicast paths

There are various methods of implementation for the middlebox. If implemented as a standard RTP mixer or translator, a single RTP session will extend across the middlebox and encompass all the end-points in one multi-party session. Other types of middlebox might use separate RTP sessions between each end-point and the middlebox. A common aspect is that these RTP middleboxes can use a number of tools to control the media encoding provided by a WebRTC Endpoint. This includes functions like requesting that the encoding chain be broken and that the encoder produce a so-called Intra frame. Another is limiting the bit-rate of a given stream to better suit the mixer's view of the multiple down-streams. Others are controlling the most suitable frame-rate, picture resolution, and the trade-off between frame-rate and spatial quality. The middlebox has the responsibility to correctly perform congestion control and source identification, and to manage synchronisation, while providing the application with suitable media optimisations. The middlebox also has to be a trusted node when it comes to security, since it manipulates either the RTP header or the media

identified using a combination of IP and port address.

Deep Packet Inspection: A network classifier (DPI) inspects the packet and tries to determine if the packet represents a particular application and type that is to be prioritized.

Flow-based differentiation will provide the same treatment to all packets within a transport-layer flow, i.e., relative prioritization is not possible.
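One standardised tool for requesting that an encoder produce an Intra frame is the Full Intra Request (FIR) RTCP feedback message of RFC 5104, which a middlebox might send to a WebRTC Endpoint. The following Python sketch builds a minimal single-target FIR packet; it is an illustrative assumption about how an implementation might construct one, not a complete RTCP stack.

```python
import struct

def build_fir(sender_ssrc, target_ssrc, seq):
    """Sketch of a Full Intra Request (FIR) RTCP feedback packet per
    RFC 5104: payload-specific feedback (PT=206) with FMT=4.  The
    media-source field in the common header is zero; the SSRC whose
    encoder should produce an Intra frame goes in the 8-byte FCI entry
    along with a command sequence number."""
    # V=2, P=0, FMT=4 -> first byte 0x84; length = 20/4 - 1 = 4 words
    header = struct.pack("!BBH", 0x80 | 4, 206, 4)
    body = struct.pack("!II", sender_ssrc, 0)   # packet sender SSRC, media src = 0
    fci = struct.pack("!IBBBB", target_ssrc, seq & 0xFF, 0, 0, 0)
    return header + body + fci
```

The resulting 20-byte packet would be sent in a compound RTCP packet, subject to the RTP/SAVPF timing rules.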
Moreover, if the resources are limited, it might not be possible to provide differential treatment compared to best-effort for all the RTP packet streams used in a WebRTC session. When flow-based differentiation is available, the WebRTC Endpoint needs to know about it, so that it can separate the RTP packet streams onto different UDP flows to enable more granular usage of flow-based differentiation. That way it can at least provide different prioritization of audio and video, if desired by the application.

DiffServ assumes that either the end-point or a classifier can mark the packets with an appropriate DSCP so that the packets are treated according to that marking. If the end-point is to mark the traffic, two requirements arise in the WebRTC context: 1) the WebRTC Endpoint has to know which DSCPs to use and that it can use them on some set of RTP packet streams; 2) the information needs to be propagated to the operating system when transmitting the packet. Details of this process are outside the scope of this memo and are further discussed in "DSCP and other packet markings for RTCWeb QoS" [I-D.ietf-tsvwg-rtcweb-qos].

For packet based marking schemes it might be possible to mark individual RTP packets differently based on the relative priority of the RTP payload.
For example, video codecs that have I, P, and B pictures could give lower priority to payloads carrying only B frames, as these are less damaging to lose. However, depending on the QoS mechanism and what markings are applied, this can result not only in different packet drop probabilities but also in packet reordering; see [I-D.ietf-tsvwg-rtcweb-qos] for further discussion. As a default policy, all RTP packets related to an RTP packet stream ought to be

12.2. Media Source, RTP Packet Streams, and Participant Identification

12.2.1. Media Source Identification

Each RTP packet stream is identified by a unique synchronisation source (SSRC) identifier. The SSRC identifier is carried in each of the RTP packets comprising an RTP packet stream, and is also used to identify that stream in the corresponding RTCP reports. The SSRC is chosen as discussed in Section 4.8. The first stage in demultiplexing RTP and RTCP packets received on a single transport layer flow at a WebRTC Endpoint is to separate the RTP packet streams based on their SSRC value; once that is done, additional demultiplexing steps can determine how and where to render the media.

RTP allows a mixer, or other RTP-layer middlebox, to combine encoded streams from multiple media sources to form a new encoded stream from a new media source (the mixer). The RTP packets in that new RTP packet stream can include a Contributing Source (CSRC) list, indicating which original SSRCs contributed to the combined source stream. As described in Section 4.1, implementations need to support reception of RTP data packets containing a CSRC list and RTCP packets that relate to sources present in the CSRC list. The CSRC list can change on a packet-by-packet basis, depending on the mixing operation being performed.
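The first demultiplexing step described above, together with CSRC handling, amounts to reading two fields of the fixed RTP header defined in RFC 3550. The following Python sketch extracts the SSRC and the CSRC list from a raw RTP packet; it is a minimal illustration with no validation beyond the version check.

```python
import struct

def ssrc_and_csrcs(packet):
    """Extract (ssrc, csrc_list) from a raw RTP packet per RFC 3550:
    the CC field (low 4 bits of byte 0) gives the number of 32-bit
    CSRC entries that follow the SSRC at bytes 8-11."""
    if len(packet) < 12 or packet[0] >> 6 != 2:
        raise ValueError("not an RTP version 2 packet")
    cc = packet[0] & 0x0F
    ssrc = struct.unpack_from("!I", packet, 8)[0]
    csrcs = [struct.unpack_from("!I", packet, 12 + 4 * i)[0]
             for i in range(cc)]
    return ssrc, csrcs
```

A receiver would use the returned SSRC to route the packet to the correct stream state before any payload-type or header-extension processing.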
Knowledge of what media sources contributed to a particular RTP packet can be important if the user interface indicates which participants are active in the session. Changes in the CSRC list included in packets need to be exposed to the WebRTC application using some API, if the application is to be able to track changes in session participation. It is desirable to map CSRC values back into WebRTC MediaStream identities as they cross this API, to avoid exposing the SSRC/CSRC name space to WebRTC applications.

If the mixer-to-client audio level extension [RFC6465] is being used in the session (see Section 5.2.3), the information in the CSRC list is augmented by audio level information for each contributing source. It is desirable to expose this information to the WebRTC application using some API, after mapping the CSRC values to WebRTC MediaStream identities, so it can be exposed in the user interface.

12.2.2. SSRC Collision Detection

The RTP standard requires RTP implementations to have support for detecting and handling SSRC collisions, i.e., to resolve the conflict when two different end-points use the same SSRC value (see Section 8.2 of [RFC3550]). This requirement also applies to WebRTC Endpoints. There are several scenarios where SSRC collisions can occur:

o In a point-to-point session where each SSRC is associated with either of the two end-points and where the main media carrying SSRC identifier will be announced in the signalling channel, a collision is less likely to occur due to the information about used SSRCs. If SDP is used, this information is provided by Source-Specific SDP Attributes [RFC5576].
Still, collisions can occur if both end-points start using a new SSRC identifier prior to having signalled it to the peer and received acknowledgement of the signalling message. The Source-Specific SDP Attributes

Romascanu, Jim Spring, Martin Thomson, and the other members of the IETF RTCWEB working group for their valuable feedback.

16. References

16.1. Normative References

[I-D.ietf-avtcore-multi-media-rtp-session] Westerlund, M., Perkins, C., and J. Lennox, "Sending Multiple Types of Media in a Single RTP Session", draft-ietf-avtcore-multi-media-rtp-session-06 (work in progress), October 2014.

[I-D.ietf-avtcore-rtp-circuit-breakers] Perkins, C. and V. Singh, "Multimedia Congestion Control: Circuit Breakers for Unicast RTP Sessions", draft-ietf-avtcore-rtp-circuit-breakers-06 (work in progress), July 2014.

[I-D.ietf-avtcore-rtp-multi-stream-optimisation] Lennox, J., Westerlund, M., Wu, Q., and C. Perkins, "Sending Multiple Media Streams in a Single RTP Session:

[RFC7164] Gross, K. and R. Brandenburg, "RTP and Leap Seconds", RFC 7164, March 2014.

16.2. Informative References

[I-D.ietf-avtcore-multiplex-guidelines] Westerlund, M., Perkins, C., and H. Alvestrand, "Guidelines for using the Multiplexing Features of RTP to Support Multiple Media Streams", draft-ietf-avtcore-multiplex-guidelines-03 (work in progress), October 2014.

[I-D.ietf-avtcore-rtp-topologies-update] Westerlund, M. and S. Wenger, "RTP Topologies", draft-ietf-avtcore-rtp-topologies-update-04 (work in progress), August 2014.

[I-D.ietf-avtext-rtp-grouping-taxonomy] Lennox, J., Gross, K., Nandakumar, S., and G. Salgueiro, "A Taxonomy of Grouping Semantics and Mechanisms for Real-Time Transport Protocol (RTP) Sources", draft-ietf-avtext-rtp-grouping-taxonomy-02 (work in progress), June 2014.
[I-D.ietf-mmusic-msid] Alvestrand, H., "WebRTC MediaStream Identification in the Session Description Protocol", draft-ietf-mmusic-msid-07 (work in progress), October 2014.

[I-D.ietf-mmusic-sdp-bundle-negotiation] Holmberg, C., Alvestrand, H., and C. Jennings, "Negotiating Media Multiplexing Using the Session Description Protocol (SDP)", draft-ietf-mmusic-sdp-bundle-negotiation-12 (work in progress), October 2014.

[I-D.ietf-payload-rtp-howto] Westerlund, M., "How to Write an RTP Payload Format", draft-ietf-payload-rtp-howto-13 (work in progress), January 2014.

[I-D.ietf-rmcat-cc-requirements] Jesup, R., "Congestion Control Requirements For RMCAT", draft-ietf-rmcat-cc-requirements-06 (work in progress), October 2014.

[I-D.ietf-rtcweb-audio] Valin, J. and C. Bran, "WebRTC Audio Codec and Processing Requirements", draft-ietf-rtcweb-audio-06 (work in progress), September 2014.

[I-D.ietf-rtcweb-overview] Alvestrand, H., "Overview: Real Time Protocols for Browser-based Applications", draft-ietf-rtcweb-overview-12 (work in progress), October 2014.

[I-D.ietf-rtcweb-use-cases-and-requirements] Holmberg, C., Hakansson, S., and G. Eriksson, "Web Real-Time Communication Use-cases and Requirements", draft-ietf-rtcweb-use-cases-and-requirements-14 (work in progress), February 2014.

[I-D.ietf-tsvwg-rtcweb-qos] Dhesikan, S., Jennings, C., Druta, D., Jones, P., and J. Polk, "DSCP and other packet markings for RTCWeb QoS",