--- 1/draft-ietf-detnet-use-cases-02.txt 2016-02-16 13:16:26.723522730 -0800 +++ 2/draft-ietf-detnet-use-cases-03.txt 2016-02-16 13:16:26.903527215 -0800 @@ -1,38 +1,38 @@ Internet Engineering Task Force E. Grossman, Ed. Internet-Draft DOLBY Intended status: Informational C. Gunther -Expires: August 13, 2016 HARMAN +Expires: August 19, 2016 HARMAN P. Thubert P. Wetterwald CISCO J. Raymond HYDRO-QUEBEC J. Korhonen BROADCOM Y. Kaneko Toshiba S. Das Applied Communication Sciences Y. Zha HUAWEI B. Varga J. Farkas Ericsson F. Goetz J. Schmitt Siemens - February 10, 2016 + February 16, 2016 Deterministic Networking Use Cases - draft-ietf-detnet-use-cases-02 + draft-ietf-detnet-use-cases-03 Abstract This draft documents requirements in several diverse industries to establish multi-hop paths for characterized flows with deterministic properties. In this context deterministic implies that streams can be established which provide guaranteed bandwidth and latency which can be established from either a Layer 2 or Layer 3 (IP) interface, and which can co-exist on an IP network with best-effort traffic. @@ -54,137 +54,145 @@ Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet- Drafts is at http://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." - This Internet-Draft will expire on August 13, 2016. + This Internet-Draft will expire on August 19, 2016. Copyright Notice Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents - 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 + 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 5 2. Pro Audio Use Cases . . . . . . . . . . . . . . . . . . . . . 5 2.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 5 2.2. Fundamental Stream Requirements . . . . . . . . . . . . . 6 2.2.1. Guaranteed Bandwidth . . . . . . . . . . . . . . . . 6 2.2.2. Bounded and Consistent Latency . . . . . . . . . . . 7 2.2.2.1. Optimizations . . . . . . . . . . . . . . . . . . 8 - 2.3. Additional Stream Requirements . . . . . . . . . . . . . 8 + 2.3. Additional Stream Requirements . . . . . . . . . . . . . 9 2.3.1. Deterministic Time to Establish Streaming . . . . . . 9 2.3.2. Use of Unused Reservations by Best-Effort Traffic . . 9 - 2.3.3. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 9 + 2.3.3. Layer 3 Interconnecting Layer 2 Islands . . . . . . . 10 2.3.4. Secure Transmission . . . . . . . . . . . . . . . . . 10 2.3.5. Redundant Paths . . . . . . . . . . . . . . . . . . . 10 2.3.6. Link Aggregation . . . . . . . . . . . . . . . . . . 10 - 2.3.7. Traffic Segregation . . . 
. . . . . . . . . . . . . . 10 + 2.3.7. Traffic Segregation . . . . . . . . . . . . . . . . . 11 2.3.7.1. Packet Forwarding Rules, VLANs and Subnets . . . 11 2.3.7.2. Multicast Addressing (IPv4 and IPv6) . . . . . . 11 - 2.4. Integration of Reserved Streams into IT Networks . . . . 11 + 2.4. Integration of Reserved Streams into IT Networks . . . . 12 2.5. Security Considerations . . . . . . . . . . . . . . . . . 12 2.5.1. Denial of Service . . . . . . . . . . . . . . . . . . 12 2.5.2. Control Protocols . . . . . . . . . . . . . . . . . . 12 2.6. A State-of-the-Art Broadcast Installation Hits Technology - Limits . . . . . . . . . . . . . . . . . . . . . . . . . 12 - 2.7. Acknowledgements . . . . . . . . . . . . . . . . . . . . 13 + Limits . . . . . . . . . . . . . . . . . . . . . . . . . 13 3. Utility Telecom Use Cases . . . . . . . . . . . . . . . . . . 13 3.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 13 3.2. Telecommunications Trends and General telecommunications Requirements . . . . . . . . . . . . . . . . . . . . . . 14 3.2.1. General Telecommunications Requirements . . . . . . . 14 3.2.1.1. Migration to Packet-Switched Network . . . . . . 15 3.2.2. Applications, Use cases and traffic patterns . . . . 16 3.2.2.1. Transmission use cases . . . . . . . . . . . . . 16 3.2.2.2. Distribution use case . . . . . . . . . . . . . . 26 3.2.2.3. Generation use case . . . . . . . . . . . . . . . 29 3.2.3. Specific Network topologies of Smart Grid Applications . . . . . . . . . . . . . . . . . . . . 30 3.2.4. Precision Time Protocol . . . . . . . . . . . . . . . 31 3.3. IANA Considerations . . . . . . . . . . . . . . . . . . . 32 3.4. Security Considerations . . . . . . . . . . . . . . . . . 32 3.4.1. Current Practices and Their Limitations . . . . . . . 32 3.4.2. Security Trends in Utility Networks . . . . . . . . . 34 - 3.5. Acknowledgements . . . . . . . . . . . . . . . . . . . . 35 - 4. Building Automation Systems Use Cases . . . . . . . . . . . . 35 - 4.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 36 - 4.2. BAS Functionality . . . . . . . . . . . . . . . . . . . . 36 - 4.3. BAS Architecture . . . . . . . . . . . . . . . . . . . . 37 - 4.4. Deployment Model . . . . . . . . . . . . . . . . . . . . 38 - 4.5. Use cases and Field Network Requirements . . . . . . . . 40 - 4.5.1. Environmental Monitoring . . . . . . . . . . . . . . 40 - 4.5.2. Fire Detection . . . . . . . . . . . . . . . . . . . 40 - 4.5.3. Feedback Control . . . . . . . . . . . . . . . . . . 41 - 4.6. Security Considerations . . . . . . . . . . . . . . . . . 42 - 5. Wireless for Industrial Use Cases . . . . . . . . . . . . . . 43 - 5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 43 - 5.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 44 - 5.3. 6TiSCH Overview . . . . . . . . . . . . . . . . . . . . . 44 - 5.3.1. TSCH and 6top . . . . . . . . . . . . . . . . . . . . 47 - 5.3.2. SlotFrames and Priorities . . . . . . . . . . . . . . 47 - 5.3.3. Schedule Management by a PCE . . . . . . . . . . . . 47 - 5.3.4. Track Forwarding . . . . . . . . . . . . . . . . . . 48 - 5.3.4.1. Transport Mode . . . . . . . . . . . . . . . . . 50 - 5.3.4.2. Tunnel Mode . . . . . . . . . . . . . . . . . . . 51 - 5.3.4.3. Tunnel Metadata . . . . . . . . . . . . . . . . . 52 - 5.4. Operations of Interest for DetNet and PCE . . . . . . . . 53 - 5.4.1. Packet Marking and Handling . . . . . . . . . . . . . 54 - 5.4.1.1. Tagging Packets for Flow Identification . . . . . 54 - 5.4.1.2. 
Replication, Retries and Elimination . . . . . . 54 - 5.4.1.3. Differentiated Services Per-Hop-Behavior . . . . 55 - 5.4.2. Topology and capabilities . . . . . . . . . . . . . . 55 - 5.5. Security Considerations . . . . . . . . . . . . . . . . . 56 - 5.6. Acknowledgments . . . . . . . . . . . . . . . . . . . . . 56 - 6. Cellular Radio Use Cases . . . . . . . . . . . . . . . . . . 56 - 6.1. Introduction and background . . . . . . . . . . . . . . . 56 - 6.2. Network architecture . . . . . . . . . . . . . . . . . . 60 - 6.3. Time synchronization requirements . . . . . . . . . . . . 61 - 6.4. Time-sensitive stream requirements . . . . . . . . . . . 62 - 6.5. Security considerations . . . . . . . . . . . . . . . . . 63 - 7. Industrial M2M . . . . . . . . . . . . . . . . . . . . . . . 63 - 7.1. Use Case Description . . . . . . . . . . . . . . . . . . 64 - 7.2. Industrial M2M Communication Today . . . . . . . . . . . 65 - 7.2.1. Transport Parameters . . . . . . . . . . . . . . . . 65 - 7.2.2. Stream Creation and Destruction . . . . . . . . . . . 66 - 7.3. Industrial M2M Future . . . . . . . . . . . . . . . . . . 66 - 7.4. Industrial M2M Asks . . . . . . . . . . . . . . . . . . . 66 - 7.5. Acknowledgements . . . . . . . . . . . . . . . . . . . . 67 - 8. Other Use Cases . . . . . . . . . . . . . . . . . . . . . . . 67 - 8.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 67 - 8.2. Critical Delay Requirements . . . . . . . . . . . . . . . 68 - 8.3. Coordinated multipoint processing (CoMP) . . . . . . . . 69 - 8.3.1. CoMP Architecture . . . . . . . . . . . . . . . . . . 69 - 8.3.2. Delay Sensitivity in CoMP . . . . . . . . . . . . . . 70 - 8.4. Industrial Automation . . . . . . . . . . . . . . . . . . 70 - 8.5. Vehicle to Vehicle . . . . . . . . . . . . . . . . . . . 70 - 8.6. Gaming, Media and Virtual Reality . . . . . . . . . . . . 71 - 9. Use Case Common Elements . . . . . . . . . . . . . . . . . . 71 - 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 72 - 11. Informative References . . . . . . . . . . . . . . . . . . . 73 - Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 81 + 4. Building Automation Systems . . . . . . . . . . . . . . . . . 35 + 4.1. Use Case Description . . . . . . . . . . . . . . . . . . 35 + 4.2. Building Automation Systems Today . . . . . . . . . . . . 36 + 4.2.1. BAS Architecture . . . . . . . . . . . . . . . . . . 36 + 4.2.2. BAS Deployment Model . . . . . . . . . . . . . . . . 37 + 4.2.3. Use Cases for Field Networks . . . . . . . . . . . . 39 + 4.2.3.1. Environmental Monitoring . . . . . . . . . . . . 39 + 4.2.3.2. Fire Detection . . . . . . . . . . . . . . . . . 39 + 4.2.3.3. Feedback Control . . . . . . . . . . . . . . . . 40 + 4.2.4. Security Considerations . . . . . . . . . . . . . . . 40 + 4.3. BAS Future . . . . . . . . . . . . . . . . . . . . . . . 40 + 4.4. BAS Asks . . . . . . . . . . . . . . . . . . . . . . . . 41 + 5. Wireless for Industrial Use Cases . . . . . . . . . . . . . . 41 + 5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 41 + 5.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 42 + 5.3. 6TiSCH Overview . . . . . . . . . . . . . . . . . . . . . 43 + 5.3.1. TSCH and 6top . . . . . . . . . . . . . . . . . . . . 46 + 5.3.2. SlotFrames and Priorities . . . . . . . . . . . . . . 46 + 5.3.3. Schedule Management by a PCE . . . . . . . . . . . . 46 + 5.3.4. Track Forwarding . . . . . . . . . . . . . . . . . . 47 + 5.3.4.1. Transport Mode . . . . . . . . . . . . . . . . . 49 + 5.3.4.2. 
Tunnel Mode . . . . . . . . . . . . . . . . . . . 50 + 5.3.4.3. Tunnel Metadata . . . . . . . . . . . . . . . . . 51 + 5.4. Operations of Interest for DetNet and PCE . . . . . . . . 51 + 5.4.1. Packet Marking and Handling . . . . . . . . . . . . . 52 + 5.4.1.1. Tagging Packets for Flow Identification . . . . . 52 + 5.4.1.2. Replication, Retries and Elimination . . . . . . 52 + 5.4.1.3. Differentiated Services Per-Hop-Behavior . . . . 53 + 5.4.2. Topology and capabilities . . . . . . . . . . . . . . 53 + 5.5. Security Considerations . . . . . . . . . . . . . . . . . 54 + 6. Cellular Radio Use Cases . . . . . . . . . . . . . . . . . . 54 + 6.1. Use Case Description . . . . . . . . . . . . . . . . . . 54 + 6.1.1. Network Architecture . . . . . . . . . . . . . . . . 54 + 6.1.2. Time Synchronization Requirements . . . . . . . . . . 55 + 6.1.3. Time-Sensitive Stream Requirements . . . . . . . . . 57 + 6.1.4. Security Considerations . . . . . . . . . . . . . . . 57 + 6.2. Cellular Radio Networks Today . . . . . . . . . . . . . . 58 + 6.3. Cellular Radio Networks Future . . . . . . . . . . . . . 58 + 6.4. Cellular Radio Networks Asks . . . . . . . . . . . . . . 60 + 7. Industrial M2M . . . . . . . . . . . . . . . . . . . . . . . 60 + 7.1. Use Case Description . . . . . . . . . . . . . . . . . . 60 + 7.2. Industrial M2M Communication Today . . . . . . . . . . . 62 + 7.2.1. Transport Parameters . . . . . . . . . . . . . . . . 62 + 7.2.2. Stream Creation and Destruction . . . . . . . . . . . 63 + 7.3. Industrial M2M Future . . . . . . . . . . . . . . . . . . 63 + 7.4. Industrial M2M Asks . . . . . . . . . . . . . . . . . . . 63 + 8. Other Use Cases . . . . . . . . . . . . . . . . . . . . . . . 64 + 8.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 64 + 8.2. Critical Delay Requirements . . . . . . . . . . . . . . . 65 + 8.3. Coordinated multipoint processing (CoMP) . . . . . . . . 65 + 8.3.1. CoMP Architecture . . . . . . . . . . . . . . . . . . 65 + 8.3.2. Delay Sensitivity in CoMP . . . . . . . . . . . . . . 66 + 8.4. Industrial Automation . . . . . . . . . . . . . . . . . . 67 + 8.5. Vehicle to Vehicle . . . . . . . . . . . . . . . . . . . 67 + 8.6. Gaming, Media and Virtual Reality . . . . . . . . . . . . 68 + 9. Use Case Common Elements . . . . . . . . . . . . . . . . . . 68 + 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 69 + 10.1. Pro Audio . . . . . . . . . . . . . . . . . . . . . . . 69 + 10.2. Utility Telecom . . . . . . . . . . . . . . . . . . . . 69 + 10.3. Building Automation Systems . . . . . . . . . . . . . . 70 + 10.4. Wireless for Industrial . . . . . . . . . . . . . . . . 70 + 10.5. Cellular Radio . . . . . . . . . . . . . . . . . . . . . 70 + 10.6. Industrial M2M . . . . . . . . . . . . . . . . . . . . . 70 + 10.7. Other . . . . . . . . . . . . . . . . . . . . . . . . . 70 + 11. Informative References . . . . . . . . . . . . . . . . . . . 71 + Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 79 1. Introduction This draft presents use cases from diverse industries which have in common a need for deterministic streams, but which also differ notably in their network topologies and specific desired behavior. Together, they provide broad industry context for DetNet and a yardstick against which proposed DetNet designs can be measured (to what extent does a proposed design satisfy these various use cases?) @@ -206,22 +214,20 @@ o What do you want the IETF to deliver? 
The level of detail in each use case should be sufficient to express the relevant elements of the use case, but not more. At the end we consider the use cases collectively, and examine the most significant goals they have in common. 2. Pro Audio Use Cases - (This section was derived from draft-gunther-detnet-proaudio-req-01) - 2.1. Introduction The professional audio and video industry includes music and film content creation, broadcast, cinema, and live exposition as well as public address, media and emergency systems at large venues (airports, stadiums, churches, theme parks). These industries have already gone through the transition of audio and video signals from analog to digital, however the interconnect systems remain primarily point-to-point with a single (or small number of) signals per link, interconnected with purpose-built hardware. @@ -574,38 +580,22 @@ they possibly could with packet-based technology. They constructed seven individual studios using layer 2 LANS (using IEEE 802.1 AVB) that were entirely effective at routing audio within the LANs, and they were very happy with the results, however to interconnect these layer 2 LAN islands together they ended up using dedicated links because there is no standards-based routing solution available. This is the kind of motivation we have to develop these standards because customers are ready and able to use them. -2.7. Acknowledgements - - The editors would like to acknowledge the help of the following - individuals and the companies they represent: - - Jeff Koftinoff, Meyer Sound - - Jouni Korhonen, Associate Technical Director, Broadcom - - Pascal Thubert, CTAO, Cisco - - Kieran Tyrrell, Sienda New Media Technologies GmbH - 3. Utility Telecom Use Cases - (This section was derived from draft-wetterwald-detnet-utilities- - reqs-02) - 3.1. Overview [I-D.finn-detnet-problem-statement] defines the characteristics of a deterministic flow as a data communication flow with a bounded latency, extraordinarily low frame loss, and a very narrow jitter. This document intends to define the utility requirements for deterministic networking. Utility Telecom Networks @@ -1570,360 +1560,286 @@ associated with implementing security solutions in OT networks. Securing OT (Operation technology) telecommunications over packet- switched IP networks follow the same principles that are foundational for securing the IT infrastructure, i.e., consideration must be given to enforcing electronic access control for both person-to-machine and machine-to-machine communications, and providing the appropriate levels of data privacy, device and platform integrity, and threat detection and mitigation. -3.5. Acknowledgements - - Faramarz Maghsoodlou, Ph. D. IoT Connected Industries and Energy - Practice Cisco - - Pascal Thubert, CTAO Cisco - -4. Building Automation Systems Use Cases -4.1. Introduction - - Building Automation System (BAS) is a system that manages various - equipment and sensors in buildings (e.g., heating, cooling and - ventilating) for improving residents' comfort, reduction of energy - consumption and automatic responses in case of failure and emergency. - For example, BAS measures temperature of a room by using various - sensors and then controls the HVAC (Heating, Ventilating, and air - Conditioning) system automatically to maintain the temperature level - and minimize the energy consumption. - - There are typically two layers of network in a BAS. Upper one is - called management network and the lower one is called field network. 
- In management networks, an IP-based communication protocol is used - while in field network, non-IP based communication protocols (a.k.a., - field protocol) are mainly used. - - There are many field protocols used in today's deployment in which - some medium access control and physical layers protocols are - standards-based and others are proprietary based. Therefore the BAS - needs to have multiple MAC/PHY modules and interfaces to make use of - multiple field protocols based devices. This situation not only - makes BAS more expensive with large development cycle of multiple - devices but also creates the issue of vendor lock-in with multiple - types of management applications. +4. Building Automation Systems - The other issue with some of the existing field networks and - protocols are security. When these protocols and network were - developed, it was assumed that the field networks are isolated - physically from external networks and therefore the network and - protocol security was not a concern. However, in today's world many - BASes are managed remotely and is connected to shared IP networks and - it is also not uncommon that same IT infrastructure is used be it - office, home or in enterprise networks. Adding network and protocol - security to existing system is a non-trivial task. +4.1. Use Case Description - This document first describes the BAS functionalities, its - architecture and current deployment models. Then we discuss the use - cases and field network requirements that need to be satisfied by - deterministic networking. + A Building Automation System (BAS) manages equipment and sensors in a + building for improving residents' comfort, reducing energy + consumption, and responding to failures and emergencies. For + example, the BAS measures the temperature of a room using sensors and + then controls the HVAC (heating, ventilating, and air conditioning) + to maintain a set temperature and minimize energy consumption. -4.2. BAS Functionality + A BAS primarily performs the following functions: - Building Automation System (BAS) is a system that manages various - devices in buildings automatically. BAS primarily performs the - following functions: + o Periodically measures states of devices, for example humidity and + illuminance of rooms, open/close state of doors, FAN speed, etc. - o Measures states of devices in a regular interval. For example, - temperature or humidity or illuminance of rooms, on/off state of - room lights, open/close state of doors, FAN speed, valve, running - mode of HVAC, and its power consumption. + o Stores the measured data. - o Stores the measured data into a database (Note: The database keeps - the data for several years). + o Provides the measured data to BAS systems and operators. - o Provides the measured data for BAS operators for visualization. + o Generates alarms for abnormal state of devices. - o Generates alarms for abnormal state of devices (e.g., calling - operator's cellular phone, sending an e-mail to operators and so - on). + o Controls devices (e.g. turn off room lights at 10:00 PM). - o Controls devices on demand. +4.2. Building Automation Systems Today - o Controls devices with a pre-defined operation schedule (e.g., turn - off room lights at 10:00 PM). +4.2.1. BAS Architecture -4.3. BAS Architecture + A typical BAS architecture of today is shown in Figure 1. - A typical BAS architecture is described below in Figure 1. There are - several elements in a BAS. 
+ +----------------------------+ + | | + | BMS HMI | + | | | | + | +----------------------+ | + | | Management Network | | + | +----------------------+ | + | | | | + | LC LC | + | | | | + | +----------------------+ | + | | Field Network | | + | +----------------------+ | + | | | | | | + | Dev Dev Dev Dev | + | | + +----------------------------+ - +----------------------------+ | | | BMS HMI | | | | | - | +----------------------+ | | | Management Network | | | - +----------------------+ | | | | | | LC LC | | | | | | - +----------------------+ | | | Field Network | | | +----------------------+ - | | | | | | | | Dev Dev Dev Dev | | | +----------------------------+ BMS := - Building Management Server HMI := Human Machine Interface LC := Local - Controller + BMS := Building Management Server + HMI := Human Machine Interface + LC := Local Controller Figure 1: BAS architecture - Human Machine Interface (HMI): It is commonly a computing platform - (e.g., desktop PC) used by operators. Operators perform the - following operations through HMI. + There are typically two layers of network in a BAS. The upper one is + called the Management Network and the lower one is called the Field + Network. In management networks an IP-based communication protocol + is used, while in field networks non-IP based communication protocols + ("field protocols") are mainly used. Field networks have specific + timing requirements, whereas management networks can be best-effort. - o Monitoring devices: HMI displays measured device states. For - example, latest device states, a history chart of states, a popup - window with an alert message. Typically, the measured device - states are stored in BMS (Building Management Server). + A Human Machine Interface (HMI) is typically a desktop PC used by + operators to monitor and display device states, send device control + commands to Local Controllers (LCs), and configure building schedules + (for example "turn off all room lights in the building at 10:00 PM"). - o Controlling devices: HMI provides ability to control the devices. - For example, turn on a room light, set a target temperature to - HVAC. Several parameters (a target device, a control value, - etc.), can be set by the operators which then HMI sends to a LC - (Local Controller) via the management network. + A Building Management Server (BMS) performs the following operations. - o Configuring an operational schedule: HMI provides scheduling - capability through which operational schedule is defined. For - example, schedule includes 1) a time to control, 2) a target - device to control, and 3) a control value. A specific operational - example could be turn off all room lights in the building at 10:00 - PM. This schedule is typically stored in BMS. + o Collect and store device states from LCs at regular intervals. - Building Management Server (BMS) collects device states from LCs - (Local Controllers) and stores it into a database. According to its - configuration, BMS executes the following operation automatically. + o Send control values to LCs according to a building schedule. - o BMS collects device states from LCs in a regular interval and then - stores the information into a database. + o Send an alarm signal to operators if it detects abnormal devices + states. - o BMS sends control values to LCs according to a pre-configured - schedule. + The BMS and HMI communicate with LCs via IP-based "management + protocols" (see standards [bacnetip], [knx]). 
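+
+   As a non-normative illustration of the split between the best-
+   effort management network and the time-critical field networks,
+   the Python sketch below shows a BMS-style polling loop running
+   over the management network.  The read_points() helper, the alarm
+   threshold and the LC addresses are hypothetical placeholders and
+   are not part of any management protocol such as BACnet/IP or
+   KNX/IP.
+
+      import time
+
+      POLL_INTERVAL_S = 60    # polling is not time-critical
+      TEMP_ALARM_C = 35.0     # hypothetical alarm threshold
+
+      def read_points(lc_address):
+          # Placeholder for a management-protocol read of one LC;
+          # a real BMS would use a BACnet/IP or KNX/IP library here.
+          return {"room_temp_c": 22.5, "fan_on": True}
+
+      def poll_once(lc_addresses, history):
+          for lc in lc_addresses:
+              points = read_points(lc)
+              # store the measured data
+              history.append((time.time(), lc, points))
+              if points["room_temp_c"] > TEMP_ALARM_C:
+                  print("ALARM: abnormal temperature at", lc)
+
+      if __name__ == "__main__":
+          history = []
+          lcs = ["10.0.1.11", "10.0.1.12"]   # hypothetical addresses
+          poll_once(lcs, history)
+
+   Because the management network is not real-time, a loop such as
+   this can simply run at a coarse interval; the deterministic
+   requirements discussed below apply to the field networks, not to
+   this layer.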
- o BMS sends an alarm signal to operators if it detects abnormal - devices states. For example, turning on a red lamp, calling - operators' cellular phone, sending an e-mail to operators. + A LC is typically a Programmable Logic Controller (PLC) which is + connected to several tens or hundreds of devices using "field + protocols". An LC performs the following kinds of operations: - BMS and HMI communicate with Local Controllers (LCs) via IP-based - communication protocol standardized by BACnet/IP [bacnetip], KNX/IP - [knx]. These protocols are commonly called as management protocols. - LCs measure device states and provide the information to BMS or HMI. - These devices may include HVAC, FAN, doors, valves, lights, sensors - (e.g., temperature, humidity, and illuminance). LC can also set - control values to the devices. LC sometimes has additional - functions, for example, sending a device state to BMS or HMI if the - device state exceeds a certain threshold value, feedback control to a - device to keep the device state at a certain state. Typical example - of LC is a PLC (Programmable Logic Controller). + o Measure device states and provide the information to BMS or HMI. - Each LC is connected with a different field network and communicates - with several tens or hundreds of devices via the field network. - Today there are many field protocols used in the field network. - Based on the type of field protocol used, LC interfaces and its - hardware/software could be different. Field protocols are currently - non-IP based in which some of them are standards-based (e.g., LonTalk - [lontalk], Modbus [modbus], Profibus [profibus], FL-net [flnet],) and - others are proprietary. + o Send control values to devices, unilaterally or as part of a + feedback control loop. -4.4. Deployment Model + There are many field protocols used today; some are standards-based + and others are proprietary (see standards [lontalk], [modbus], + [profibus] and [flnet]). The result is that BASs have multiple MAC/ + PHY modules and interfaces. This makes BASs more expensive, slower + to develop, and can result in "vendor lock-in" with multiple types of + management applications. - An example BAS system deployment model for medium and large buildings - is depicted in Figure 2 below. In this case the physical layout of - the entire system spans across multiple floors in which there is - normally a monitoring room where the BAS management entities are - located. Each floor will have one or more LCs depending upon the - number of devices connected to the field network. +4.2.2. BAS Deployment Model + + An example BAS for medium or large buildings is shown in Figure 2. + The physical layout spans multiple floors, and there is a monitoring + room where the BAS management entities are located. Each floor will + have one or more LCs depending upon the number of devices connected + to the field network. 
- +--------------------------------------------------+ | - Floor 3 | | +----LC~~~~+~~~~~+~~~~~+ | | | | | | | | | Dev Dev Dev | | | | - |--- | ------------------------------------------| | | Floor 2 | | - +----LC~~~~+~~~~~+~~~~~+ Field Network | | | | | | | | | Dev Dev Dev | | | | - |--- | ------------------------------------------| | | Floor 1 | | - +----LC~~~~+~~~~~+~~~~~+ +-----------------| | | | | | | Monitoring Room | | - | Dev Dev Dev | | | | | BMS HMI | | | Management Network | | | | | - +--------------------------------+-----+ | | | | + +--------------------------------------------------+ + | Floor 3 | + | +----LC~~~~+~~~~~+~~~~~+ | + | | | | | | + | | Dev Dev Dev | + | | | + |--- | ------------------------------------------| + | | Floor 2 | + | +----LC~~~~+~~~~~+~~~~~+ Field Network | + | | | | | | + | | Dev Dev Dev | + | | | + |--- | ------------------------------------------| + | | Floor 1 | + | +----LC~~~~+~~~~~+~~~~~+ +-----------------| + | | | | | | Monitoring Room | + | | Dev Dev Dev | | + | | | BMS HMI | + | | Management Network | | | | + | +--------------------------------+-----+ | + | | | +--------------------------------------------------+ - Figure 2: Deployment model for Medium/Large Buildings + Figure 2: BAS Deployment model for Medium/Large Buildings - Each LC is then connected to the monitoring room via the management - network. In this scenario, the management functions are performed - locally and reside within the building. In most cases, fast Ethernet - (e.g. 100BASE-TX) is used for the management network. In the field - network, variety of physical interfaces such as RS232C, and RS485 are - used. Since management network is non-real time, Ethernet without - quality of service is sufficient for today's deployment. However, - the requirements are different for field networks when they are - replaced by either Ethernet or any wireless technologies supporting - real time requirements (Section 3.4). + Each LC is connected to the monitoring room via the Management + network, and the management functions are performed within the + building. In most cases, fast Ethernet (e.g. 100BASE-T) is used for + the management network. Since the management network is non- + realtime, use of Ethernet without quality of service is sufficient + for today's deployment. - Figure 3 depicts a deployment model in which the management can be - hosted remotely. This deployment is becoming popular for small - office and residential buildings whereby having a standalone - monitoring system is not a cost effective solution. In such - scenario, multiple buildings are managed by a remote management - monitoring system. + In the field network a variety of physical interfaces such as RS232C + and RS485 are used, which have specific timing requirements. Thus if + a field network is to be replaced with an Ethernet or wireless + network, such networks must support time-critical deterministic + flows. - +---------------+ | Remote Center | | | | BMS HMI | - +------------------------------------+ | | | | | Floor 2 | | +---+---+ | | - +----LC~~~~+~~~~~+ Field Network| | | | | | | | | | Router | | | Dev Dev | - +-------|-------+ | | | | |--- | ------------------------------| | | | Floor - 1 | | | +----LC~~~~+~~~~~+ | | | | | | | | | | Dev Dev | | | | | | | | - Management Network | WAN | | +------------------------Router-------------+ | - | +------------------------------------+ + In Figure 3, another deployment model is presented in which the + management system is hosted remotely. 
This is becoming popular for + small office and residential buildings in which a standalone + monitoring system is not cost-effective. + + +---------------+ + | Remote Center | + | | + | BMS HMI | + +------------------------------------+ | | | | + | Floor 2 | | +---+---+ | + | +----LC~~~~+~~~~~+ Field Network| | | | + | | | | | | Router | + | | Dev Dev | +-------|-------+ + | | | | + |--- | ------------------------------| | + | | Floor 1 | | + | +----LC~~~~+~~~~~+ | | + | | | | | | + | | Dev Dev | | + | | | | + | | Management Network | WAN | + | +------------------------Router-------------+ + | | + +------------------------------------+ Figure 3: Deployment model for Small Buildings - In either case, interoperability today is only limited to the - management network and its protocols. In existing deployment, there - are limited interoperability opportunity in the field network due to - its nature of non-IP-based design and requirements. + Some interoperability is possible today in the Management Network, + but not in today's field networks due to their non-IP-based design. -4.5. Use cases and Field Network Requirements +4.2.3. Use Cases for Field Networks - In this section, we describe several use cases and corresponding - network requirements. + Below are use cases for Environmental Monitoring, Fire Detection, and + Feedback Control, and their implications for field network + performance. -4.5.1. Environmental Monitoring +4.2.3.1. Environmental Monitoring - In this use case, LCs measure environmental data (e.g. temperatures, - humidity, illuminance, CO2, etc.) from several sensor devices at each - measurement interval. LCs keep latest value of each sensor. BMS - sends data requests to LCs to collect the latest values, then stores - the collected values into a database. Operators check the latest - environmental data that are displayed by the HMI. BMS also checks - the collected data automatically to notify the operators if a room - condition was going to bad (e.g., too hot or cold). The following - table lists the field network requirements in which the number of - devices in a typical building will be ~100s per LC. + The BMS polls each LC at a maximum measurement interval of 100ms (for + example to draw a historical chart of 1 second granularity with a 10x + sampling interval) and then performs the operations as specified by + the operator. Each LC needs to measure each of its several hundred + sensors once per measurement interval. Latency is not critical in + this scenario as long as all sensor values are completed in the + measurement interval. Availability is expected to be 99.999 %. - +----------------------+-------------+ - | Metric | Requirement | - +----------------------+-------------+ - | Measurement interval | 100 msec | - | | | - | Availability | 99.999 % | - +----------------------+-------------+ +4.2.3.2. Fire Detection - Table 11: Field Network Requirements for Environmental Monitoring + On detection of a fire, the BMS must stop the HVAC, close the fire + shutters, turn on the fire sprinklers, send an alarm, etc. There are + typically ~10s of sensors per LC that BMS needs to manage. In this + scenario the measurement interval is 10-50ms, the communication delay + is 10ms, and the availability must be 99.9999 %. - There is a case that BMS sends data requests at each 1 second in - order to draw a historical chart of 1 second granularity. 
Therefore - 100 msec measurement interval is sufficient for this use case, - because typically 10 times granularity (compared with the interval of - data requests) is considered enough accuracy in this use case. A LC - needs to measure values of all sensors connected with itself at each - measurement interval. Each communication delay in this scenario is - not so critical. The important requirement is completing - measurements of all sensor values in the specified measurement - interval. The availability in this use case is very high (Three 9s). +4.2.3.3. Feedback Control -4.5.2. Fire Detection + BAS systems utilize feedback control in various ways; the most time- + critial is control of DC motors, which require a short feedback + interval (1-5ms) with low communication delay (10ms) and jitter + (1ms). The feedback interval depends on the characteristics of the + device and a target quality of control value. There are typically + ~10s of such devices per LC. - In the case of fire detection, HMI needs to show a popup window with - an alert message within a few seconds after an abnormal state is - detected. BMS needs to do some operations if it detects fire. For - example, stopping a HVAC, closing fire shutters, and turning on fire - sprinklers. The following table describes requirements in which the - number of devices in a typical building will be ~10s per LC. + Communication delay is expected to be less than 10 ms, jitter less + than 1 sec while the availability must be 99.9999% . - +----------------------+---------------+ - | Metric | Requirement | - +----------------------+---------------+ - | Measurement interval | 10s of msec | - | | | - | Communication delay | < 10s of msec | - | | | - | Availability | 99.9999 % | - +----------------------+---------------+ +4.2.4. Security Considerations - Table 12: Field Network Requirements for Fire Detection + When BAS field networks were developed it was assumed that the field + networks would always be physically isolated from external networks + and therefore security was not a concern. In today's world many BASs + are managed remotely and are thus connected to shared IP networks and + so security is definitely a concern, yet security features are not + available in the majority of BAS field network deployments . - In order to perform the above operation within a few seconds (1 or 2 - seconds) after detecting fire, LCs should measure sensor values at a - regular interval of less than 10s of msec. If a LC detects an - abnormal sensor value, it sends an alarm information to BMS and HMI - immediately. BMS then controls HVAC or fire shutters or fire - sprinklers. HMI then displays a pop up window and generates the - alert message. Since the management network does not operate in real - time, and software run on BMS or HMI requires 100s of ms, the - communication delay should be less than ~10s of msec. The - availability in this use case is very high (Four 9s). + The management network, being an IP-based network, has the protocols + available to enable network security, but in practice many BAS + systems do not implement even the available security features such as + device authentication or encryption for data in transit. -4.5.3. Feedback Control +4.3. BAS Future - Feedback control is used to keep a device state at a certain value. - For example, keeping a room temperature at 27 degree Celsius, keeping - a water flow rate at 100 L/m and so on. The target device state is - normally pre-defined in LCs or provided from BMS or from HMI. 
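+
+   The feedback control cycle described in Section 4.2.3.3 (measure
+   the device state, compute a control value, send it to the device,
+   and repeat every 1-5ms) can be sketched in Python as shown below.
+   This is a non-normative illustration: measure_speed(),
+   send_control(), the setpoint and the gain are hypothetical, and a
+   general-purpose IP network and operating system cannot guarantee
+   that the deadline check at the end of each cycle will pass.
+   Closing that gap is exactly what a deterministic field network is
+   asked to do.
+
+      import time
+
+      FEEDBACK_INTERVAL_S = 0.005   # 5ms loop, per the DC motor case
+      SETPOINT_RPM = 1500.0         # hypothetical target (BMS/HMI)
+      GAIN = 0.01                   # hypothetical proportional gain
+
+      def measure_speed():          # placeholder field read
+          return 1480.0
+
+      def send_control(value):      # placeholder field write
+          pass
+
+      def control_loop(cycles):
+          deadline = time.monotonic()
+          for _ in range(cycles):
+              speed = measure_speed()              # 1. measure
+              ctl = GAIN * (SETPOINT_RPM - speed)  # 2. compute
+              send_control(ctl)                    # 3. actuate
+              deadline += FEEDBACK_INTERVAL_S
+              slack = deadline - time.monotonic()
+              if slack < 0:
+                  # the delay/jitter budget was exceeded
+                  print("deadline miss")
+              else:
+                  time.sleep(slack)
+
+      control_loop(1000)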
+ In the future we expect more fine-grained environmental monitoring + and lower energy consumption, which will require more sensors and + devices, thus requiring larger and more complex building networks. - In feedback control procedure, a LC repeats the following actions at - a regular interval (feedback interval). + We expect building networks to be connected to or converged with + other networks (Enterprise network, Home network, and Internet). - 1. The LC measures device states of the target device. + Therefore better facilities for network management, control, + reliability and security are critical in order to improve resident + and operator convenience and comfort. For example the ability to + monitor and control building devices via the internet would enable + (for example) control of room lights or HVAC from a resident's + desktop PC or phone application. - 2. The LC calculates a control value by considering the measured - device state. +4.4. BAS Asks - 3. The LC sends the control value to the target device. + The community would like to see an interoperable protocol + specification that can satisfy the timing, security, availability and + QoS constraints described above, such that the resulting converged + network can replace the disparate field networks. Ideally this + connectivity could extend to the open Internet. - The feedback interval highly depends on the characteristics of the - device and a target quality of control value. While several tens of - milliseconds feedback interval is sufficient to control a valve that - regulates a water flow, controlling DC motors requires several - milliseconds interval. The following table describes the field - network requirements in which the number of devices in a typical - building will be ~10s per LC. + This would imply an architecture that can guarantee - +----------------------+---------------+ - | Metric | Requirement | - +----------------------+---------------+ - | Feedback interval | ~10ms - 100ms | - | | | - | Communication delay | < 10s of msec | - | | | - | Communication jitter | < 1 msec | - | | | - | Availability | 99.9999 % | - +----------------------+---------------+ + o Low communication delays (from <10ms to 100ms in a network of + several hundred devices) - Table 13: Field Network Requirements for Feedback Control + o Low jitter (< 1 ms) - Small communication delay and jitter are required in this use case in - order to provide high quality of feedback control. This is currently - offered in production environment with hgh availability (Four 9s). + o Tight feedback intervals (1ms - 10ms) -4.6. Security Considerations + o High network availability (up to 99.9999% ) - Both network and physical security of BAS are important. While - physical security is present in today's deployment, adequate network - security and access control are either not implemented or configured - properly. This was sufficient in networks while they are isolated - and not connected to the IT or other infrastructure networks but when - IT and OT (Operational Technology) are connected in the same - infrastructure network, network security is essential. The - management network being an IP-based network does have the protocols - and knobs to enable the network security but in many cases BAS for - example, does not use device authentication or encryption for data in - transit. On the contrary, many of today's field networks do not - provide any security at all. 
Following are the high level security - requirements that the network should provide: + o Availability of network data in disaster scenario o Authentication between management and field devices (both local and remote) o Integrity and data origin authentication of communication data between field and management devices o Confidentiality of data when communicated to a remote device - o Availability of network data for normal and disaster scenario - 5. Wireless for Industrial Use Cases (This section was derived from draft-thubert-6tisch-4detnet-01) 5.1. Introduction The emergence of wireless technology has enabled a variety of new devices to get interconnected, at a very low marginal cost per device, at any distance ranging from Near Field to interplanetary, and in circumstances where wiring may not be practical, for instance @@ -2508,526 +2426,468 @@ 5.5. Security Considerations On top of the classical protection of control signaling that can be expected to support DetNet, it must be noted that 6TiSCH networks operate on limited resources that can be depleted rapidly if an attacker manages to operate a DoS attack on the system, for instance by placing a rogue device in the network, or by obtaining management control and to setup extra paths. -5.6. Acknowledgments - - This specification derives from the 6TiSCH architecture, which is the - result of multiple interactions, in particular during the 6TiSCH - (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at - the IETF. - - The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier - Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael - Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon, - Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey, - Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria - Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation - and various contributions. - 6. Cellular Radio Use Cases - (This section was derived from draft-korhonen-detnet-telreq-00) - -6.1. Introduction and background - - The recent developments in telecommunication networks, especially in - the cellular domain, are heading towards transport networks where - precise time synchronization support has to be one of the basic - building blocks. While the transport networks themselves have - practically transitioned to all-AP packet based networks to meet the - bandwidth and cost requirements, a highly accurate clock distribution - has become a challenge. Earlier the transport networks in the - cellular domain were typically time division and multiplexing (TDM) - -based and provided frequency synchronization capabilities as a part - of the transport media. Alternatively other technologies such as - Global Positioning System (GPS) or Synchronous Ethernet (SyncE) - [SyncE] were used. New radio access network deployment models and - architectures may require time sensitive networking services with - strict requirements on other parts of the network that previously - were not considered to be packetized at all. The time and - synchronization support are already topical for backhaul and midhaul - packet networks [MEF], and becoming a real issue for fronthaul - networks. Specifically in the fronthaul networks the timing and - synchronization requirements can be extreme for packet based - technologies, for example, in order of sub +-20 ns packet delay - variation (PDV) and frequency accuracy of +0.002 PPM [Fronthaul]. 
- - Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985] - for legacy transport support) have become popular tools to build and - manage new all-IP radio access networks (RAN) - [I-D.kh-spring-ip-ran-use-case]. Although various timing and - synchronization optimizations have already been proposed and - implemented including 1588 PTP enhancements - [I-D.ietf-tictoc-1588overmpls][I-D.mirsky-mpls-residence-time], these - solution are not necessarily sufficient for the forthcoming RAN - architectures or guarantee the higher time-synchronization - requirements [CPRI]. There are also existing solutions for the TDM - over IP [RFC5087] [RFC4553] or Ethernet transports [RFC5086]. The - really interesting and important existing work for time sensitive - networking has been done for Ethernet [TSNTG], which specifies the - use of IEEE 1588 time precision protocol (PTP) [IEEE1588] in the - context of IEEE 802.1D and IEEE 802.1Q. While IEEE 802.1AS - [IEEE8021AS] specifies a Layer-2 time synchronizing service other - specification, such as IEEE 1722 [IEEE1722] specify Ethernet-based - Layer-2 transport for time-sensitive streams. New promising work - seeks to enable the transport of time-sensitive fronthaul streams in - Ethernet bridged networks [IEEE8021CM]. Similarly to IEEE 1722 there - is an ongoing standardization effort to define Layer-2 transport - encapsulation format for transporting radio over Ethernet (RoE) in - IEEE 1904.3 Task Force [IEEE19043]. - - As already mentioned all-IP RANs and various "haul" networks would - benefit from time synchronization and time-sensitive transport - services. Although Ethernet appears to be the unifying technology - for the transport there is still a disconnect providing Layer-3 - services. The protocol stack typically has a number of layers below - the Ethernet Layer-2 that shows up to the Layer-3 IP transport. It - is not uncommon that on top of the lowest layer (optical) transport - there is the first layer of Ethernet followed one or more layers of - MPLS, PseudoWires and/or other tunneling protocols finally carrying - the Ethernet layer visible to the user plane IP traffic. While there - are existing technologies, especially in MPLS/PWE space, to establish - circuits through the routed and switched networks, there is a lack of - signaling the time synchronization and time-sensitive stream - requirements/reservations for Layer-3 flows in a way that the entire - transport stack is addressed and the Ethernet layers that needs to be - configured are addressed. Furthermore, not all "user plane" traffic - will be IP. Therefore, the same solution need also address the use - cases where the user plane traffic is again another layer or Ethernet - frames. There is existing work describing the problem statement - [I-D.finn-detnet-problem-statement] and the architecture - [I-D.finn-detnet-architecture] for deterministic networking (DetNet) - that eventually targets to provide solutions for time-sensitive (IP/ - transport) streams with deterministic properties over Ethernet-based - switched networks. - - This document describes requirements for deterministic networking in - a cellular telecom transport networks context. The requirements - include time synchronization, clock distribution and ways of - establishing time-sensitive streams for both Layer-2 and Layer-3 user - plane traffic using IETF protocol solutions. 
- - The recent developments in telecommunication networks, especially in - the cellular domain, are heading towards transport networks where - precise time synchronization support has to be one of the basic - building blocks. While the transport networks themselves have - practically transitioned to all-AP packet based networks to meet the - bandwidth and cost requirements, a highly accurate clock distribution - has become a challenge. Earlier the transport networks in the - cellular domain were typically time division and multiplexing (TDM) - -based and provided frequency synchronization capabilities as a part - of the transport media. Alternatively other technologies such as - Global Positioning System (GPS) or Synchronous Ethernet (SyncE) - [SyncE] were used. New radio access network deployment models and - architectures may require time sensitive networking services with - strict requirements on other parts of the network that previously - were not considered to be packetized at all. The time and - synchronization support are already topical for backhaul and midhaul - packet networks [MEF], and becoming a real issue for fronthaul - networks. Specifically in the fronthaul networks the timing and - synchronization requirements can be extreme for packet based - technologies, for example, in order of sub +-20 ns packet delay - variation (PDV) and frequency accuracy of +0.002 PPM [Fronthaul]. - - Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985] - for legacy transport support) have become popular tools to build and - manage new all-IP radio access networks (RAN) - [I-D.kh-spring-ip-ran-use-case]. Although various timing and - synchronization optimizations have already been proposed and - implemented including 1588 PTP enhancements - [I-D.ietf-tictoc-1588overmpls][I-D.mirsky-mpls-residence-time], these - solution are not necessarily sufficient for the forthcoming RAN - architectures or guarantee the higher time-synchronization - requirements [CPRI]. There are also existing solutions for the TDM - over IP [RFC5087] [RFC4553] or Ethernet transports [RFC5086]. The - really interesting and important existing work for time sensitive - networking has been done for Ethernet [TSNTG], which specifies the - use of IEEE 1588 time precision protocol (PTP) [IEEE1588] in the - context of IEEE 802.1D and IEEE 802.1Q. While IEEE 802.1AS - [IEEE8021AS] specifies a Layer-2 time synchronizing service other - specification, such as IEEE 1722 [IEEE1722] specify Ethernet-based - Layer-2 transport for time-sensitive streams. New promising work - seeks to enable the transport of time-sensitive fronthaul streams in - Ethernet bridged networks [IEEE8021CM]. Similarly to IEEE 1722 there - is an ongoing standardization effort to define Layer-2 transport - encapsulation format for transporting radio over Ethernet (RoE) in - IEEE 1904.3 Task Force [IEEE19043]. - - As already mentioned all-IP RANs and various "haul" networks would - benefit from time synchronization and time-sensitive transport - services. Although Ethernet appears to be the unifying technology - for the transport there is still a disconnect providing Layer-3 - services. The protocol stack typically has a number of layers below - the Ethernet Layer-2 that shows up to the Layer-3 IP transport. 
It - is not uncommon that on top of the lowest layer (optical) transport - there is the first layer of Ethernet followed one or more layers of - MPLS, PseudoWires and/or other tunneling protocols finally carrying - the Ethernet layer visible to the user plane IP traffic. While there - are existing technologies, especially in MPLS/PWE space, to establish - circuits through the routed and switched networks, there is a lack of - signaling the time synchronization and time-sensitive stream - requirements/reservations for Layer-3 flows in a way that the entire - transport stack is addressed and the Ethernet layers that needs to be - configured are addressed. Furthermore, not all "user plane" traffic - will be IP. Therefore, the same solution need also address the use - cases where the user plane traffic is again another layer or Ethernet - frames. There is existing work describing the problem statement - [I-D.finn-detnet-problem-statement] and the architecture - [I-D.finn-detnet-architecture] for deterministic networking (DetNet) - that eventually targets to provide solutions for time-sensitive (IP/ - transport) streams with deterministic properties over Ethernet-based - switched networks. - - This document describes requirements for deterministic networking in - a cellular telecom transport networks context. The requirements - include time synchronization, clock distribution and ways of - establishing time-sensitive streams for both Layer-2 and Layer-3 user - plane traffic using IETF protocol solutions. - -6.2. Network architecture +6.1. Use Case Description - Figure Figure 9 illustrates a typical, 3GPP defined, cellular network - architecture, which also has fronthaul and midhaul network segments. - The fronthaul refers to the network connecting base stations (base - band processing units) to the remote radio heads (antennas). The - midhaul network typically refers to the network inter-connecting base - stations (or small/pico cells). + This use case describes the application of deterministic networking + in the context of cellular telecom transport networks. Important + elements include time synchronization, clock distribution, and ways + of establishing time-sensitive streams for both Layer-2 and Layer-3 + user plane traffic. - Fronthaul networks build on the available excess time after the base - band processing of the radio frame has completed. Therefore, the - available time for networking is actually very limited, which in - practise determines how far the remote radio heads can be from the - base band processing units (i.e. base stations). For example, in a - case of LTE radio the Hybrid ARQ processing of a radio frame is - allocated 3 ms. Typically the processing completes way earlier (say - up to 400 us, could be much less, though) thus allowing the remaining - time to be used e.g. for fronthaul network. 200 us equals roughly 40 - km of optical fiber based transport (assuming round trip time would - be total 2*200 us). The base band processing time and the available - "delay budget" for the fronthaul is a subject to change, possibly - dramatically, in the forthcoming "5G" to meet, for example, the - envisioned reduced radio round trip times, and other architecural and - service requirements [NGMN]. +6.1.1. Network Architecture - The maximum "delay budget" is then consumed by all nodes and required - buffering between the remote radio head and the base band processing - in addition to the distance incurred delay. 
Packet delay variation - (PDV) is problematic to fronthaul networks and must be minimized. If - the transport network cannot guarantee low enough PDV additional - buffering has to be introduced at the edges of the network to buffer - out the jitter. Any buffering will eat up the total available delay - budget, though. Section Section 6.3 will discuss the PDV - requirements in more detail. + Figure 9 illustrates a typical 3GPP-defined cellular network + architecture, which includes "Fronthaul" and "Midhaul" network + segments. The "Fronthaul" is the network connecting base stations + (baseband processing units) to the remote radio heads (antennas). + The "Midhaul" is the network inter-connecting base stations (or small + cell sites). - Y (remote radios) + Y (remote radio heads (antennas)) \ Y__ \.--. .--. +------+ \_( `. +---+ _(Back`. | 3GPP | Y------( Front )----|eNB|----( Haul )----| core | ( ` .Haul ) +---+ ( ` . ) ) | netw | /`--(___.-' \ `--(___.-' +------+ Y_/ / \.--. \ Y_/ _( Mid`. \ ( Haul ) \ ( ` . ) ) \ - `--(___.-'\_____+---+ (small cells) + `--(___.-'\_____+---+ (small cell sites) \ |SCe|__Y +---+ +---+ Y__|eNB|__Y +---+ Y_/ \_Y ("local" radios) - Figure 9: Generic 3GPP-based cellular network architecture with - Front/Mid/Backhaul networks - -6.3. Time synchronization requirements + Figure 9: Generic 3GPP-based Cellular Network Architecture - Cellular networks starting from long term evolution (LTE) [TS36300] - [TS23401] radio the phase synchronization is also needed in addition - to the frequency synchronization. The commonly referenced fronthaul - network synchronization requirements are typically drawn from the - common public radio interface (CPRI) [CPRI] specification that - defines the transport protocol between the base band processing - - radio equipment controller (REC) and the remote antenna - radio - equipment (RE). However, the fundamental requirements still - originate from the respective cellular system and radio - specifications such as the 3GPP ones [TS25104][TS36104][TS36211] - [TS36133]. + The available processing time for Fronthaul networking overhead is + limited to the available time after the baseband processing of the + radio frame has completed. For example in Long Term Evolution (LTE) + radio, processing of a radio frame is allocated 3ms, but typically + the processing completes much earlier (<400us) allowing the remaining + time to be used by the Fronthaul network. This ultimately determines + the distance the remote radio heads can be located from the base + stations (200us equals roughly 40 km of optical fiber-based + transport, thus round trip time is 2*200us = 400us). - The fronthaul time synchronization requirements for the current 3GPP - LTE-based networks are listed below: + The remainder of the "maximum delay budget" is consumed by all nodes + and buffering between the remote radio head and the baseband + processing, plus the distance-incurred delay. - Transport link contribution to radio frequency error: + The baseband processing time and the available "delay budget" for the + fronthaul is likely to change in the forthcoming "5G" due to reduced + radio round trip times and other architectural and service + requirements [NGMN]. - +-2 PPB. The given value is considered to be "available" for the - fronthaul link out of the total 50 PPB budget reserved for the - radio interface. +6.1.2. Time Synchronization Requirements - Delay accuracy: + Fronthaul time synchronization requirements are given by [TS25104], + [TS36104], [TS36211], and [TS36133]. 
These can be summarized for the + current 3GPP LTE-based networks as:

 - +-8.138 ns i.e. +-1/32 Tc (UMTS Chip time, Tc, 1/3.84 MHz) to - downlink direction and excluding the (optical) cable length in one - direction. Round trip accuracy is then +-16.276 ns. The value is - this low to meet the 3GPP timing alignment error (TAE) measurement

 + Delay Accuracy: + +-8ns (i.e. +-1/32 Tc, where Tc is the UMTS Chip time of 1/3.84 + MHz) resulting in a round trip accuracy of +-16ns. The value is + this low to meet the 3GPP Timing Alignment Error (TAE) measurement requirements.

 - Packet delay variation (PDV):

 + Packet Delay Variation: + Packet Delay Variation (PDV, aka Jitter, aka Timing Alignment Error) + is problematic to Fronthaul networks and must be minimized. If + the transport network cannot guarantee low enough PDV, then + additional buffering has to be introduced at the edges of the + network to buffer out the jitter. Buffering is not desirable as + it reduces the total available delay budget.

 * For multiple input multiple output (MIMO) or TX diversity transmissions, at each carrier frequency, TAE shall not exceed 65 ns (i.e. 1/4 Tc).

 * For intra-band contiguous carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 130 ns (i.e. 1/2 Tc).

 * For intra-band non-contiguous carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 260 ns (i.e. one Tc).

 * For inter-band carrier aggregation, with or without MIMO or TX diversity, TAE shall not exceed 260 ns.

 - The above listed time synchronization requirements are hard to meet - even with point to point connected networks, not to mention cases - where the underlying transport network actually constitutes of - multiple hops. It is expected that network deployments have to deal - with the jitter requirements buffering at the very ends of the - connections, since trying to meet the jitter requirements in every - intermediate node is likely to be too costly. However, every measure - to reduce jitter and delay on the path are valuable to make it easier - to meet the end to end requirements.

 + Transport link contribution to radio frequency error: + +-2 PPB. This value is considered to be "available" for the + Fronthaul link out of the total 50 PPB budget reserved for the + radio interface. Note: the reason that the transport link + contributes to radio frequency error is as follows. The current + way of doing Fronthaul is from the radio unit to the remote radio head + directly. The remote radio head is essentially a passive device + (without buffering, etc.). The transport drives the antenna + directly by feeding it with samples, and everything the transport + adds will be introduced to the radio as-is. So if the transport + causes additional frequency error, it shows up immediately on the + radio as well.

 + The above listed time synchronization requirements are difficult to + meet with point-to-point connected networks, and more difficult when + the network includes multiple hops. It is expected that networks + must include buffering at the ends of the connections as imposed by + the jitter requirements, since trying to meet the jitter requirements + in every intermediate node is likely to be too costly. However, + every measure to reduce jitter and delay on the path makes it easier + to meet the end-to-end requirements.
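+   To give a sense of how tight these figures are, the following minimal Python sketch (illustrative only, not taken from any 3GPP or CPRI requirement) converts the delay numbers above into equivalent lengths of optical fiber, assuming a one-way propagation delay of roughly 5 ns per meter (5 us per km); the helper function below is purely illustrative:
+
+   # Illustrative only: relate fronthaul delay figures to fiber length,
+   # assuming ~5 ns per meter one-way propagation delay in optical fiber.
+   NS_PER_METER = 5.0
+
+   def fiber_meters(one_way_delay_ns):
+       """Fiber length whose one-way propagation delay equals the input."""
+       return one_way_delay_ns / NS_PER_METER
+
+   # The +-8 ns delay accuracy above corresponds to about +-1.6 m of fiber.
+   print(fiber_meters(8))                  # ~1.6 (meters)
+
+   # The ~200 us budget left after HARQ processing (Section 6.1.1)
+   # corresponds to roughly 40 km of fiber, the reach figure quoted there.
+   print(fiber_meters(200_000) / 1000)     # ~40.0 (kilometers)
+
+   Any delay or jitter added by intermediate nodes therefore translates directly into lost reach between the baseband unit and the remote radio head.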
In order to meet the timing requirements both senders and receivers - must is perfect sync. This asks for a very accurate clock - distribution solution. Basically all means and hardware support for - guaranteeing accurate time synchronization in the network is needed. - As an example support for 1588 transparent clocks (TC) in every - intermediate node would be helpful. + must remain time synchronized, demanding very accurate clock + distribution, for example, support for IEEE 1588 transparent clocks in + every intermediate node.

-6.4. Time-sensitive stream requirements

 + In cellular networks from the LTE radio era onward, phase + synchronization is needed in addition to frequency synchronization + ([TS36300], [TS23401]).

+6.1.3. Time-Sensitive Stream Requirements

 In addition to the time synchronization requirements listed in - Section Section 6.3 the fronthaul networks assume practically error - free transport. The maximum bit error rate (BER) has been defined to - be 10^-12. When packetized that would equal roughly to packet error - rate (PER) of 2.4*10^-9 (assuming ~300 bytes packets). - Retransmitting lost packets and/or using forward error coding (FEC) - to circumvent bit errors are practically impossible due additional - incurred delay. Using redundant streams for better guarantees for - delivery is also practically impossible due to high bandwidth - requirements fronthaul networks have. For instance, current - uncompressed CPRI bandwidth expansion ratio is roughly 20:1 compared - to the IP layer user payload it carries in a "radio sample form".

 + Section 6.1.2, Fronthaul networks assume practically + error-free transport. The maximum bit error rate (BER) has been + defined to be 10^-12. When packetized, that would imply a packet + error rate (PER) of 2.4*10^-9 (assuming ~300 byte packets). + Retransmitting lost packets and/or using forward error correction + (FEC) to circumvent bit errors is practically impossible due to the + additional delay incurred. Using redundant streams for better + delivery guarantees is also practically impossible in many cases + due to the high bandwidth requirements of Fronthaul networks. For + instance, the current uncompressed CPRI bandwidth expansion ratio is + roughly 20:1 compared to the IP layer user payload it carries. + Protection switching is also a candidate, but current technologies + for path switching are too slow. We do not currently know of a better + solution for this issue.

 - The other fundamental assumption is that fronthaul links are - symmetric. Last, all fronthaul streams (carrying radio data) have - equal priority and cannot delay or pre-empt each other. This implies - the network has always be sufficiently under subscribed to guarantee - each time-sensitive flow meets their schedule.

 + Fronthaul links are assumed to be symmetric, and all Fronthaul + streams (i.e. those carrying radio data) have equal priority and + cannot delay or pre-empt each other. This implies that the network + must guarantee that each time-sensitive flow meets its schedule.
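+   The PER figure quoted above follows from simple arithmetic. The short Python sketch below (illustrative only; it assumes independent bit errors and the ~300-byte packet size used above, and the helper name is purely illustrative) reproduces it:
+
+   # Illustrative only: approximate packet error rate (PER) from bit error
+   # rate (BER), assuming independent bit errors (PER ~= BER * bits/packet
+   # when the BER is very small).
+   def per_from_ber(ber, packet_bytes):
+       return ber * packet_bytes * 8
+
+   print(per_from_ber(1e-12, 300))   # ~2.4e-09, the PER quoted above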
 - Mapping the fronthaul requirements to [I-D.finn-detnet-architecture] - Section 3 "Providing the DetNet Quality of Service" what is seemed - usable are:

+6.1.4. Security Considerations

 - (a) Zero congestion loss.

 + Establishing time-sensitive streams in the network entails reserving + networking resources for long periods of time. It is important that + these reservation requests be authenticated to prevent malicious + reservation attempts from hostile nodes (or accidental + misconfiguration). This is particularly important in the case where + the reservation requests span administrative domains. Furthermore, + the reservation information itself should be digitally signed to + reduce the risk of a legitimate node pushing a stale or hostile + configuration into another networking node.

 - (b) Pinned-down paths.

+6.2. Cellular Radio Networks Today

 - The current time-sensitive networking features may still not be - sufficient for fronthaul traffic. Therefore, having specific - profiles that take the requirements of fronthaul into account are - deemed to be useful [IEEE8021CM].

 + Today's Fronthaul networks typically consist of:

 + o Dedicated point-to-point fiber connections

 + o Proprietary protocols and framings

 + o Custom equipment and no real networking

 + Today's Midhaul and Backhaul networks typically consist of:

 + o Mostly normal IP networks, MPLS-TP, etc.

 + o Clock distribution and sync using 1588 and SyncE

 + Telecommunication networks in the cellular domain are already heading + towards transport networks where precise time synchronization support + is one of the basic building blocks. While the transport networks + themselves have practically transitioned to all-IP packet-based + networks to meet the bandwidth and cost requirements, highly accurate + clock distribution has become a challenge.

 + Transport networks in the cellular domain are typically based on Time + Division Multiplexing (TDM) and provide frequency + synchronization capabilities as a part of the transport media. + Alternatively, other technologies such as the Global Positioning System + (GPS) or Synchronous Ethernet (SyncE) are used [SyncE].

 + Both Ethernet and IP/MPLS [RFC3031] (and PseudoWires (PWE) [RFC3985] + for legacy transport support) have become popular tools to build and + manage new all-IP Radio Access Networks (RAN) + [I-D.kh-spring-ip-ran-use-case]. Although various timing and + synchronization optimizations have already been proposed and + implemented, including 1588 PTP enhancements + [I-D.ietf-tictoc-1588overmpls][I-D.mirsky-mpls-residence-time], these + solutions are not necessarily sufficient for the forthcoming RAN + architectures, nor do they guarantee the tighter time-synchronization + requirements [CPRI]. There are also existing solutions for TDM + over IP [RFC5087] [RFC4553] or Ethernet transports [RFC5086].

+6.3. Cellular Radio Networks Future

 + We would like to see the following in future Cellular Radio networks:

 + o Unified standards-based transport protocols and standard + networking equipment that can make use of underlying deterministic + link-layer services

 + o Unified and standards-based network management systems and + protocols in all parts of the network (including Fronthaul)

 + New radio access network deployment models and architectures may + require time-sensitive networking services with strict requirements + on other parts of the network that previously were not considered to + be packetized at all. Time and synchronization support is + already topical for Backhaul and Midhaul packet networks [MEF], and + is becoming a real issue for Fronthaul networks. Specifically, in + Fronthaul networks the timing and synchronization requirements can be + extreme for packet-based technologies, for example, on the order of + sub +-20 ns packet delay variation (PDV) and frequency accuracy of + +0.002 PPM [Fronthaul].

 The actual transport protocols and/or solutions to establish required - transport "circuits" (pinned-down paths) for fronthaul traffic are - still undefined.
Those are likely to include but not limited to - solutions directly over Ethernet, over IP, and MPLS/PseudoWire

 + transport "circuits" (pinned-down paths) for Fronthaul traffic are + still undefined. Those are likely to include (but are not limited + to) solutions directly over Ethernet, over IP, and MPLS/PseudoWire + transport.

-6.5. Security considerations

 + Even the current time-sensitive networking features may not be + sufficient for Fronthaul traffic. Therefore, having specific + profiles that take the requirements of Fronthaul into account is + desirable [IEEE8021CM].

 - Establishing time-sensitive streams in the network entails reserving - networking resources sometimes for a considerable long time. It is - important that these reservation requests must be authenticated to - prevent malicious reservation attempts from hostile nodes or even - accidental misconfiguration. This is specifically important in a - case where the reservation requests span administrative domains. - Furthermore, the reservation information itself should be digitally - signed to reduce the risk where a legitimate node pushed a stale or - hostile configuration into the networking node.

 + The really interesting and important existing work for time-sensitive + networking has been done for Ethernet [TSNTG], which specifies the + use of the IEEE 1588 Precision Time Protocol (PTP) [IEEE1588] in the + context of IEEE 802.1D and IEEE 802.1Q. While IEEE 802.1AS + [IEEE8021AS] specifies a Layer-2 time synchronizing service, other + specifications, such as IEEE 1722 [IEEE1722], specify Ethernet-based + Layer-2 transport for time-sensitive streams. New promising work + seeks to enable the transport of time-sensitive fronthaul streams in + Ethernet bridged networks [IEEE8021CM]. Similarly to IEEE 1722, there + is an ongoing standardization effort in the IEEE 1904.3 Task Force to + define a Layer-2 transport encapsulation format for transporting radio + over Ethernet (RoE) [IEEE19043].

 + All-IP RANs and various "haul" networks would benefit from time + synchronization and time-sensitive transport services. Although + Ethernet appears to be the unifying technology for the transport, + there is still a disconnect in providing Layer-3 services. The protocol + stack typically has a number of layers below the Ethernet Layer-2 + that is visible to the Layer-3 IP transport. It is not uncommon that + on top of the lowest layer (optical) transport there is a first + layer of Ethernet, followed by one or more layers of MPLS, PseudoWires, + and/or other tunneling protocols, finally carrying the Ethernet layer + visible to the user plane IP traffic. While there are existing + technologies, especially in the MPLS/PWE space, to establish circuits + through the routed and switched networks, there is a lack of + signaling for the time synchronization and time-sensitive stream + requirements/reservations of Layer-3 flows in a way that addresses + the entire transport stack, including the Ethernet layers that need + to be configured.

 + Furthermore, not all "user plane" traffic will be IP. Therefore, the + same solution must also address the use cases where the user plane + traffic is yet another layer of Ethernet frames. There is existing + work describing the problem statement + [I-D.finn-detnet-problem-statement] and the architecture + [I-D.finn-detnet-architecture] for deterministic networking (DetNet) + that targets solutions for time-sensitive (IP/transport) streams with + deterministic properties over Ethernet-based switched networks.

+6.4. Cellular Radio Networks Asks
 + A standard data plane transport specification which is:

 + o Unified among all *hauls

 + o Deployed in a highly deterministic network environment

 + A standard for data flow information models that are:

 + o Aware of the time sensitivity and constraints of the target + networking environment

 + o Aware of underlying deterministic networking services (e.g. on the + Ethernet layer)

 + Mapping the Fronthaul requirements to IETF DetNet + [I-D.finn-detnet-architecture] Section 3 "Providing the DetNet + Quality of Service", the relevant features are:

 + o Zero congestion loss.

 + o Pinned-down paths.

7. Industrial M2M

7.1. Use Case Description

 Industrial Automation in general refers to automation of manufacturing, quality control and material processing. In this "machine to machine" (M2M) use case we consider machine units in a plant floor which periodically exchange data with upstream or downstream machine modules and/or a supervisory controller within a local area network.

 - The actors of Machine to Machine (M2M) communication are Programmable - Logic Controls (PLCs). The communication between PLCs and between - PLCs and the supervisory PLC (S-PLC) is achieved via critical - Control-Data-Streams Figure 10.

 + The actors of M2M communication are Programmable Logic Controllers + (PLCs). Communication between PLCs and between PLCs and the + supervisory PLC (S-PLC) is achieved via critical control/data streams + (Figure 10).

 S (Sensor) \ +-----+ PLC__ \.--. .--. ---| MES | \_( `. _( `./ +-----+ A------( Local )-------------( L2 ) ( Net ) ( Net ) +-------+ /`--(___.-' `--(___.-' ----| S-PLC | S_/ / PLC .--. / +-------+ A_/ \_( `. (Actuator) ( Local ) ( Net ) /`--(___.-'\ / \ A S A

 Figure 10: Current Generic Industrial M2M Network Architecture

 - This use case addresses PLC-related communications; communication to + This use case focuses on PLC-related communications; communication to Manufacturing-Execution-Systems (MESs) is not addressed.

 - This use case addresses only critical Control-Data-Streams; non- - critical traffic between industrial automation applications (such as - communication of state, configuration, set-up, connection to - Manufacturing-Execution-System (MES) and database communication) are - adequately served by currently available prioritizing techniques. - Such traffic can use up to 80% of the total bandwidth required. - There is also a subset of non-time-critical traffic that must be - reliable even though it is not time critical.

 + This use case covers only critical control/data streams; non-critical + traffic between industrial automation applications (such as + communication of state, configuration, set-up, and database + communication) is adequately served by currently available + prioritizing techniques. Such traffic can use up to 80% of the total + bandwidth required. There is also a subset of non-time-critical + traffic that must be reliable even though it is not time sensitive.

 In this use case the primary need for deterministic networking is to provide end-to-end delivery of M2M messages within specific timing constraints, for example in closed loop automation control. Today this level of determinism is provided by proprietary networking technologies. In addition, standard networking technologies are used to connect the local network to remote industrial automation sites, e.g. over an enterprise or metro network which also carries other - types of traffic.
Therefore, deterministic flows need to be - sustained regardless of the amount of other flows in those networks.

 + types of traffic. Therefore, flows that should be forwarded with + deterministic guarantees need to be sustained regardless of the + amount of other flows in those networks.

7.2. Industrial M2M Communication Today

 Today, proprietary networks fulfill the needed timing and - availability for M2M networks, as described in this section. + availability for M2M networks.

 The network topologies used today by industrial automation are similar to those used by telecom networks: Daisy Chain, Ring, Hub and Spoke, and Comb (a subset of Daisy Chain).

 - PLC-related Control-Data-Streams are transmitted periodically and - they are established either with a pre-configured payload or a - payload configured during runtime.

 + PLC-related control/data streams are transmitted periodically and + carry either a pre-configured payload or a payload configured during + runtime.

 - Some industrial applications require time synchronization ("time - sync") at the end nodes. For such time-coordinated PLCs, accuracy of - 1 microsecond is required. Even in the case of "non-time- - coordinated" PLCs time sync may be needed e.g. for timestamping of - sensor data.

 + Some industrial applications require time synchronization at the end + nodes. For such time-coordinated PLCs, accuracy of 1 microsecond is + required. Even in the case of "non-time-coordinated" PLCs, time sync + may be needed, e.g., for timestamping of sensor data.

 Industrial network scenarios require advanced security solutions. Many of the current industrial production networks are physically - separated. Protection of critical flows are handled today by - gateways / firewalls. + separated. Preventing critical flows from being leaked outside a domain + is handled today by filtering policies that are typically enforced in + firewalls.

7.2.1. Transport Parameters

 The Cycle Time defines the frequency of message(s) between industrial actors. The Cycle Time is application dependent, in the range of 1ms - - 100ms for critical Control-Data-Streams. + - 100ms for critical control/data streams.

 Because industrial applications assume deterministic transport for critical Control-Data-Stream parameters (instead of defining latency and delay variation parameters) it is sufficient to fulfill the upper - bound of latency (maximum latency). The communication must ensure a - maximum end to end delivery time of messages in the range of 100 - microseconds to 50 milliseconds depending on the control loop - application. + bound of latency (maximum latency). The underlying networking + infrastructure must ensure a maximum end-to-end delivery time of + messages in the range of 100 microseconds to 50 milliseconds + depending on the control loop application.

 - Bandwidth requirements of Control-Data-Streams are usually calculated - directly from the bytes-per-cycle parameter of the control loop. For - PLC-to-PLC communication one can expect 2 - 32 streams with packet - size in the range of 100 - 700 bytes. For S-PLC to PLCs the number - of streams is higher - up to 256 streams. Usually no more than 20% - of available bandwidth is used for critical Control-Data-Streams. In - today's networks 1Gbps links are commonly used.

 + The bandwidth requirements of control/data streams are usually + calculated directly from the bytes-per-cycle parameter of the control + loop. For PLC-to-PLC communication one can expect 2 - 32 streams + with packet size in the range of 100 - 700 bytes. For S-PLC to PLCs + the number of streams is higher - up to 256 streams. Usually no more + than 20% of available bandwidth is used for critical control/data + streams. In today's networks 1Gbps links are commonly used.
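+   As a rough illustration of what these figures imply for link dimensioning, the minimal Python sketch below uses example points picked from the ranges above; the helper function and the chosen packet size, cycle time, and stream counts are illustrative, not normative:
+
+   # Illustrative only: bandwidth consumed by critical control/data
+   # streams, one packet per stream per cycle, using example points
+   # from the ranges given in this section.
+   def stream_bps(packet_bytes, cycle_time_s):
+       return packet_bytes * 8 / cycle_time_s
+
+   link_bps = 1e9                                # 1 Gbps link
+   plc_to_plc   = 32  * stream_bps(300, 0.010)   # 32 streams, 300 B, 10 ms
+   s_plc_fanout = 256 * stream_bps(300, 0.010)   # 256 streams, 300 B, 10 ms
+
+   print(plc_to_plc / link_bps)     # ~0.008, i.e. under 1% of the link
+   print(s_plc_fanout / link_bps)   # ~0.061, i.e. ~6%, within the ~20% noted above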
 Most PLC control loops are rather tolerant of packet loss, however - critical Control-Data-Streams accept no more than 1 packet loss per + critical control/data streams accept no more than 1 packet loss per consecutive communication cycle (i.e. if a packet gets lost in cycle "n", then the next cycle ("n+1") must be lossless). After two or more consecutive packet losses the network may be considered to be "down" by the Application.

 As network downtime may impact the whole production system the required network availability is rather high (99.999%).

 Based on the above parameters we expect that some form of redundancy will be required for M2M communications, however any individual solution depends on several parameters including cycle time, delivery time, etc.

7.2.2. Stream Creation and Destruction

 - In an industrial environment, critical Control-Data-Streams are + In an industrial environment, critical control/data streams are created rather infrequently, on the order of ~10 times per day / week - / month. Most of these critical Control-Data-Streams get created at + / month. Most of these critical control/data streams get created at machine startup, however flexibility is also needed during runtime, for example when adding or removing a machine. Going forward as production systems become more flexible, we expect a significant increase in the rate at which streams are created, changed and destroyed.

7.3. Industrial M2M Future

 We would like to see the various proprietary networks replaced with a - converged standards-based network with deterministic properties that - can satisfy the timing and reliability constraints described above. + converged IP-standards-based network with deterministic properties + that can satisfy the timing, security and reliability constraints + described above.

7.4. Industrial M2M Asks

 - We can summarize the current requirements stated above as follows:

 + o Converged IP-based network

 - +-------------------------+--------------+ - | Metric | Requirement | - +-------------------------+--------------+ - | Sync Accuracy | 1 usec | - | | | - | Message Delivery Time | 100us - 50ms | - | | | - | Packet loss (burstless) | 0.1-1 % | - | | | - | Availability | 99.999 % | - +-------------------------+--------------+

 + o Deterministic behavior (bounded latency and jitter)

 - Table 14: Actor-to-Actor Timing Parameters

 + o High availability (99.999 %, presumably through redundancy)

 -7.5. Acknowledgements

 + o Low message delivery time (100us - 50ms)

 - The authors would like to thank Feng Chen and Marcel Kiessling for - their comments and suggestions.

 + o Low packet loss (burstless, 0.1-1 %)

 -8. Other Use Cases

 + o Precise time synchronization accuracy (1us)

 - (This section was derived from draft-zha-detnet-use-case-00)

 + o Security (e.g. prevent critical flows from being leaked between + physically separated networks)
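+   Read numerically, two of these asks translate as follows (an illustrative Python sketch only; the figures are those given in Section 7.2.1, and independent per-cycle losses are assumed):
+
+   # Illustrative only: what the availability and loss asks amount to.
+   SECONDS_PER_YEAR = 365 * 24 * 3600
+
+   availability = 0.99999                              # "five nines"
+   print((1 - availability) * SECONDS_PER_YEAR / 60)   # ~5.3 minutes of downtime per year
+
+   # The network counts as "down" after two consecutive lost packets
+   # (Section 7.2.1); with an independent 1% per-cycle loss rate, the
+   # probability that a given cycle starts such a double loss is:
+   loss_rate = 0.01
+   print(loss_rate ** 2)                               # 0.0001 per cycle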
+8. Other Use Cases

8.1. Introduction

 The rapid growth of today's communication systems and their access into almost all aspects of daily life has led to great dependency on the services they provide. The communication network, as it is today, has applications such as multimedia and peer-to-peer file sharing distribution that require Quality of Service (QoS) guarantees in terms of delay and jitter to maintain a certain level of performance. Meanwhile, mobile wireless communications has become an important

@@ -3272,20 +3131,79 @@

 o High availability (99.9999 percent up time requested, but may be up to twelve 9s)

 o Reliability, redundancy (lives at stake)

 o Security (from failures, attackers, misbehaving devices - sensitive to both packet content and arrival time)

10. Acknowledgments

+10.1. Pro Audio

 + This section was derived from draft-gunther-detnet-proaudio-req-01.

 + The editors would like to acknowledge the help of the following + individuals and the companies they represent:

 + Jeff Koftinoff, Meyer Sound

 + Jouni Korhonen, Associate Technical Director, Broadcom

 + Pascal Thubert, CTAO, Cisco

 + Kieran Tyrrell, Sienda New Media Technologies GmbH

+10.2. Utility Telecom

 + This section was derived from draft-wetterwald-detnet-utilities-reqs-02.

 + Faramarz Maghsoodlou, Ph.D., IoT Connected Industries and Energy Practice, Cisco

 + Pascal Thubert, CTAO, Cisco

+10.3. Building Automation Systems

 + This section was derived from draft-bas-usecase-detnet-00.

+10.4. Wireless for Industrial

 + This section was derived from draft-thubert-6tisch-4detnet-01.

 + This specification derives from the 6TiSCH architecture, which is the + result of multiple interactions, in particular during the 6TiSCH + (bi)Weekly Interim call, relayed through the 6TiSCH mailing list at + the IETF.

 + The authors wish to thank: Kris Pister, Thomas Watteyne, Xavier + Vilajosana, Qin Wang, Tom Phinney, Robert Assimiti, Michael + Richardson, Zhuo Chen, Malisa Vucinic, Alfredo Grieco, Martin Turon, + Dominique Barthel, Elvis Vogli, Guillaume Gaillard, Herman Storey, + Maria Rita Palattella, Nicola Accettura, Patrick Wetterwald, Pouria + Zand, Raghuram Sudhaakar, and Shitanshu Shah for their participation + and various contributions.

+10.5. Cellular Radio

 + This section was derived from draft-korhonen-detnet-telreq-00.

+10.6. Industrial M2M

 + The authors would like to thank Feng Chen and Marcel Kiessling for + their comments and suggestions.

+10.7. Other

 + This section was derived from draft-zha-detnet-use-case-00.

 This document has benefited from reviews, suggestions, comments and proposed text provided by the following members, listed in alphabetical order: Jing Huang, Junru Lin, Lehong Niu and Oliver Huang.

11. Informative References

 [ACE] IETF, "Authentication and Authorization for Constrained Environments", .