Networking Working Group J. Martocci, Ed.
Internet-Draft Johnson Controls Inc.
Intended status: Informational Pieter De Mil
Expires: February 7, 2010 Ghent University IBCN
W. Vermeylen
Arts Centre Vooruit
Nicolas Riou
Schneider Electric
August 7, 2009

Building Automation Routing Requirements in Low Power and Lossy
Networks
draft-ietf-roll-building-routing-reqs-06
Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

This Internet-Draft will expire on February 7, 2010.
Copyright Notice

Copyright (c) 2009 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents in effect on the date of
publication of this document (http://trustee.ietf.org/license-info).
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document.
Abstract

The Routing Over Low power and Lossy network (ROLL) Working Group has
been chartered to work on routing solutions for Low Power and Lossy
networks (LLN) in various markets: Industrial, Commercial (Building),
Home and Urban networks. Pursuant to this effort, this document
defines the IPv6 routing requirements for building automation.
Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
Table of Contents

   1. Terminology
   2. Introduction
   3. Overview of Building Automation Networks
      3.1. Introduction
      3.2. Building Systems Equipment
           3.2.1. Sensors/Actuators
           3.2.2. Area Controllers
           3.2.3. Zone Controllers
      3.3. Equipment Installation Methods
      3.4. Device Density
           3.4.1. HVAC Device Density
           3.4.2. Fire Device Density
           3.4.3. Lighting Device Density
           3.4.4. Physical Security Device Density
   4. Traffic Pattern
   5. Building Automation Routing Requirements
      5.1. Device and Network Commissioning
           5.1.1. Zero-Configuration Installation
           5.1.2. Local Testing
           5.1.3. Device Replacement
      5.2. Scalability
           5.2.1. Network Domain
           5.2.2. Peer-to-Peer Communication
      5.3. Mobility
           5.3.1. Mobile Device Requirements
      5.4. Resource Constrained Devices
           5.4.1. Limited Memory Footprint on Host Devices
           5.4.2. Limited Processing Power for Routers
           5.4.3. Sleeping Devices
      5.5. Addressing
      5.6. Manageability
           5.6.1. Diagnostics
           5.6.2. Route Tracking
      5.7. Route Selection
           5.7.1. Route Cost
           5.7.2. Route Adaptation
           5.7.3. Route Redundancy
           5.7.4. Route Discovery Time
           5.7.5. Route Preference
           5.7.6. Real-time Performance Measures
           5.7.7. Prioritized Routing
      5.8. Security Requirements
           5.8.1. Authentication
           5.8.2. Encryption
           5.8.3. Disparate Security Policies
           5.8.4. Routing Security Policies To Sleeping Devices
   6. IANA Considerations
   7. Acknowledgments
   8. References
      8.1. Normative References
      8.2. Informative References
   9. Appendix A: Additional Building Requirements
      9.1. Additional Commercial Product Requirements
           9.1.1. Wired and Wireless Implementations
           9.1.2. World-wide Applicability
      9.2. Additional Installation and Commissioning Requirements
           9.2.1. Unavailability of an IP Network
      9.3. Additional Network Requirements
           9.3.1. TCP/UDP
           9.3.2. Interference Mitigation
           9.3.3. Packet Reliability
           9.3.4. Merging Commissioned Islands
           9.3.5. Adjustable Routing Table Sizes
           9.3.6. Automatic Gain Control
           9.3.7. Device and Network Integrity
      9.4. Additional Performance Requirements
           9.4.1. Data Rate Performance
           9.4.2. Firmware Upgrades
           9.4.3. Route Persistence
1. Terminology

For description of the terminology used in this specification, please
see [I-D.ietf-roll-terminology].

2. Introduction
The Routing Over Low power and Lossy network (ROLL) Working Group has
been chartered to work on routing solutions for Low Power and Lossy
networks (LLN) in various markets: Industrial, Commercial (Building),
Home and Urban networks. Pursuant to this effort, this document
defines the IPv6 routing requirements for building automation.
Commercial buildings have been fitted with pneumatic and subsequently
electronic communication pathways connecting sensors to their
controllers for over one hundred years. Recent economic and
technical advances in wireless communication allow facilities to
increasingly utilize a wireless solution in lieu of a wired solution,
thereby reducing installation costs while maintaining highly reliable
communication.
The cost benefits and ease of installation of wireless sensors allow
customers to further instrument their facilities with additional
sensors.
vertical markets including universities; hospitals; government
facilities; Kindergarten through High School (K-12); pharmaceutical
manufacturing facilities; and single-tenant or multi-tenant office
buildings. These buildings range in size from 100K sqft structures
(5-story office buildings) to 1M sqft skyscrapers (100-story towers)
to complex government facilities such as the Pentagon. The described
topology is meant to be the model to be used in all these types of
environments, but clearly must be tailored to the building class,
building tenant and vertical market being served.
Section 3 describes the necessary background to understand the
context of building automation including the sensor, actuator, area
controller and zone controller layers of the topology; typical device
density; and installation practices.

Section 4 defines the traffic flow of the aforementioned sensors,
actuators and controllers in commercial buildings.

Section 5 defines the full set of IPv6 routing requirements for
commercial buildings.

Appendix A documents important commercial building requirements that
are out of scope for routing yet will be essential to the final
acceptance of the protocols used within the building.

Section 3 and Appendix A are mainly included for educational
purposes.

The expressed aim of this document is to provide the set of IPv6
routing requirements for LLNs in buildings as described in Section 5.
3. Overview of Building Automation Networks
3.1. Introduction

To understand the network systems requirements of a facility
management system in a commercial building, this document uses a
framework to describe the basic functions and composition of the
system. An FMS is a hierarchical system of sensors, actuators,
controllers and user interface devices that interoperate to provide a
safe and comfortable environment while constraining energy costs.

An FMS is divided functionally across similar, but distinct, building
subsystems such as heating, ventilation and air conditioning (HVAC);
Fire; Security; Lighting; Shutters and Elevator control systems as
denoted in Figure 1.
Much of the makeup of an FMS is optional and installed at the behest
of the customer. Sensors and actuators have no standalone
functionality. All other devices support partial or complete
standalone functionality. These devices can optionally be tethered
to form a more cohesive system. The customer requirements dictate
the level of integration within the facility. This architecture
provides excellent fault tolerance since each node is designed to
operate in an independent mode if the higher layers are unavailable.
Figure 1: Building Systems and Devices
3.2. Building Systems Equipment

3.2.1. Sensors/Actuators

As Figure 1 indicates, an FMS may be composed of many functional
stacks or silos that are interoperably woven together via Building
Applications. Each silo has an array of sensors that monitor the
environment and actuators that affect the environment as determined
by the upper layers of the FMS topology. The sensors typically are
at the edge of the network structure providing environmental data
into the system. The actuators are the sensors' counterparts
modifying the characteristics of the system based on the sensor data
and the applications deployed.
3.2.2. Area Controllers

An area describes a small physical locale within a building,
typically a room. HVAC (temperature and humidity) and Lighting (room
lighting, shades, solar loads) vendors often deploy area controllers.
Area controls are fed by sensor inputs that monitor the
environmental conditions within the room. Common sensors found in
many rooms that feed the area controllers include temperature,
occupancy, lighting load, solar load and relative humidity. Sensors
found in specialized rooms (such as chemistry labs) might include air
flow, pressure, CO2 and CO particle sensors. Room actuation includes
temperature setpoint, lights and blinds/curtains.
3.2.3. Zone Controllers

Zone Control supports a similar set of characteristics as Area
Control, albeit over an extended space. A zone is normally a logical
grouping or functional division of a commercial building. A zone may
also coincidentally map to a physical locale such as a floor.

Zone Control may have direct sensor inputs (smoke detectors for
fire), controller inputs (room controllers for air-handlers in HVAC)
or both (door controllers and tamper sensors for security). Like
area/room controllers, zone controllers are standalone devices that
operate independently or may be attached to the larger network for
more synergistic control.
3.3. Equipment Installation Methods

Commercial controllers have been traditionally deployed in a facility
using serial media following the EIA-485 electrical standard
operating nominally at 76800 baud with distances upward to 15000
feet. EIA-485 is a multi-drop media allowing upwards to 255 devices
to be connected to a single trunk.

Wired FMS installation is a multifaceted procedure depending on the
extent of the system and the software interoperability requirement.
However, at the sensor/actuator and controller level, the procedure
is typically a two or three step process.

The installer arrives on-site during the construction of the building
prior to drywall and ceiling installation. The installer allocates
wall space and installs the controller and sensor networks. The
Building Controllers and Enterprise network are not normally
installed until months later. The electrician completes the task by
running a verification procedure that confirms proper wired or
wireless continuity between the devices.

Months later, the higher order controllers are installed, programmed
and commissioned together with the previously installed sensors,
actuators and controllers. In most cases the IP network is still not
in place. The Building Controllers are completely commissioned using
a crossover cable or a temporary IP switch together with static IP
addresses.

After occupancy, when the IP network is operational, the FMS often
connects to the enterprise network. Dynamic IP addresses replace
static IP addresses. VLANs often segregate the facility and IT
systems. For multi-building, multi-site facilities, VPNs, NATs and
firewalls are also introduced.
3.4. Device Density
Device density differs depending on the application and as dictated
by the local building code requirements. The following sections
detail typical installation densities for different applications.

3.4.1. HVAC Device Density
HVAC room applications typically have sensors/actuators and
controllers spaced about 50 ft apart. In most cases there is a 3:1
ratio of sensors/actuators to controllers. That is, for each room
there is an installed temperature sensor, flow sensor and damper
actuator for the associated room controller.

HVAC equipment room applications are quite different. An air handler
system may have a single controller with upwards to 25 sensors and
actuators within 50 ft of the air handler. A chiller or boiler is
installed per floor, but many times services a wing, building or the
entire complex via a central plant.

These numbers are typical. In special cases, such as clean rooms,
operating rooms, pharmaceuticals and labs, the ratio of sensors to
controllers can increase by a factor of three. Tenant installations
such as malls would opt for packaged units where much of the sensing
and actuation is integrated into the unit. Here a single device
address would serve the entire unit.
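The densities above lend themselves to a quick sizing exercise. The
Python sketch below estimates the HVAC node count for a hypothetical
100K sqft floorplate using the 50 ft device spacing and the 3:1
sensor/actuator-to-controller ratio quoted in this section; the floor
area and per-controller coverage are illustrative assumptions, not
figures taken from this document.

   # Rough HVAC node-count estimate for one floor, based on the
   # densities in Section 3.4.1.  Floor area and per-controller
   # coverage are illustrative assumptions, not normative figures.

   FLOOR_AREA_SQFT = 100_000        # assumed floorplate
   DEVICE_SPACING_FT = 50           # devices spaced about 50 ft apart
   SENSORS_PER_CONTROLLER = 3       # 3:1 sensor/actuator to controller

   # Treat each room controller as covering a 50 ft x 50 ft cell.
   cell_area = DEVICE_SPACING_FT ** 2
   controllers = FLOOR_AREA_SQFT // cell_area
   sensors_actuators = controllers * SENSORS_PER_CONTROLLER

   print(f"room controllers:  {controllers}")        # ~40
   print(f"sensors/actuators: {sensors_actuators}")  # ~120
   print(f"total HVAC nodes:  {controllers + sensors_actuators}")

Estimates of this kind, scaled across floors and the other subsystems
described in this section, are what the scalability figures in
Section 5.2.1 must accommodate.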
3.4.2. Fire Device Density

Fire systems are much more uniformly installed with smoke detectors
installed about every 50 feet. This is dictated by local building
codes. Fire pull boxes are installed uniformly about every 150 feet.
A fire controller will service a floor or wing. The fireman's fire
panel will service the entire building and typically is installed in
the atrium.
3.4.3. Lighting Device Density

Lighting is also very uniformly installed with ballasts installed
approximately every 10 feet. A lighting panel typically serves 48 to
64 zones. Wired systems tether many lights together into a single
zone. Wireless systems configure each fixture independently to
increase flexibility and reduce installation costs.
3.4.4. Physical Security Device Density

Security systems are non-uniformly oriented with heavy density near
doors and windows and lighter density in the building interior space.
The recent influx of interior and perimeter camera systems is
increasing the security footprint. These cameras are atypical
endpoints requiring upwards to 1 megabit/second (Mbit/s) data rates
per camera as contrasted by the few kbit/s needed by most other FMS
sensing equipment. Previously, camera systems had been deployed on
proprietary wired high-speed networks. More recent implementations
utilize wired or wireless IP cameras integrated into the enterprise
LAN.
4. Traffic Pattern

The independent nature of the automation subsystems within a building
plays heavily into the network traffic patterns. Much of the
real-time sensor environmental data and actuator control stays within
the local LLN environment, while alarming and other event data will
percolate to higher layers.
Each sensor in the LLN unicasts (P2P) about 200 bytes of sensor data
to its associated controller each minute and expects an application
acknowledgement unicast returned from the destination. Each
controller unicasts messages at a nominal rate of 6 kB/min to peer or
supervisory controllers. 30% of each node's packets are destined for
other nodes within the LLN; 70% of each node's packets are destined
for an aggregation device (MP2P) and routed off the LLN. These
messages also require a unicast acknowledgement from the destination.
The above values assume direct node-to-node communication; meshing
and error retransmissions are not considered.
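As a rough illustration of what these figures imply for aggregate
offered load, the Python sketch below multiplies the per-node rates
quoted above across an assumed 255-device subnetwork and splits the
result into intra-LLN (P2P) and off-LLN (MP2P) traffic. The node
counts are illustrative assumptions only, and acknowledgements, mesh
forwarding and retransmissions are ignored, as in the text above.

   # Back-of-envelope steady-state load from the per-node rates quoted
   # in Section 4.  Node counts are illustrative assumptions.

   SENSOR_BYTES_PER_MIN = 200        # each sensor unicasts ~200 B/min
   CONTROLLER_BYTES_PER_MIN = 6_000  # each controller unicasts ~6 kB/min
   INTRA_LLN_FRACTION = 0.30         # 30% of packets stay inside the LLN
   OFF_LLN_FRACTION = 0.70           # 70% exit via an aggregation point

   def offered_load(sensors: int, controllers: int) -> dict:
       """Return offered load in bytes/second, ignoring ACKs, meshing
       and retransmissions."""
       per_min = (sensors * SENSOR_BYTES_PER_MIN
                  + controllers * CONTROLLER_BYTES_PER_MIN)
       per_sec = per_min / 60.0
       return {
           "total_Bps": per_sec,
           "intra_LLN_Bps": per_sec * INTRA_LLN_FRACTION,
           "off_LLN_Bps": per_sec * OFF_LLN_FRACTION,
       }

   # Example: a 255-device subnetwork with 200 sensors and 55 controllers.
   print(offered_load(sensors=200, controllers=55))

The large MP2P share is also a reminder of where load concentrates:
as noted later in this section, bottlenecks tend to form at the
funnel point toward the aggregation and supervisory devices.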
Multicasts (P2MP) to all nodes in the LLN occur for node and object
discovery when the network is first commissioned. This data is
typically a one-time bind that is henceforth persisted. Lighting
systems will also readily use multicasting during normal operations
to turn banks of lights 'on' and 'off' simultaneously.
FMS systems may be either polled or event based. Polled data systems
will generate a uniform and constant packet load on the network.
Polled architectures, however, have proven not to be scalable.
Today, most vendors have developed event-based systems which pass
data on event. These systems are highly scalable and generate little
data on the network at quiescence. Unfortunately, these systems will
generate a heavy load on startup since all initial sensor data must
migrate to the controller level. They also will generate a temporary
but heavy load during firmware upgrades. This latter load can
normally be mitigated by performing these downloads during off-peak
hours.
Devices will also need to reference peers periodically for sensor
data or to coordinate operation across systems. Normally, though,
data will migrate from the sensor level upwards through the local,
area then supervisory level. Traffic bottlenecks will typically form
at the funnel point from the area controllers to the supervisory
controllers.
Initial system startup after a controlled outage or unexpected power
failure puts tremendous stress on the network and on the routing
algorithms. An FMS is comprised of a myriad of control algorithms at
the room, area, zone, and enterprise layers. When these control
algorithms are at quiescence, the real-time data rate is small and
the network will not saturate. An overall network traffic load of
6 KBps is typical at quiescence. However, upon any power loss, the
control loops and real-time data quickly atrophy. A ten-minute power
outage may require many hours to regain building control. Traffic
flow may increase ten-fold until the building control stabilizes.

Power disruptions are unexpected and in most cases will immediately
impact line-powered devices. Power disruptions, however, are
transparent to battery-powered devices. These devices will continue
to attempt to access the LLN during the outage. Battery-powered
devices designed to buffer data that has not been delivered will
further stress the network operation when power returns.

Upon restart, line-powered devices will naturally dither due to
primary equipment delays or variance in the device self-tests.
However, most line-powered devices will be ready to access the LLN
within 10 seconds of power-up. Empirical testing indicates that
routes acquired during startup will tend to be very oblique since the
available neighbor lists are incomplete. This demands an adaptive
routing protocol to allow for route optimization as the network
stabilizes.
5. Building Automation Routing Requirements

Following are the building automation routing requirements for
networks used to integrate building sensor, actuator and control
products. These requirements are written not presuming any
preordained network topology, physical media (wired) or radio
technology (wireless).
5.1. Device and Network Commissioning

Building control systems typically are installed and tested by
electricians having little computer knowledge and no network
knowledge whatsoever. These systems are often installed during the
building construction phase before the drywall and ceilings are in
place. For new construction projects, the building enterprise IP
network is not in place during installation of the building control
system. For retrofit applications, the installer will still operate
independently from the IP network so as not to affect network
operations during the installation phase.
Local (ad hoc) testing of sensors and room controllers must be
completed before the tradesperson can complete his/her work. This
testing allows the tradesperson to verify correct client (e.g. light
switch) and server (e.g. light ballast) operation before leaving the
jobsite.
In traditional wired systems, correct operation of a light
switch/ballast pair was as simple as flipping on the light switch.
In wireless applications, the tradesperson has to assure the same
operation, yet be sure the operation of the light switch is
associated with the proper ballast.

System level commissioning will later be performed by a more
computer-savvy person with access to a commissioning device (e.g. a
laptop computer). The completely installed and commissioned
enterprise IP network may or may not be in place at this time.
Following are the installation routing requirements.

5.1.1. Zero-Configuration Installation

It MUST be possible to fully commission network devices without
requiring any additional commissioning device (e.g. laptop).
5.1.2. Local Testing
The local sensors and requisite actuators and controllers must be
testable within the locale (e.g. room) to assure communication
connectivity and local operation without requiring other systemic
devices.

LLN nodes SHOULD be testable for end-to-end link connectivity and
application conformance without requiring other network
infrastructure.
5.1.3. Device Replacement
Replacement devices need to be plug-and-play with no additional setup
compared to what is normally required for a new device. Devices
referencing data in the replaced device MUST be able to reference
data in its replacement without requiring reconfiguration. Thus,
such a reference cannot be a hardware identifier, such as the MAC
address, nor a hard-coded route. If such a reference is an IP
address, the replacement device MUST be assigned the IP address
previously bound to the replaced device. Alternatively, if the
logical equivalent of a hostname is used for the reference, it must
be translated to the replacement device's IP address.
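One way to read this requirement is that application references
should bind to a stable logical name rather than to a MAC address or
a cached route, with the name resolved to whatever address the
replacement device currently holds. The Python sketch below
illustrates that indirection; the registry, the naming scheme and the
addresses are hypothetical, not something defined by this document.

   # Hypothetical illustration of referencing a peer by logical name so
   # that a replaced device can be reached without reconfiguring the
   # devices that reference it.  The registry stands in for whatever
   # name-resolution service the deployment provides.

   registry = {
       # logical name       -> current IPv6 address of the device
       "floor3.room12.temp1": "2001:db8::a1",
   }

   def resolve(logical_name: str) -> str:
       """Return the address currently bound to a logical reference."""
       return registry[logical_name]

   def request_reading(logical_name: str) -> None:
       address = resolve(logical_name)  # resolved at send time, not cached
       print(f"request reading from {logical_name} at {address}")

   request_reading("floor3.room12.temp1")

   # After the sensor is replaced, only the registry entry changes; every
   # device referencing "floor3.room12.temp1" keeps working unchanged.
   registry["floor3.room12.temp1"] = "2001:db8::b7"
   request_reading("floor3.room12.temp1")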
5.2. Scalability

Building control systems are designed for facilities from 50000 sq.
ft. to 1M+ sq. ft. The networks that support these systems must
cost-effectively scale accordingly. In larger facilities
installation may occur simultaneously on various wings or floors, yet
the end system must seamlessly merge. Following are the scalability
requirements.
5.2.1. Network Domain

The routing protocol MUST be able to support networks with at least
2000 nodes where 1000 nodes would act as routers and the other 1000
nodes would be hosts. Subnetworks (e.g. rooms, primary equipment)
within the network must support upwards to 255 sensors and/or
actuators.
5.2.2. Peer-to-Peer Communication

The data domain for commercial FMS systems may sprawl across a vast
portion of the physical domain. For example, a chiller may reside in
the facility's basement due to its size, yet the associated cooling
towers will reside on the roof. The cold-water supply and return
pipes serpentine through all the intervening floors. The feedback
control loops for these systems require data from across the
facility.

A network device MUST be able to communicate in a point-to-point
manner with any other device on the network. Thus, the routing
protocol MUST provide routes between arbitrary hosts within the
appropriate administrative domain.
5.3. Mobility

Most devices are affixed to walls or installed on ceilings within
buildings. Hence the mobility requirements for commercial buildings
are few. However, in wireless environments location tracking of
occupants and assets is gaining favor. Asset tracking applications,
such as tracking capital equipment (e.g. wheel chairs) in medical
facilities, require monitoring movement with granularity of a minute.
This soft real-time performance requirement is reflected in the
performance requirements below.
5.3.1. Mobile Device Requirements

To minimize network dynamics, mobile devices should not be allowed to
act as forwarding devices (routers) for other devices in the LLN.
Network configuration should allow devices to be configured as
routers or hosts.

5.3.1.1. Device Mobility within the LLN

An LLN typically spans a single floor in a commercial building.
Mobile devices may move within this LLN. For example, a wheel chair
may be moved from one room on the floor to another room on the same
floor.

A mobile LLN device that moves within the confines of the same LLN
SHOULD reestablish end-to-end communication to a fixed device also in
the LLN within 5 seconds after it ceases movement. The LLN network
convergence time should be less than 10 seconds once the mobile
device stops moving.

5.3.1.2. Device Mobility across LLNs

A mobile device may move across LLNs, such as a wheel chair being
moved to a different floor.

A mobile device that moves outside its original LLN SHOULD
reestablish end-to-end communication to a fixed device also in the
new LLN within 10 seconds after the mobile device ceases movement.
The network convergence time should be less than 20 seconds once the
mobile device stops moving.
5.4. Resource Constrained Devices

Sensing and actuator device processing power and memory may be 4
orders of magnitude less (i.e. 10,000x) than many more traditional
client devices on an IP network. The routing mechanisms must
therefore be tailored to fit these resource constrained devices.

5.4.1. Limited Memory Footprint on Host Devices

The software size requirement for non-routing devices (e.g. sleeping
sensors and actuators) SHOULD be implementable in 8-bit devices with
no more than 128KB of memory.

5.4.2. Limited Processing Power for Routers

The software size requirements for routing devices (e.g. room
controllers) SHOULD be implementable in 8-bit devices with no more
than 256KB of flash memory.
5.4.3. Sleeping Devices
Sensing devices will, in some cases, utilize battery power or energy
harvesting techniques for power and will operate mostly in a sleep
mode to maintain power consumption within a modest budget. The
routing protocol MUST take into account device characteristics such
as power budget. If such devices provide routing, rather than merely
host connectivity, the energy costs associated with such routing
need to fit within the power budget. If the mechanisms for duty
cycling dictate very long response times or specific temporal
scheduling, routing will need to take such constraints into account.
Typically, battery life (2000 mAh) needs to extend for at least 5
years when the sensing device is transmitting its data (200 octets)
once per minute over a low power transceiver (25 mA). This requires
that sleeping devices MUST, upon awakening, route their data to the
destination and receive an ACK from the destination within 20 msec.
Additionally, awakened sleepy devices MUST be able to receive
awaiting inbound data within 20 msec.
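The figures above can be sanity-checked with a simple duty-cycle
calculation. The Python sketch below estimates battery life from the
quoted 2000 mAh capacity, 25 mA transceiver current and the 20 msec
transmit and receive windows; the sleep current is an illustrative
assumption not given in this document.

   # Duty-cycle battery-life estimate for a sleeping sensor, using the
   # figures in Section 5.4.3.  The sleep current is assumed.

   BATTERY_MAH = 2000.0
   RADIO_ON_MA = 25.0          # transceiver current while awake
   SLEEP_MA = 0.005            # assumed 5 uA sleep current
   AWAKE_S_PER_CYCLE = 0.040   # ~20 ms transmit + ~20 ms receive window
   CYCLE_S = 60.0              # one report per minute

   duty_cycle = AWAKE_S_PER_CYCLE / CYCLE_S
   avg_current_ma = RADIO_ON_MA * duty_cycle + SLEEP_MA * (1 - duty_cycle)
   life_years = BATTERY_MAH / avg_current_ma / (24 * 365)

   print(f"average current: {avg_current_ma:.4f} mA")   # ~0.022 mA
   print(f"estimated life:  {life_years:.1f} years")    # comfortably > 5

The estimate also shows why the 20 msec bound matters: the radio's
contribution to average current scales linearly with the awake
window, so a routing protocol that forces longer link-on times eats
directly into the 5-year target.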
Proxies with unconstrained power budgets are often used to cache
the inbound data for a sleeping device until the device awakens. In
such cases, the routing protocol MUST discover the capability of a
node to act as a proxy during route calculation; then deliver the
packet to the assigned proxy for later delivery to the sleeping
device upon its next awakened cycle.
5.5. Addressing

Facility Management systems require different communication schemes
to solicit or post network information. Multicasts or anycasts need
to be used to resolve unresolved references within a device when the
device first joins the network.

As with any network communication, multicasting should be minimized.
This is especially a problem for small embedded devices with limited
network bandwidth. Multicasts are typically used for network joins
and application binding in embedded systems. Routing MUST support
anycast, unicast, and multicast.
5.6. Manageability

In addition to the initial installation of the system, it is equally
important for the ongoing maintenance of the system to be simple and
inexpensive.
5.6.1. Diagnostics

To improve diagnostics, the routing protocol SHOULD be able to be
placed in and out of 'verbose' mode. Verbose mode is a temporary
debugging mode that provides additional communication information,
including at least the total number of routed packets sent and
received, the number of routing failures (no route available),
neighbor table members, and routing table entries.
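As a non-normative illustration, the verbose-mode information listed
above could be exposed through a statistics structure such as the
hypothetical one below.

   #include <stdint.h>
   #include <stdbool.h>

   /* Hypothetical verbose-mode statistics mirroring the items
    * listed in this section.                                        */
   struct verbose_stats {
       uint32_t routed_pkts_sent;      /* routed packets sent        */
       uint32_t routed_pkts_received;  /* routed packets received    */
       uint32_t routing_failures;      /* drops: no route available  */
       uint16_t neighbor_count;        /* neighbor table members     */
       uint16_t route_count;           /* routing table entries      */
   };

   /* Verbose mode is temporary: counters are gathered only while
    * the debugging mode is enabled.                                 */
   struct diagnostics {
       bool verbose_enabled;
       struct verbose_stats stats;
   };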
5.6.2. Route Tracking

Route diagnostics SHOULD be supported, providing information such as
route quality, number of hops, and available alternate active routes
with associated costs. Route quality is the relative measure of
'goodness' of the selected source-to-destination route as compared to
alternate routes. This composite value may be measured as a function
of hop count, signal strength, available power, existing active
routes, or any other criteria deemed by ROLL to be the route cost
differentiator.
5.7. Route Selection

Route selection determines the reliability and quality of the
communication paths among the devices by optimizing routes over time
and resolving any nuances that develop at system startup, when nodes
are asynchronously adding themselves to the network. Route adaptation
will also reduce latency if the route costs consider hop count as a
cost attribute.
5.7.1. Route Cost

The routing protocol MUST support a metric of route quality and
optimize route selection according to such metrics within constraints
established for links along the routes. These metrics SHOULD reflect
criteria such as signal strength, available bandwidth, hop count,
energy availability and communication error rates.
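As a non-normative illustration only, a composite link cost could
combine the example criteria above as sketched below; the weights and
scaling are hypothetical and are not mandated by this document.

   #include <stdint.h>

   /* Hypothetical per-link attributes for the metrics named above.  */
   struct link_metrics {
       uint8_t rssi_quality;    /* 0 (worst) .. 255 (best) signal    */
       uint8_t error_rate_pct;  /* observed frame error rate, 0..100 */
       uint8_t energy_pct;      /* remaining energy at next hop      */
   };

   /* Lower cost is better.  Each hop contributes a base cost so that
    * hop count remains part of the metric; weights are illustrative. */
   uint16_t link_cost(const struct link_metrics *m)
   {
       uint16_t cost = 100;                        /* base per hop   */
       cost += (uint16_t)(255 - m->rssi_quality);  /* weak links     */
       cost += (uint16_t)m->error_rate_pct * 4;    /* lossy links    */
       cost += (uint16_t)(100 - m->energy_pct);    /* low energy     */
       return cost;
   }

   /* A route cost is then the sum of its link costs.                */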
5.7.2. Route Adaptation

Communication routes MUST adapt over time toward optimality with
respect to the chosen metric(s) (e.g. signal quality).
5.7.3. Route Redundancy

The routing layer SHOULD be configurable to allow secondary and
tertiary routes to be established and used upon failure of the
primary route.
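One possible (non-normative) realization is a destination entry that
holds an ordered list of alternate next hops, as sketched below with
hypothetical names.

   #include <stdint.h>

   #define ALT_ROUTES 3   /* primary, secondary, tertiary            */

   struct route_entry {
       uint16_t next_hop[ALT_ROUTES]; /* best first; 0 = unused      */
       uint16_t cost[ALT_ROUTES];     /* cost of each alternate      */
       uint8_t  active;               /* index of route in use       */
   };

   /* On failure of the active route, fall back to the next
    * configured alternate.  Returns the new next hop, or 0 if no
    * alternate remains and route repair must be triggered.          */
   uint16_t route_failover(struct route_entry *r)
   {
       while (r->active + 1 < ALT_ROUTES) {
           r->active++;
           if (r->next_hop[r->active] != 0)
               return r->next_hop[r->active];
       }
       return 0;
   }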
5.7.4. Route Discovery Time

Mission critical commercial applications (e.g. Fire, Security)
require reliable communication and guaranteed end-to-end delivery of
all messages in a timely fashion. Application layer time-outs must be
selected judiciously to cover anomalous conditions such as lost
packets and/or route discoveries, yet not be set so large as to
overdamp the network response. If route discovery occurs at packet
transmission time, it SHOULD NOT add more than 120 ms of latency to
the packet delivery time.
5.7.5. Route Preference

The routing protocol SHOULD allow for the support of manually
configured static preferred routes, for example to let the installer
select preferred routes based on the known spatial layout of the
communicating devices.
6. Traffic Pattern
The independent nature of the automation systems within a building
strongly influences the network traffic patterns. Much of the
real-time sensor data stays within the local environment. Alarming
and other event data will percolate to higher layers.
Systemic data may be either polled or event based. Polled-data
systems generate a uniform packet load on the network; this
architecture has proven not to be scalable. Most vendors have
therefore developed event-based systems, which pass data only when an
event occurs. These systems are highly scalable and generate little
network traffic at quiescence. Unfortunately, such systems will
generate a heavy load on startup, since all the initial data must
migrate to the controller level. They will also generate a temporary
but heavy load during firmware upgrades; this latter load can
normally be mitigated by performing these downloads during off-peak
hours.
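The event-based behavior described above is commonly realized as
change-of-value reporting; the following non-normative sketch, with
hypothetical names and thresholds, illustrates why such systems stay
quiet at quiescence.

   #include <stdbool.h>

   /* Hypothetical change-of-value (COV) reporter: transmit only when
    * the sensed value moves by more than a configured increment,
    * rather than polling at a fixed rate.                            */
   struct cov_reporter {
       float last_reported;   /* value sent in the previous report    */
       float cov_increment;   /* minimum change that triggers a report */
   };

   bool cov_should_report(struct cov_reporter *r, float current_value)
   {
       float delta = current_value - r->last_reported;
       if (delta < 0.0f)
           delta = -delta;
       if (delta >= r->cov_increment) {
           r->last_reported = current_value; /* remember what was sent */
           return true;                      /* generate traffic       */
       }
       return false;                         /* quiet at quiescence    */
   }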
Devices will need to reference peers occasionally for sensor data or
to coordinate across systems. Normally, though, data will migrate
from the sensor level upwards through the local, area, and then
supervisory levels. Bottlenecks will typically form at the funnel
point from the area controllers to the supervisory controllers.
Initial system startup after a controlled outage or unexpected power
failure puts tremendous stress on the network and on the routing
algorithms. An FMS is composed of a myriad of control algorithms at
the room, area, zone, and enterprise layers. When these control
algorithms are at quiescence, the real-time data changes are small
and the network will not saturate. However, upon any power loss, the
control loops and real-time data quickly atrophy; recovering from a
ten-minute outage may take many hours.
Upon restart, all line-powered devices power on simultaneously.
However, due to application startup and self-tests, these devices
will attempt to join the network at random times. Empirical testing
indicates that routes acquired during startup will tend to be very
oblique, since the available neighbor lists are incomplete. This
demands an adaptive routing protocol that allows route optimization
as the network stabilizes.
7. Security Considerations

Security policies, especially for wireless encryption and device
authentication, need to be considered, particularly with regard to
the impact on the processing capabilities of, and the additional
latency incurred by, the sensors, actuators and controllers.
FMS systems are typically highly configurable in the field, and hence
the security policy is most often dictated by the type of building in
which the FMS is being installed. Single-tenant, owner-occupied
office buildings installing lighting or HVAC control are candidates
for implementing low or even no security on the LLN. Conversely,
military or pharmaceutical facilities require strong security
policies. As noted in the installation procedures, security policies
must be flexible enough to allow for no security policy during the
installation phase (prior to building occupancy), yet easily raise
the security level network wide during the commissioning phase of the
system.
7.1. Security Requirements
7.1.1. Authentication

Authentication SHOULD be optional on the LLN. Authentication SHOULD
be fully configurable on-site. Authentication policy and updates MUST
be routable over-the-air. Authentication SHOULD occur upon joining or
rejoining a network. However, once authenticated, devices SHOULD NOT
need to reauthenticate with any other devices in the LLN. Packets may
need authentication at the source and destination nodes; however,
packets routed through intermediate hops should not need to be
reauthenticated at each hop.
7.1.2. Encryption

7.1.2.1. Encryption Types

Data encryption of packets MUST be optionally supported by use of a
network-wide key and/or an application key. The network key would
apply to all devices in the LLN; the application key would apply to a
subset of devices on the LLN.
The network key and application keys would be mutually exclusive. The
routing protocol MUST allow routing a packet encrypted with an
application key through forwarding devices without requiring each
node in the route to have the application key.
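One non-normative way to read this requirement is sketched below: the
forwarding decision uses only the unencrypted routing information, so
a relay that lacks the application key can still forward the packet.
Names and layout are hypothetical.

   #include <stdbool.h>
   #include <stdint.h>

   enum key_scope {
       KEY_NETWORK,      /* network-wide key: held by every device   */
       KEY_APPLICATION   /* application key: held only by a subset   */
   };

   struct lln_packet {
       uint16_t       dest;        /* routing header stays readable   */
       enum key_scope scope;       /* which key protects the payload  */
       uint8_t        payload[64]; /* encrypted, opaque to relays     */
   };

   /* A forwarding node routes on the header alone; it decrypts the
    * payload only if it is the destination and holds the right key. */
   bool can_decrypt(const struct lln_packet *p,
                    bool has_net_key, bool has_app_key)
   {
       return (p->scope == KEY_NETWORK)     ? has_net_key
            : (p->scope == KEY_APPLICATION) ? has_app_key
            : false;
   }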
7.1.2.2. Packet Encryption

The encryption policy MUST support encryption of the payload only or
of the entire packet. Payload-only encryption would eliminate the
decryption/re-encryption overhead at every hop, providing better
real-time performance.
7.1.3. Disparate Security Policies

Due to the limited resources of an LLN, the security policy defined
within the LLN MUST be able to differ from that of the rest of the IP
network within the facility, yet packets MUST still be able to route
to or through the LLN from/to these networks.
7.1.4. Routing Security Policies to Sleeping Devices

The routing protocol MUST gracefully handle routing temporal security
updates (e.g. dynamic keys) to sleeping devices on their 'awake'
cycle to assure that sleeping devices can readily and efficiently
access the network.
8. IANA Considerations

This document includes no request to IANA.

9. Acknowledgments

In addition to the authors, J. P. Vasseur, David Culler, Ted Humpal
and Zach Shelby are gratefully acknowledged for their contributions
to this document.
10. References
10.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

   [I-D.ietf-roll-terminology]
              Vasseur, J., "Terminology in Low power And Lossy
              Networks", draft-ietf-roll-terminology-00 (work in
              progress), October 2008.
11. Appendix A: Additional Building Requirements

Appendix A contains additional building requirements that were deemed
out of scope for ROLL, yet provided ancillary substance for the
reader.

11.1. Additional Commercial Product Requirements
11.1.1. Cost
The total installed infrastructure cost, including but not limited to
the media, the required infrastructure devices (amortized across the
number of devices), and the labor to install and commission the
network, must not exceed $1.00 per foot for wired implementations.

Wireless implementations must have a total installed cost of no more
than 80% of that of wired implementations.
11.1.2. Wired and Wireless Implementations

Vendors will likely not develop separate product lines for wired and
for wireless networks. Hence, the solutions set forth must support
both wired and wireless implementations.
11.1.3. World-wide Applicability

Wireless devices must be supportable in the 2.4 GHz ISM band.
Wireless devices should be supportable in the 900 MHz and 868 MHz ISM
bands as well.
11.1.4. Support of Application Layer Protocols
11.1.4.1. BACnet Building Protocol
BACnet is an ISO-standardized, world-wide application layer IP
protocol. Devices implementing the ROLL routing protocol should
support the BACnet protocol.
11.1.5. Use of Constrained Devices
The network may be composed of a heterogeneous mix of line-powered,
battery-powered and energy-harvesting devices. The routing protocol
must support these constrained devices.
11.1.5.1. Energy Harvested Sensors
Devices utilizing available ambient energy (e.g. solar, air flow,
temperature differential) for sensing and communicating should be
supported by the solution set.
11.2. Additional Installation and Commissioning Requirements
11.2.1. Device Setup Time
Device and network setup by the installer must take no longer than 20
seconds per installed device.
11.2.2. Unavailability of an IP network

Product commissioning by an application engineer must be possible
prior to the installation of the IP network (e.g. switches, routers,
DHCP, DNS).
11.3. Additional Network Requirements

11.3.1. TCP/UDP

Connection-based and connectionless services must be supported.
11.3.2. Interference Mitigation

The network must automatically detect interference and seamlessly
migrate the channel used by the network hosts to improve
communication. Channel changes, and the nodes' response to a channel
change, must occur within 60 seconds.
11.3.3. Real-time Performance Measures

A node transmitting a 'request with expected reply' to another node
must send the message to the destination and receive the response in
not more than 120 msec. This response time should be achievable with
5 or fewer hops in each direction. This requirement assumes network
quiescence and a negligible turnaround time at the destination node.
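The figures above imply a simple per-hop forwarding budget; the
following non-normative sketch works through the arithmetic.

   #include <stdio.h>

   int main(void)
   {
       const int round_trip_ms = 120; /* request plus expected reply */
       const int hops_each_way = 5;   /* worst case assumed above    */

       /* Ten forwarding operations share the 120 ms budget; the
        * turnaround at the destination is assumed negligible.       */
       int per_hop_ms = round_trip_ms / (2 * hops_each_way);

       printf("per-hop forwarding budget: %d ms\n", per_hop_ms);
       return 0;   /* prints 12 ms */
   }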
11.3.4. Packet Reliability
In building automation, the network must meet the following minimum
reliability criteria:

   < 1%    MAC layer errors on all messages, after no more than three
           retries;

   < 0.1%  network layer errors on all messages, after no more than
           three additional retries;

   < 0.01% application layer errors on all messages.

Therefore, application layer messages will fail no more than once
every 10,000 messages.
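The layered criteria above can be restated as sketched below; the
arithmetic is illustrative only and assumes each figure is the
residual error rate left after the retries permitted at that layer.

   #include <stdio.h>

   int main(void)
   {
       /* Residual error rates after the retries at each layer.      */
       const double mac_residual = 0.01;   /* < 1%   after 3 retries */
       const double net_residual = 0.001;  /* < 0.1% after 3 more    */
       const double app_residual = 0.0001; /* < 0.01% at application */

       printf("MAC layer:         1 loss in %.0f messages\n",
              1.0 / mac_residual);
       printf("Network layer:     1 loss in %.0f messages\n",
              1.0 / net_residual);
       printf("Application layer: 1 loss in %.0f messages\n",
              1.0 / app_residual);
       /* A 0.01% residual error rate corresponds to one application-
        * layer failure per 10,000 messages.                          */
       return 0;
   }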
11.3.5. Merging Commissioned Islands

Subsystems are commissioned by various vendors at various times
during building construction. These subnetworks must seamlessly merge
into networks, and networks must seamlessly merge into internetworks,
since the end user wants a holistic view of the system.
11.3.6. Adjustable Routing Table Sizes

The routing protocol must allow constrained nodes to hold an
abbreviated set of routes; that is, the protocol should not mandate
that node routing tables be exhaustive. Routing must support
adjustable routing table sizes on a per-node basis to maximize the
limited RAM in the devices.
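A non-normative sketch of a per-node bounded routing table is given
below; the sizing constant is hypothetical and would be tuned to the
RAM available in each device class.

   #include <stdint.h>
   #include <stddef.h>

   /* Hypothetical build-time knob: small room controllers might
    * afford only a handful of entries, while better-provisioned
    * routers raise the limit.                                       */
   #ifndef MAX_ROUTES
   #define MAX_ROUTES 16
   #endif

   struct route {
       uint16_t destination;
       uint16_t next_hop;
       uint16_t cost;
       uint8_t  in_use;
   };

   /* The table is deliberately not exhaustive: when full, the node
    * can evict the highest-cost entry and rely on a default route
    * for the destinations it no longer holds.                       */
   static struct route route_table[MAX_ROUTES];

   struct route *route_lookup(uint16_t destination)
   {
       for (size_t i = 0; i < MAX_ROUTES; i++) {
           if (route_table[i].in_use &&
               route_table[i].destination == destination)
               return &route_table[i];
       }
       return NULL;   /* not held locally; use the default route     */
   }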
11.3.7. Communication Distance
A source device may be located up to 1000 feet from its destination.
Communication may need to be established between these devices
without needing to install intermediate 'communication only' devices
such as repeaters.
11.3.8. Automatic Gain Control

For wireless implementations, the device radios should incorporate
automatic transmit power regulation to maximize packet transfer and
minimize network interference regardless of network size or density.
11.3.9. IPv4 Compatibility

The routing protocol must support cost-effective intercommunication
between IPv4 and IPv6 devices.
11.3.10. Proxying for Sleeping Devices
Routing must support inbound packet caching for low-power
(battery-powered and energy-harvesting) devices when these devices
are not accessible on the network.

These devices must have a designated powered proxying device to which
packets are temporarily routed and cached until the constrained
device accesses the network.
11.3.11. Device and Network Integrity
Commercial building devices must all be periodically scanned to
assure that each device is viable and can communicate data and alarm
information as needed. Network routers should retain recent packet
flow information to minimize overall network overhead.
11.4. Additional Performance Requirements

11.4.1. Data Rate Performance

An effective data rate of 20 kbit/s is the lowest acceptable
operational data rate on the network.
11.4.2. Firmware Upgrades

To support high-speed code downloads, routing should support
transports that provide parallel downloads to targeted devices yet
guarantee packet delivery. In cases where the spatial position of the
devices requires multiple hops, the algorithm should recurse through
the network until all targeted devices have been serviced. Devices
receiving a download may cease normal operation, but upon completion
of the download they must automatically resume normal operation.
11.4.3. Prioritized Routing
Network and application packet routing prioritization is required to
assure that mission critical applications (e.g. Fire Detection)
cannot be deferred while less critical applications access the
network.
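One conventional (non-normative) realization is strict-priority
forwarding queues, sketched below with hypothetical names; this
document does not mandate any particular mechanism.

   #include <stddef.h>
   #include <stdint.h>

   enum traffic_class {
       TC_LIFE_SAFETY = 0,  /* e.g. fire detection: served first     */
       TC_NORMAL      = 1,  /* routine sensing and control traffic   */
       TC_BULK        = 2,  /* e.g. firmware downloads, trend data   */
       TC_COUNT
   };

   #define QUEUE_DEPTH 4

   struct out_queue {
       const void *pkt[QUEUE_DEPTH];
       uint8_t     head, count;
   };

   static struct out_queue queues[TC_COUNT];

   /* Strict priority: always drain the most critical non-empty
    * queue, so life-safety traffic is never deferred behind less
    * critical traffic.                                              */
   const void *next_packet_to_send(void)
   {
       for (int tc = 0; tc < TC_COUNT; tc++) {
           struct out_queue *q = &queues[tc];
           if (q->count > 0) {
               const void *p = q->pkt[q->head];
               q->head = (uint8_t)((q->head + 1) % QUEUE_DEPTH);
               q->count--;
               return p;
           }
       }
       return NULL;   /* nothing pending */
   }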
11.4.4. Route Persistence

To eliminate high network traffic in power-fail or brown-out
conditions, previously established routes should be remembered and
invoked prior to establishing new routes for those devices reentering
the network.
11.5. Additional Network Security Requirements
11.5.1. Encryption Levels
Encryption SHOULD be optional on the LLN. Encryption SHOULD be fully
configurable on-site. Encryption policy and updates SHOULD be
transmittable over-the-air and in-the-clear.
11.5.2. Security Policy Flexibility
In most facilities, authentication and encryption will be turned off
during installation.

More complex encryption policies might be put into force at
commissioning time. It MUST be possible to present new encryption
policies to all devices in the LLN over the network without needing
to visit each device.
12. Appendix B: FMS Use-Cases
Appendix B contains FMS use cases that describe the use of sensors
and controllers for various applications within a commercial building
and how they interplay with energy conservation and life-safety
applications.
The Vooruit arts centre is a restored monument dating from 1913. This
complex monument consists of over 350 different rooms, including
meeting rooms, large public halls and theaters serving as many as
2500 guests. A number of use cases regarding Vooruit are described in
the following text. The situations and needs described in these use
cases can also be found in other large automated buildings, such as
airports and hospitals.
12.1. Locking and Unlocking the Building
A member of the cleaning staff arrives first in the morning and
unlocks the building (or a part of it) from the control room. This
means that several doors are unlocked, the alarms are switched off,
the heating turns on, some lights switch on, etc. Similarly, the last
person leaving the building has to lock the building. This will lock
all the outer doors, turn the alarms on, switch off heating and
lights, etc.
The "building locked" or "building unlocked" event needs to be
delivered to a subset of all the sensors and actuators. It can be
beneficial if those field devices form a group (e.g. "all-sensors-
actuators-interested-in-lock/unlock-events). Alternatively, the area
and zone controllers could form a group where the arrival of such an
event results in each area and zone controller initiating unicast or
multicast within the LLN.
This use case is also described in the home automation routing
requirements [I-D.ietf-roll-home-routing-reqs], although the
requirement about preventing the "popcorn effect" can be relaxed a
bit in building automation. It would be nice if lights, roll-down
shutters and other actuators in the same room or area with
transparent walls execute the command around (not 'at') the same time
(a tolerance of 200 ms is allowed).
12.2. Building Energy Conservation
A room that is not in use should not be heated, air conditioned or
ventilated, and the lighting should be turned off or dimmed. In a
building with many rooms it can happen quite frequently that someone
forgets to switch off the HVAC and lighting, thereby wasting valuable
energy. To prevent this, the facility manager might program the
building according to the day's schedule. This way, lighting and HVAC
are turned on prior to the use of a room and turned off afterwards.
Using such a system, Vooruit has realized a saving of 35% on its gas
and electricity bills.
12.3. Inventory and Remote Diagnosis of Safety Equipment
Each month Vooruit is obliged to make an inventory of its safety
equipment; this task takes two working days. Each fire extinguisher
(100), fire blanket (10), fire-resistant door (120) and evacuation
plan (80) must be checked for presence and proper operation. Also,
the battery and lamp of every safety lamp must be checked before each
public event (as required by safety laws). Automating this process
using asset tracking and low-power wireless technologies would
relieve a heavy burden on working hours.
It is important that these messages are delivered very reliably and
that the power consumption of the sensors/actuators attached to this
safety equipment is kept at a very low level.
12.4. Life Cycle of Field Devices
Some field devices (e.g. smoke detectors) are replaced periodically.
The ease with which devices can be added to and deleted from the
network is very important to support augmenting sensors/actuators
during construction.
A secure mechanism is needed to remove the old device and install the
new device. New devices need to be authenticated before they can
participate in the routing process of the LLN. After the
authentication, zero-configuration of the routing protocol is
necessary.
12.5. Surveillance
Ingress and egress are real-time applications needing response times
below 500 msec, for example for cardkey authorization. It must be
possible to configure doors individually to restrict use on a
per-person basis with respect to time of day and the person entering.
While much of the surveillance application involves sensing and
actuation at the door and communication with the centralized security
system, other aspects, including tamper, door-ajar and forced-entry
notification, are to be delivered to one or more fixed or mobile user
devices within 5 seconds.
12.6. Emergency
In case of an emergency it is very important that all the visitors be
evacuated as quickly as possible. The fire and smoke detectors set
off an alarm and alert the mobile personnel on their user devices
(e.g. PDAs). All emergency exits are instantly unlocked and the
emergency lighting guides the visitors to these exits. The necessary
sprinklers are activated, and the electricity grid is monitored in
case it becomes necessary to shut down some parts of the building.
Emergency services are notified instantly.
A wireless system could bring in some extra safety features.
Locating fire fighters and guiding them through the building could be
a life-saving application.
These life-critical applications ought to take precedence over other
network traffic. Commands entered during these emergencies have to be
properly authenticated by device, user, and command request.
12.7. Public Address
It should be possible to send audio and text messages to the visitors
in the building. These messages can be very diverse, e.g. ASCII text
boards displaying the name of the event in a room, audio
announcements such as delays in the program, lost and found children,
evacuation orders, etc.
The control network is expected to be able to readily sense the
presence of an audience in an area and deliver applicable message
content.
Authors' Addresses

   Jerry Martocci
   Johnson Controls Inc.
   507 E. Michigan Street
   Milwaukee, Wisconsin, 53202
   USA

   Phone: 414.524.4010
   Email: jerald.p.martocci@jci.com


   Nicolas Riou
   Schneider Electric
   Technopole 38TEC T3
   37 quai Paul Louis Merlin
   38050 Grenoble Cedex 9
   France

   Phone: +33 4 76 57 66 15
   Email: nicolas.riou@fr.schneider-electric.com


   Pieter De Mil
   Ghent University - IBCN
   G. Crommenlaan 8 bus 201
   Ghent 9050
   Belgium

   Phone: +32-9331-4981
   Fax:   +32-9331-4899
   Email: pieter.demil@intec.ugent.be


   Wouter Vermeylen
   Arts Centre Vooruit
   ???
   Ghent 9000
   Belgium