Networking Working Group                                J. Martocci, Ed.
Internet-Draft                                     Johnson Controls Inc.
Intended status: Informational                            Pieter De Mil
Expires: August 2, 2009                           Ghent University IBCN
                                                           W. Vermeylen
                                                    Arts Centre Vooruit
                                                           Nicolas Riou
                                                     Schneider Electric
                                                        February 2, 2009

      Building Automation Routing Requirements in Low Power and Lossy
                                 Networks
                 draft-ietf-roll-building-routing-reqs-03

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 2, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Abstract

   The Routing Over Low power and Lossy network (ROLL) Working Group has
   been chartered to work on routing solutions for Low Power and Lossy
   networks (LLN) in various markets: Industrial, Commercial (Building),
   Home and Urban. Pursuant to this effort, this document defines the
   routing requirements for building automation.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

Table of Contents

   1. Terminology....................................................4
   2. Introduction...................................................4
   3. Facility Management System (FMS) Topology......................5
      3.1. Introduction..............................................5
      3.2. Sensors/Actuators.........................................6
      3.3. Area Controllers..........................................6
      3.4. Zone Controllers..........................................7
   4. Installation Methods...........................................7
      4.1. Wired Communication Media.................................7
      4.2. Device Density............................................8
         4.2.1. HVAC Device Density..................................8
         4.2.2. Fire Device Density..................................8
         4.2.3. Lighting Device Density..............................9
         4.2.4. Physical Security Device Density.....................9
      4.3. Installation Procedure....................................9
   5. Building Automation Routing Requirements......................10
      5.1. Installation.............................................10
         5.1.1. Zero-Configuration Installation.....................11
         5.1.2. Sleeping Devices....................................11
         5.1.3. Local Testing.......................................11
         5.1.4. Device Replacement..................................12
      5.2. Scalability..............................................12
         5.2.1. Network Domain......................................12
         5.2.2. Peer-to-Peer Communication..........................12
      5.3. Mobility.................................................13
         5.3.1. Mobile Device Requirements..........................13
      5.4. Resource Constrained Devices.............................14
         5.4.1. Limited Processing Power for Non-routing Devices....14
         5.4.2. Limited Processing Power for Routing Devices........14
      5.5. Addressing...............................................14
         5.5.1. Unicast/Multicast/Anycast...........................14
      5.6. Manageability............................................14
         5.6.1. Firmware Upgrades...................................15
         5.6.2. Diagnostics.........................................15
         5.6.3. Route Tracking......................................15
      5.7. Route Selection..........................................15
         5.7.1. Path Cost...........................................15
         5.7.2. Path Adaptation.....................................16
         5.7.3. Route Redundancy....................................16
         5.7.4. Route Discovery Time................................16
         5.7.5. Route Preference....................................16
   6. Traffic Pattern...............................................16
   7. Security Considerations.......................................17
      7.1. Security Requirements....................................18
         7.1.1. Authentication......................................18
         7.1.2. Encryption..........................................18
         7.1.3. Disparate Security Policies.........................19
   8. IANA Considerations...........................................19
   9. Acknowledgments...............................................19
   10. References...................................................19
      10.1. Normative References....................................19
      10.2. Informative References..................................20
   11. Appendix A: Additional Building Requirements.................20
      11.1. Additional Commercial Product Requirements..............20
         11.1.1. Wired and Wireless Implementations.................20
         11.1.2. World-wide Applicability...........................20
         11.1.3. Support of the BACnet Building Protocol............21
         11.1.4. Support of the LON Building Protocol...............21
         11.1.5. Energy Harvested Sensors...........................21
         11.1.6. Communication Distance.............................21
         11.1.7. Automatic Gain Control.............................21
         11.1.8. Cost...............................................21
         11.1.9. IPv4 Compatibility.................................21
      11.2. Additional Installation and Commissioning Requirements..22
         11.2.1. Device Setup Time..................................22
         11.2.2. Unavailability of an IT network....................22
      11.3. Additional Network Requirements.........................22
         11.3.1. TCP/UDP............................................22
         11.3.2. Data Rate Performance..............................22
         11.3.3. High Speed Downloads...............................22
         11.3.4. Interference Mitigation............................22
         11.3.5. Real-time Performance Measures.....................22
         11.3.6. Packet Reliability.................................22
         11.3.7. Merging Commissioned Islands.......................23
         11.3.8. Adjustable System Table Sizes......................23
      11.4. Prioritized Routing.....................................23
         11.4.1. Packet Prioritization..............................23
      11.5. Constrained Devices.....................................23
         11.5.1. Proxying for Constrained Devices...................24
      11.6. Reliability.............................................24
         11.6.1. Device Integrity...................................24
      11.7. Path Persistence........................................24
   12. Appendix B: FMS Use-Cases....................................24
      12.1. Locking and Unlocking the Building......................25
      12.2. Building Energy Conservation............................25
      12.3. Inventory and Remote Diagnosis of Safety Equipment......25
      12.4. Life Cycle of Field Devices.............................26
      12.5. Surveillance............................................26
      12.6. Emergency...............................................26
      12.7. Public Address..........................................27

1. Terminology

   For description of the terminology used in this specification,
   please see [I-D.ietf-roll-terminology].

2. Introduction

   Commercial buildings have been fitted with pneumatic and subsequently
   electronic communication pathways connecting sensors to their
   controllers for over one hundred years.  Recent economic and
   technical advances in wireless communication allow facilities to
   increasingly utilize a wireless solution in lieu of a wired solution,
   thereby reducing installation costs while maintaining highly reliable
   communication.

   The cost benefits and ease of installation of wireless sensors allow
   customers to further instrument their facilities with additional
   sensors, providing tighter control while yielding increased energy
   savings.

   Wireless solutions will be adapted from their existing wired
   counterparts in many of the building applications including, but not
   limited to, Heating, Ventilation, and Air Conditioning (HVAC),
   Lighting, Physical Security, Fire, and Elevator systems.  These
   devices will be developed to reduce installation costs while
   increasing installation and retrofit flexibility, as well as
   increasing the sensing fidelity to improve efficiency and building
   service quality.

   Sensing devices may be battery or mains powered.  Actuators and area
   controllers will be mains powered.  Still, a mix of wired and
   wireless sensors and actuators is envisioned within buildings.

   Facility Management Systems (FMS) are deployed in a large set of
   vertical markets including universities; hospitals; government
   facilities; Kindergarten through High School (K-12); pharmaceutical
   manufacturing facilities; and single-tenant or multi-tenant office
   buildings.  These buildings range in size from 100K sqft structures
   (5 story office buildings), to 1M sqft skyscrapers (100 stories), to
   complex government facilities such as the Pentagon.
   The described topology is meant to be the model to be used in all
   these types of environments, but clearly must be tailored to the
   building class, building tenant and vertical market being served.

   The following sections describe the sensor, actuator, area controller
   and zone controller layers of the topology.  (NOTE: The Building
   Controller and Enterprise layers of the FMS are excluded from this
   discussion since they typically deal in communication rates requiring
   LAN/WLAN communication technologies).

   Section 3 describes FMS architectures commonly installed in
   commercial buildings.  Section 4 describes installation methods
   deployed for new and remodeled construction.  Appendix B describes
   various FMS use-cases and the interaction with humans for energy
   conservation and life-safety applications.

   Sections 3, 4, and Appendix B are mainly included for educational
   purposes.  The aim of this document is to provide the set of IPv6
   routing requirements for LLNs in buildings as described in Section 5.

3. Facility Management System (FMS) Topology

3.1. Introduction

   To understand the network systems requirements of a facility
   management system in a commercial building, this document uses a
   framework to describe the basic functions and composition of the
   system. An FMS is a hierarchical system of sensors, actuators,
   controllers and user interface devices based on spatial extent.
   Additionally, an FMS may also be divided functionally across alike,
   but different building subsystems such as HVAC, Fire, Security,
   Lighting, Shutters and Elevator control systems as denoted in Figure
   1.

   Much of the makeup of an FMS is optional and installed at the behest
   of the customer.  Sensors and actuators have no standalone
   functionality. All other devices support partial or complete
   standalone functionality.  These devices can optionally be tethered
   to form a more cohesive system.  The customer requirements dictate
   the level of integration within the facility.  This architecture
   provides excellent fault tolerance since each node is designed to
   operate in an independent mode if the higher layers are unavailable.

              +------+ +-----+ +------+ +------+ +------+ +------+
Bldg App'ns   |      | |     | |      | |      | |      | |      |
              |      | |     | |      | |      | |      | |      |
Building Cntl |      | |     | |   S  | |   L  | |   S  | |  E   |
              |      | |     | |   E  | |   I  | |   H  | |  L   |
Area Control  |  H   | |  F  | |   C  | |   G  | |   U  | |  E   |
              |  V   | |  I  | |   U  | |   H  | |   T  | |  V   |
Zone Control  |  A   | |  R  | |   R  | |   T  | |   T  | |  A   |
              |  C   | |  E  | |   I  | |   I  | |   E  | |  T   |
Actuators     |      | |     | |   T  | |   N  | |   R  | |  O   |
              |      | |     | |   Y  | |   G  | |   S  | |  R   |
Sensors       |      | |     | |      | |      | |      | |      |
              +------+ +-----+ +------+ +------+ +------+ +------+

                  Figure 1: Building Systems and Devices

3.2. Sensors/Actuators

   As Figure 1 indicates an FMS may be composed of many functional
   stacks or silos that are interoperably woven together via Building
   Applications.  Each silo has an array of sensors that monitor the
   environment and actuators that effect the environment as determined
   by the upper layers of the FMS topology.   The sensors typically are
   the fringe of the network structure providing environmental data into
   the system.  The actuators are the sensors' counterparts, modifying
   the characteristics of the system based on the input sensor data and
   the applications deployed.

3.3. Area Controllers

   An area describes a small physical locale within a building,
   typically a room.  HVAC (temperature and humidity) and Lighting (room
   lighting, shades, solar loads) vendors often deploy area controllers.
   Area controllers are fed by sensor inputs that monitor the
   environmental conditions within the room.  Common sensors found in
   many rooms that feed the area controllers include temperature,
   occupancy, lighting load, solar load and relative humidity.  Sensors
   found in specialized rooms (such as chemistry labs) might include air
   flow, pressure, CO2 and CO particle sensors.  Room actuation includes
   temperature setpoint, lights and blinds/curtains.

3.4. Zone Controllers

   Zone Control supports a similar set of characteristics as the Area
   Control albeit to an extended space.  A zone is normally a logical
   grouping or functional division of a commercial building.  A zone may
   also coincidentally map to a physical locale such as a floor.

   Zone Control may have direct sensor inputs (smoke detectors for
   fire), controller inputs (room controllers for air-handlers in HVAC)
   or both (door controllers and tamper sensors for security).  Like
   area/room controllers, zone controllers are standalone devices that
   operate independently or may be attached to the larger network for
   more synergistic control.

4. Installation Methods

4.1. Wired Communication Media

   Commercial controllers are traditionally deployed in a facility using
   twisted pair serial media following the EIA-485 electrical standard
   operating nominally at 38400 to 76800 baud.  This allows runs of up
   to 5000 ft without a repeater.  With the maximum of three repeaters,
   a single communication trunk can serpentine 15000 ft.  EIA-485 is a
   multi-drop medium allowing upwards of 255 devices to be connected to
   a single trunk.

   Most sensors and virtually all actuators currently used in
   commercial buildings are "dumb", non-communicating hardwired devices.
   However, vendors are beginning to deploy sensor buses, which are
   used for smart sensors and point multiplexing.  The Fire industry
   deploys addressable fire devices, which usually use some form of
   proprietary communication wiring driven by fire codes.

4.2. Device Density

   Device density differs depending on the application and as dictated
   by the local building code requirements.  The following sections
   detail typical installation densities for different applications.

 4.2.1. HVAC Device Density

   HVAC room applications typically have sensors/actuators and
   controllers spaced about 50ft apart.  In most cases there is a 3:1
   ratio of sensors/actuators to controllers.  That is, for each room
   there is an installed temperature sensor, flow sensor and damper
   actuator for the associated room controller.

   HVAC equipment room applications are quite different.  An air handler
   system may have a single controller with upwards of 25 sensors and
   actuators within 50 ft of the air handler.  A chiller or boiler is
   also controlled with a single equipment controller instrumented with
   25 sensors and actuators.  Each of these devices would be
   individually addressed since the devices are mandated or optional as
   defined by the specified HVAC application.  Air handlers typically
   serve one or two floors of the building.  Chillers and boilers may be
   installed per floor, but many times service a wing, building or the
   entire complex via a central plant.

   These numbers are typical.  In special cases, such as clean rooms,
   operating rooms, pharmaceuticals and labs, the ratio of sensors to
   controllers can increase by a factor of three.  Tenant installations
   such as malls would opt for packaged units where much of the sensing
   and actuation is integrated into the unit.  Here a single device
   address would serve the entire unit.
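   As a rough illustration, the typical figures above (controllers
   spaced about 50 ft apart, a 3:1 ratio of sensors/actuators to
   controllers) can be turned into a back-of-the-envelope device count.
   The sketch below is illustrative only; the 25,000 sqft floor area is
   a hypothetical example, not a value taken from this document:

```python
# Back-of-the-envelope HVAC device count for one floor, using the
# typical figures above: one room controller per ~50 ft x 50 ft cell
# and a 3:1 ratio of sensors/actuators to controllers.  The floor
# area below is a hypothetical example.

def hvac_devices_per_floor(floor_area_sqft, spacing_ft=50, ratio=3):
    controllers = floor_area_sqft // (spacing_ft * spacing_ft)
    sensors_actuators = controllers * ratio
    return controllers, sensors_actuators

# Example: a 25,000 sqft office floor
print(hvac_devices_per_floor(25_000))  # (10, 30)
```

   Even at these modest densities, a mid-size building quickly reaches
   hundreds of routed devices, which motivates the scalability
   requirements later in this document.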

 4.2.2. Fire Device Density

   Fire systems are much more uniformly installed with smoke detectors
   installed about every 50 feet.  This is dictated by local building
   codes.  Fire pull boxes are installed uniformly about every 150 feet.
   A fire controller will service a floor or wing.  The fireman's fire
   panel will service the entire building and typically is installed in
   the atrium.

 4.2.3. Lighting Device Density

   Lighting is also very uniformly installed with ballasts installed
   approximately every 10 feet.  A lighting panel typically serves 48 to
   64 zones.  Wired systems typically tether many lights together into a
   single zone.  Wireless systems configure each fixture independently
   to increase flexibility and reduce installation costs.

 4.2.4. Physical Security Device Density

   Security systems are non-uniformly oriented with heavy density near
   doors and windows and lighter density in the building interior space.
   The recent influx of interior and perimeter camera systems is
   increasing the security footprint.  These cameras are atypical
   endpoints requiring upwards of 1 megabit/second (Mbit/s) data rates
   per camera, as contrasted with the few Kbit/s needed by most other
   FMS sensing equipment.  Previously, camera systems had been deployed
   on proprietary wired high-speed networks.  More recent
   implementations utilize wired or wireless IP cameras integrated to
   the enterprise LAN.

4.3. Installation Procedure

   Wired FMS installation is a multifaceted procedure depending on the
   extent of the system and the software interoperability requirement.
   However, at the sensor/actuator and controller level, the procedure
   is typically a two or three step process.

   Most FMS equipment will utilize 24 VAC power sources that can be
   installed by a low-voltage electrician.  He/she arrives on-site
   during the construction of the building prior to the sheet wall and
   ceiling installation.  This allows him/her to allocate wall space,
   easily land the equipment and run the wired controller and sensor
   networks.  The Building Controllers and Enterprise network are not
   normally installed until months later.  The electrician completes his
   task by running a wire verification procedure that shows proper
   continuity between the devices and proper local operation of the
   devices.

   Later in the installation cycle, the higher order controllers are
   installed, programmed and commissioned together with the previously
   installed sensors, actuators and controllers.  In most cases the IP
   network is still not operable.  The Building Controllers are
   completely commissioned using a crossover cable or a temporary IP
   switch together with static IP addresses.

   Once the IP network is operational, the FMS may optionally be added
   to the enterprise network.

   The wireless installation process must follow the same work flow.
   The electrician will install the products as before and run local
   functional tests between the wireless devices to assure operation
   before leaving the job.  The electrician does not carry a laptop, so
   the commissioning must be built into the device operation.

5. Building Automation Routing Requirements

   Following are the building automation routing requirements for a
   network used to integrate building sensor, actuator and control
   products.  These requirements have been limited to routing
   requirements only.  These requirements are written not presuming any
   preordained network topology, physical media (wired) or radio
   technology (wireless).  See Appendix A for additional requirements
   that have been deemed outside the scope of this document yet will
   pertain to the successful deployment of building automation systems.

5.1. Installation

   Building control systems typically are installed and tested by
   electricians having little computer knowledge and no network
   knowledge whatsoever.  These systems are often installed during the
   building construction phase before the drywall and ceilings are in
   place.  For new construction projects, the building enterprise IP
   network is not in place during installation of the building control
   system.

   In retrofit applications, pulling wires from sensors to controllers
   can be costly and in some applications (e.g. museums) not feasible.

   Local (ad hoc) testing of the sensors and room controllers must be
   completed before the tradesperson can complete his/her work.  This
   testing allows the tradesperson to verify correct client (e.g. light
   switch) and server (e.g. light ballast) operation before leaving the
   jobsite.  In traditional wired systems, correct operation of a light
   switch/ballast pair was as simple as flipping on the light switch.
   In wireless applications, the tradesperson has to assure the same
   operation, yet be sure that the light switch is associated to the
   proper ballast.

   System level commissioning will later be deployed by a more computer
   savvy person with access to a commissioning device (e.g. a laptop
   computer).  The completely installed and commissioned enterprise IP
   network may or may not be in place at this time.  Following are the
   installation routing requirements.

 5.1.1. Zero-Configuration Installation

   It MUST be possible to fully commission network devices without
   requiring any additional commissioning device (e.g. laptop).

 5.1.2. Sleeping Devices

   Sensing devices will, in some cases, utilize battery power or energy
   harvesting techniques for power and will operate mostly in a sleep
   mode to maintain power consumption within a modest budget.  The
   routing protocol MUST take into account device characteristics such
   as power budget.  If such devices provide routing, rather than merely
   host connectivity, the energy costs associated with such routing need
   to fit within the power budget.  If the mechanisms for duty cycling
   dictate very long response times or specific temporal scheduling,
   routing will need to take such constraints into account.

   Typically, batteries need to be operational for at least 5 years when
   the sensing device is transmitting its safety
   equipment. data(e.g. 64 bytes) once per
   minute.  This task takes two working days.  Each fire extinguisher
   (100), fire blanket (10), fire-resistant door (120) and evacuation
   plan (80) requires that sleeping devices must be checked for presence have minimal link
   on time when they awake and proper operation.  Also transmit onto the battery and lamp of every safety lamp network. Moreover,
   maintaining the ability to receive inbound data must be checked before each
   public event (safety laws).  Automating this process using asset
   tracking and low-power wireless technologies would reduce a heavy
   burden accomplished
   with minimal link on working hours.

   It is important that these messages are delivered very reliably and
   that the time.

   In many cases, proxies with unconstrained power consumption of the sensors/actuators attached budgets are used to this
   safety equipment is kept at
   cache the inbound data for a very low level.

3.4. Life Cycle of Field Devices

   Some field devices (e.g. smoke detectors) are replaced periodically.
   The ease by which devices are added and deleted from sleeping device until the network is
   very important device
   awakens.  In such cases, the routing protocol MUST discover the
   capability of a node to support augmenting sensors/actuators act as a proxy during
   construction.

   A secure mechanism is needed path calculation;
   deliver the packet to remove the old assigned proxy for later delivery to the
   sleeping device upon its next awake cycle.

 5.1.3. Local Testing

   The local sensors and install requisite actuators and controllers must be
   testable within the
   new device.  New locale (e.g. room) to assure communication
   connectivity and local operation without requiring other systemic
   devices.  Routing should allow for temporary ad hoc paths to be
   established that are updated as the network physically and
   functionally expands.

 5.1.4. Device Replacement

   Replacement devices need to be authenticated before they can
   participate plug-and-play with no additional setup
   compared to what is normally required for a new device.  Devices
   referencing data in the routing process of replaced device must be able to reference
   data in its replacement without being reconfigured to refer to the LLN. After
   new device.  Thus, such a reference cannot be a hardware identifier,
   such as the
   authentication, zero-configuration of MAC address, nor a hard-coded route.  If such a reference
   is an IP address, the routing protocol replacement device must be assigned the IP
   addressed previously bound to the replaced device.  Or if the logical
   equivalent of a hostname is
   necessary.

3.5. Surveillance

   Ingress and egress are real-time applications needing response times
   below 500msec, for example used for cardkey authorization.  It the reference, it must be
   possible
   translated to configure doors individually the replacement IP address.

5.2. Scalability

   Building control systems are designed for facilities from 50000 sq.
   ft. to restrict use 1M+ sq. ft.  The networks that support these systems must
   cost-effectively scale accordingly.  In larger facilities
   installation may occur simultaneously on a per
   person basis with respect to time-of-day and person entering.  While
   much of the surveillance application involves sensing and actuation
   at the door and communication with various wings or floors, yet
   the centralized security system,
   other aspects, including tamper, door ajar, and forced entry
   notification, end system must seamlessly merge.  Following are to the scalability
   requirements.

 5.2.1. Network Domain

   The routing protocol MUST be delivered able to one or more fixed or mobile user support networks with at least
   2000 nodes supporting at least 1000 routing devices within 5 seconds.

3.6. Emergency

   In case of an emergency it is very important that all the visitors be
   evacuated as quickly as possible.  The fire and smoke detectors set
   off an alarm and alert the mobile personnel on their user device 1000 non-
   routing device.  Subnetworks (e.g. PDA).  All emergency exits are instantly unlocked and the
   emergency lighting guides rooms, primary equipment) within
   the visitors network must support upwards to these exits. 255 sensors and/or actuators.

 5.2.2. Peer-to-Peer Communication

   The necessary
   sprinklers are activated and the electricity grid monitored if it
   becomes necessary to shut down some parts data domain for commercial FMS systems may sprawl across a vast
   portion of the building. Emergency
   services are notified instantly.

   A wireless system could bring physical domain.  For example, a chiller may reside in some extra safety features.
   Locating fire fighters and guiding them through
   the building could be
   a life-saving application.

   These life critical applications ought to take precedence over other
   network traffic.  Commands entered during these emergencies have facility's basement due to
   be properly authenticated by device, user, its size, yet the associated cooling
   towers will reside on the roof.  The cold-water supply and command request.

3.7. Public Address

   It should return
   pipes serpentine through all the intervening floors.  The feedback
   control loops for these systems require data from across the
   facility.

   A network device must be possible to send audio and text messages able to the visitors communicate in a peer-to-peer manner
   with any other device on the building.  These messages can be very diverse, e.g. ASCII text
   boards displaying network. Thus, the name of routing protocol MUST
   provide routes between arbitrary hosts within the event appropriate
   administrative domain.

5.3. Mobility

   Most devices are affixed to walls or installed on ceilings within
   buildings.  Hence the mobility requirements for commercial buildings
   are few.  However, in wireless environments location tracking of
   occupants and assets is gaining favor.  Asset tracking applications
   require monitoring movement with granularity of a room, audio
   announcements such as delays minute.  This soft
   real-time performance requirement is reflected in the program, lost and found children,
   evacuation orders, etc.

   The control performance
   requirements below.

 5.3.1. Mobile Device Requirements

   To minimize network is expected dynamics, mobile devices SHOULD not be able allowed to readily sense
   act as forwarding devices (routers) for other devices in the presence
   of LLN.

   A mobile device that moves within an audience LLN SHOULD reestablish end-to-
   end communication to a fixed device also in an area and deliver applicable message content.

5. Building Automation Routing Requirements

   Following are the building automation routing requirements for a
   network used to integrate building sensor, actuator and control
   products.  These requirements have been limited to routing
   requirements only.  These requirements are written not presuming any
   preordained network topology, physical media (wired) or radio
   technology (wireless).  See Appendix A for additional requirements
   that have been deemed outside the scope of this document yet will
   pertain to the successful deployment of building automation systems.

5.1. Installation

   Building control systems typically are installed and tested by
   electricians having little computer knowledge and no network
   knowledge whatsoever.  These systems are often installed during the
   building construction phase before the drywall and ceilings are in
   place.  For new construction projects, the building enterprise IP
   network is not in place during installation of the building control
   system.

   In retrofit applications, pulling wires from sensors to controllers
   can be costly and in some applications (e.g. museums) not feasible.

   Local (ad hoc) testing of sensors and room controllers must be
   completed before the tradesperson can complete his/her work.  This
   testing allows the tradesperson to verify correct client (e.g. light
   switch) and server (e.g. light ballast) operation before leaving the
   jobsite.  In traditional wired systems correct operation of a light
   switch/ballast pair was as simple as flipping on the light switch.
   In wireless applications, the tradesperson has to assure the same
   operation, yet be sure the operation of the light switch is
   associated to the proper ballast.

   System level commissioning will later be deployed using a more
   computer savvy person with access to a commissioning device (e.g. a
   laptop computer).  The completely installed and commissioned
   enterprise IP network may or may not be in place at this time.
   Following are the installation routing requirements.

 5.1.1. Zero-Configuration Installation

   It MUST be possible to fully commission network devices without
   requiring any additional commissioning device (e.g. laptop).

 5.1.2. Sleeping Devices

   Sensing devices will, in some cases, utilize battery power or energy
   harvesting techniques for power and will operate mostly in a sleep
   mode to maintain power consumption within a modest budget.  The
   routing protocol MUST take into account device characteristics such
   as power budget.  If such devices provide routing, rather than
   merely host connectivity, the energy costs associated with such
   routing need to fit within the power budget.  If the mechanisms for
   duty cycling dictate very long response times or specific temporal
   scheduling, routing will need to take such constraints into account.

   Communication to these mostly sleeping devices MUST be
   bidirectional.  Typically, batteries need to be operational for at
   least 5 years when the sensing device is transmitting its data (e.g.
   64 bytes) once per minute.  This requires that sleeping devices have
   minimal link on time when they awake and transmit onto the network.
   Moreover, maintaining the ability to receive inbound data must be
   accomplished with minimal link on time.

   In many cases, proxies with unconstrained power budgets are used to
   cache the inbound data for a sleeping device until the device
   awakens.  In such cases, the routing protocol MUST discover the
   capability of a node to act as a proxy during path calculation and
   deliver the packet to the assigned proxy for later delivery to the
   sleeping device upon its next awake cycle.
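   The proxy behavior described above (caching inbound data for a
   sleeping device and handing it over on the node's next awake cycle)
   can be sketched as follows.  The class and field names are
   illustrative assumptions for this sketch only; this draft defines no
   such API.

```python
from collections import defaultdict, deque

class ProxyCache:
    """Illustrative proxy that caches inbound packets for sleeping nodes.

    All names are hypothetical; the draft only requires that routing
    discover proxy capability and deliver packets to the assigned proxy.
    """

    def __init__(self, max_cached=16):
        self.max_cached = max_cached        # bound memory per sleeping node
        self.pending = defaultdict(deque)   # node id -> queued packets

    def cache(self, node_id, packet):
        """Store an inbound packet while the target node sleeps."""
        queue = self.pending[node_id]
        if len(queue) == self.max_cached:   # drop the oldest when full
            queue.popleft()
        queue.append(packet)

    def on_awake(self, node_id):
        """Return (and clear) all packets queued for a node that woke up."""
        return list(self.pending.pop(node_id, ()))

proxy = ProxyCache()
proxy.cache("sensor-17", b"setpoint=21C")
proxy.cache("sensor-17", b"poll")
assert proxy.on_awake("sensor-17") == [b"setpoint=21C", b"poll"]
assert proxy.on_awake("sensor-17") == []   # nothing left after delivery
```

   Bounding the per-node queue keeps the proxy's memory use predictable,
   which matters if the proxy itself is a constrained field device.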

 5.1.3. Local Testing

   The local sensors and requisite actuators and controllers must be
   testable within the locale (e.g. room) to assure communication
   connectivity and local operation without requiring other systemic
   devices.  Routing should allow for temporary ad hoc paths to be
   established that are updated as the network physically and
   functionally expands.

 5.1.4. Device Replacement

   Replacement devices need to be plug-and-play with no additional
   setup compared to what is normally required for a new device.
   Devices referencing data in the replaced device must be able to
   reference data in its replacement without being reconfigured to
   refer to the new device.  Thus, such a reference cannot be a
   hardware identifier, such as the MAC address, nor a hard-coded
   route.  If such a reference is an IP address, the replacement device
   must be assigned the IP address previously bound to the replaced
   device.  Or, if the logical equivalent of a hostname is used for the
   reference, it must be translated to the replacement IP address.

5.2. Scalability

   Building control systems are designed for facilities from 50,000 sq.
   ft. to 1M+ sq. ft.  The networks that support these systems must
   cost-effectively scale accordingly.  In larger facilities
   installation may occur simultaneously on various wings or floors,
   yet the end system must seamlessly merge.  Following are the
   scalability requirements.

 5.2.1. Network Domain

   The routing protocol MUST be able to support networks with at least
   2000 nodes supporting at least 1000 routing devices and 1000 non-
   routing devices.  Subnetworks (e.g. rooms, primary equipment) within
   the network must support upwards to 255 sensors and/or actuators.

 5.2.2. Peer-to-Peer Communication

   The data domain for commercial FMS systems may sprawl across a vast
   portion of the physical domain.  For example, a chiller may reside
   in the facility's basement due to its size, yet the associated
   cooling towers will reside on the roof.  The cold-water supply and
   return pipes serpentine through all the intervening floors.  The
   feedback control loops for these systems require data from across
   the facility.

   A network device must be able to communicate in a peer-to-peer
   manner with any other device on the network.  Thus, the routing
   protocol MUST provide routes between arbitrary hosts within the
   appropriate administrative domain.

5.3. Mobility

   Most devices are affixed to walls or installed on ceilings within
   buildings.  Hence the mobility requirements for commercial buildings
   are few.  However, in wireless environments location tracking of
   occupants and assets is gaining favor.  Asset tracking applications
   require monitoring movement with a granularity of a room and a
   minute.  This soft real-time performance requirement is reflected in
   the performance requirements below.

 5.3.1. Mobile Device Requirements

   To minimize network dynamics, mobile devices SHOULD not be allowed
   to act as forwarding devices (routers) for other devices in the LLN.

   A mobile device that moves within an LLN SHOULD reestablish end-to-
   end communication to a fixed device also in the LLN within 2
   seconds.  The network convergence time should be less than 5 seconds
   once the mobile device stops moving.

   A mobile device that moves outside of an LLN SHOULD reestablish end-
   to-end communication to a fixed device in the new LLN within 5
   seconds.  The network convergence time should be less than 5 seconds
   once the mobile device stops moving.

   A mobile device that moves outside of one LLN into another LLN
   SHOULD reestablish end-to-end communication to a fixed device in the
   old LLN within 10 seconds.  The network convergence time should be
   less than 10 seconds once the mobile device stops.

   A mobile device that moves outside of one LLN into another LLN
   SHOULD reestablish end-to-end communication to another mobile device
   in the new LLN within 20 seconds.  The network convergence time
   should be less than 30 seconds once the mobile devices stop moving.

   A mobile device that moves outside of one LLN into another LLN
   SHOULD reestablish end-to-end communication to a mobile device in
   the old LLN within 30 seconds.  The network convergence time should
   be less than 30 seconds once the mobile devices stop moving.

5.4. Resource Constrained Devices

   Sensing and actuator device processing power and memory may be 4
   orders of magnitude less (i.e. 10,000x) than many more traditional
   client devices on an IP network.  The routing mechanisms must
   therefore be tailored to fit these resource constrained devices.

 5.4.1. Limited Processing Power for Non-routing Devices

   The software size requirement for non-routing devices (e.g. sleeping
   sensors and actuators) SHOULD be implementable in 8-bit devices with
   no more than 128KB of memory.

 5.4.2. Limited Processing Power for Routing Devices

   The software size requirements for routing devices (e.g. room
   controllers) SHOULD be implementable in 8-bit devices with no more
   than 256KB of flash memory.

5.5. Addressing

   Facility Management systems require different communication schemes
   to solicit or post network information.  Broadcasts or anycasts need
   be used to resolve unresolved references within a device when the
   device first joins the network.

   As with any network communication, broadcasting should be minimized.
   This is especially a problem for small embedded devices with limited
   network bandwidth.  In many cases a global broadcast could be
   replaced with a multicast since the application knows the
   application domain.  Broadcasts and multicasts are typically used
   for network joins and application binding in embedded systems.

 5.5.1. Unicast/Multicast/Anycast

   Routing MUST support anycast, unicast, and multicast.

5.6. Manageability

   In addition to the initial installation of the system (see Section
   5.1), it is equally important for the ongoing maintenance of the
   system to be simple and inexpensive.

 5.6.1. Firmware Upgrades

   To support high speed code downloads, routing MUST support
   transports that provide parallel downloads to targeted devices yet
   guarantee packet delivery.  In cases where the spatial position of
   the devices requires multiple hops, the algorithm must recurse
   through the network until all targeted devices have been serviced.

 5.6.2. Diagnostics

   To improve diagnostics, the network layer SHOULD be able to be
   placed in and out of 'verbose' mode.  Verbose mode is a temporary
   debugging mode that provides additional communication information
   including at least the total number of routing packets sent and
   received, the number of routing failures (no route available), the
   neighbor table, and the routing table entries.

 5.6.3. Route Tracking

   Route diagnostics SHOULD be supported, providing information such as
   path quality, number of hops, and available alternate active paths
   with associated costs.  Path quality is the relative measure of
   'goodness' of the selected source to destination path as compared to
   alternate paths.  This composite value may be measured as a function
   of hop count, signal strength, available power, existing active
   paths or any other criteria deemed by ROLL as the path cost
   differentiator.
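   As a rough illustration of the verbose diagnostics mode and the kind
   of counters it might expose, consider the sketch below.  The class
   and attribute names are hypothetical, chosen only to mirror the
   items listed above; no such interface is defined by this document.

```python
class DiagnosticsMode:
    """Sketch of a node's 'verbose' diagnostics counters (all names are
    illustrative assumptions, not from any ROLL specification)."""

    def __init__(self):
        self.verbose = False
        self.packets_sent = 0
        self.packets_received = 0
        self.routing_failures = 0   # "no route available" events
        self.neighbor_table = {}    # neighbor id -> link quality
        self.routing_table = {}     # destination -> next hop

    def report(self):
        """Emit the extra information only while verbose mode is on."""
        if not self.verbose:
            return None             # quiet: no diagnostic overhead
        return {
            "sent": self.packets_sent,
            "received": self.packets_received,
            "routing_failures": self.routing_failures,
            "neighbors": len(self.neighbor_table),
            "routes": len(self.routing_table),
        }

d = DiagnosticsMode()
assert d.report() is None          # normal (quiet) operation
d.verbose = True                   # temporarily enable debugging
d.packets_sent = 42
assert d.report()["sent"] == 42
```

   Keeping verbose mode off by default matches the constrained-device
   requirements: the counters cost a few bytes, but reporting traffic
   is generated only while debugging.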

5.7. Route Selection

   Route selection determines the reliability and quality of the
   communication paths among the devices.  Optimizing the routes over
   time resolves any nuances developed at system startup when nodes are
   asynchronously adding themselves to the network.  Path adaptation
   will reduce latency if the path costs consider hop count as a cost
   attribute.

 5.7.1. Path Cost

   The routing protocol MUST support a metric of route quality and
   optimize path selection according to such metrics within constraints
   established for links along the paths.  These metrics SHOULD reflect
   metrics such as signal strength, available bandwidth, hop count,
   energy availability and communication error rates.

 5.7.2. Path Adaptation

   Communication paths MUST adapt toward the chosen metric(s) (e.g.
   signal quality) optimality in time.

 5.7.3. Route Redundancy

   The network layer SHOULD be configurable to allow secondary and
   tertiary paths to be established and used upon failure of the
   primary path.
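   A composite path cost of the kind described above, combining hop
   count, link error rates and energy availability, might be sketched
   as follows.  The metric names and weights are assumptions chosen for
   illustration, not values taken from this document; lower cost is
   better.

```python
def path_cost(hops, weights=None):
    """Compute a composite cost for one candidate path.

    `hops` is a list of per-link measurements.  The weights are
    hypothetical; a deployment would tune them per Section 5.7.1.
    """
    weights = weights or {"hop": 1.0, "etx": 2.0, "energy": 0.5}
    cost = 0.0
    for link in hops:
        cost += weights["hop"]                       # each hop adds latency
        cost += weights["etx"] * link["etx"]         # expected transmissions
        cost += weights["energy"] / link["battery"]  # penalize weak batteries
    return cost

def select_route(candidates):
    """Pick the primary path; keep the rest, sorted, as alternates."""
    ranked = sorted(candidates, key=path_cost)
    return ranked[0], ranked[1:]

short = [{"etx": 1.1, "battery": 0.9}, {"etx": 1.0, "battery": 1.0}]
long_ = [{"etx": 1.0, "battery": 1.0}] * 3
primary, alternates = select_route([long_, short])
assert primary is short            # two good links beat three hops
```

   The ranked alternates double as the secondary and tertiary paths
   called for by the route redundancy requirement.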

 5.7.4. Route Discovery Time

   Mission critical commercial applications (e.g. fire, security)
   require reliable communication and guaranteed end-to-end delivery of
   all messages in a timely fashion.  Application layer time-outs must
   be selected judiciously to cover anomalous conditions such as lost
   packets and/or path discoveries, yet not be set too large to over
   damp the network response.  If route discovery occurs during packet
   transmission time, it SHOULD NOT add more than 120 ms of latency to
   the packet delivery time.

 5.7.5. Route Preference

   Route cost algorithms SHOULD allow the installer to optionally
   select 'preferred' paths based on the known spatial layout of the
   communicating devices.

 5.7.6. Path Persistence

   To eliminate high network traffic in power-fail or brown-out
   conditions, previously established routes SHOULD be remembered and
   invoked prior to establishing new routes for those devices
   reentering the network.

5.8. Compatibility

   The building automation industry adheres to application layer
   protocol standards to achieve vendor interoperability.  These
   standards are BACnet and LON.  It is estimated that fully 80% of the
   customer bid requests received world-wide require compliance to one
   or both of these standards.  ROLL routing will therefore need to
   dovetail to these application protocols to assure acceptance in the
   building automation industry.  These protocols have been in place
   for over 10 years.  Many sites will require backwards compatibility
   with the existing legacy devices.

 5.8.1. IPv4 Compatibility

   The routing protocol MUST support intercommunication among IPv4 and
   IPv6 devices.

 5.8.2. Maximum Packet Size

   Routing MUST support packet sizes to 1526 octets (to be backwards
   compatible with 802.3 subnetworks).

6. Traffic Pattern

   The independent nature of the automation systems within a building
   plays heavy onto the network traffic patterns.  Much of the real-
   time sensor data stays within the local environment.  Alarming and
   other event data will percolate to higher layers.

   Systemic data may be either polled or event based.  Polled data
   systems will generate a uniform packet load on the network.  This
   architecture has proven not scalable.  Most vendors have developed
   event based systems which pass data on event.  These systems are
   highly scalable and generate low data on the network at quiescence.
   Unfortunately, these systems will generate a heavy load on startup
   since all the initial data must migrate to the controller level.
   They also will generate a temporary but heavy load during firmware
   upgrades.  This latter load can normally be mitigated by performing
   these downloads during off-peak hours.

   Devices will need to reference peers occasionally for sensor data or
   to coordinate across systems.  Normally, though, data will migrate
   from the sensor level upwards through the local, area, then
   supervisory level.  Bottlenecks will typically form at the funnel
   point from the area controllers to the supervisory controllers.

   Initial system startup after a controlled outage or unexpected power
   failure puts tremendous stress on the network and on the routing
   algorithms.  An FMS system is comprised of a myriad of control
   algorithms at the room, area, zone, and enterprise layers.  When
   these control algorithms are at quiescence, the real-time data
   changes are small and the network will not saturate.  However, upon
   any power loss, the control loops and real-time data quickly
   atrophy.  A ten minute outage may take many hours to regain control.

   Upon restart all lines-powered devices power-on instantaneously.
   However, due to application startup and self tests, these devices
   will attempt to join the network randomly.  Empirical testing
   indicates that routing paths acquired during startup will tend to be
   very oblique since the available neighbor lists are incomplete.
   This demands an adaptive routing protocol to allow for path
   optimization as the network stabilizes.
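   The difference between polled and event-based loads can be shown
   with back-of-the-envelope arithmetic.  The device counts and event
   rates below are assumptions chosen for illustration, not figures
   from this document.

```python
def polled_packets(num_devices, polls_per_hour):
    """Every device answers every poll: load grows uniformly with
    device count, regardless of whether anything changed."""
    return num_devices * polls_per_hour

def event_packets(num_devices, events_per_device_hour):
    """Devices report only on change-of-value events, so the load at
    quiescence reflects actual activity."""
    return num_devices * events_per_device_hour

# Hypothetical example: 1000 sensors, polled once a minute versus
# roughly 2 change-of-value events per device per hour.
assert polled_packets(1000, 60) == 60000
assert event_packets(1000, 2) == 2000   # 30x less traffic at quiescence
```

   The event-based advantage disappears at startup, when every device
   must publish its initial values at once, which is exactly the
   restart surge described above.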

7. Security Considerations

   Security policies, especially wireless encryption and device
   authentication, need to be considered, especially with concern to
   the impact on the processing capabilities and additional latency
   incurred on the sensors, actuators and controllers.

   FMS systems are typically highly configurable in the field and hence
   the security policy is most often dictated by the type of building
   in which the FMS is being installed.  Single tenant owner occupied
   office buildings installing lighting or HVAC control are candidates
   for implementing low or even no security on the LLN.
   Antithetically, military or pharmaceutical facilities require strong
   security policies.  As noted in the installation procedures above,
   security policies must be facile to allow no security during the
   installation phase (prior to building occupancy), yet easily raise
   the security level network wide during the commissioning phase of
   the system.

7.1. Security Requirements

 7.1.1. Authentication

   Authentication SHOULD be optional on the LLN.  Authentication SHOULD
   be fully configurable on-site.  Authentication policy and updates
   MUST be transmittable over-the-air.  Authentication SHOULD occur
   upon joining or rejoining a network.  However, once authenticated,
   devices SHOULD not need to reauthenticate themselves with any other
   devices in the LLN.  Packets may need authentication at the source
   and destination nodes; however, packets routed through intermediate
   hops should not need to be reauthenticated at each hop.

 7.1.2. Encryption

7.1.2.1. Encryption Levels

   Encryption SHOULD be optional on the LLN.  Encryption SHOULD be
   fully configurable on-site.  Encryption policy and updates SHOULD be
   transmittable over-the-air and in-the-clear.

7.1.2.2. Security Policy Flexibility

   In most facilities authentication and encryption will be turned off
   during installation.

   More complex encryption policies might be put in force at
   commissioning time.  New encryption policies MUST be allowed to be
   presented to all devices in the LLN over the network without needing
   to visit each device.

7.1.2.3. Encryption Types

   Data encryption of packets MUST optionally be supported by use of
   either a network wide key and/or application key.  The network key
   would apply to all devices in the LLN.  The application key would
   apply to a subset of devices on the LLN.

   The network key and application keys would be mutually exclusive.
   Forwarding devices in the mesh must be able to forward a packet
   encrypted with an application key without needing to have the
   application key.

7.1.2.4. Packet Encryption

   The encryption policy MUST support encryption of the payload only or
   the entire packet.  Payload only encryption would eliminate the
   decryption/re-encryption overhead at every hop.

 7.1.3. Disparate Security Policies

   Due to the limited resources of an LLN, the security policy defined
   within the LLN MUST be able to differ from that of the rest of the
   IP network within the facility, yet packets MUST still be able to
   route through the LLN from/to these networks.
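   The key scoping and payload-only encryption described above can be
   illustrated with a toy model.  The XOR 'cipher' is a deliberate
   stand-in for a real algorithm, and all names are hypothetical; the
   point is only that a forwarding node can relay an application-key-
   encrypted payload it cannot read.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher, used only to illustrate key
    scope; XOR with the key is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def forward(packet: dict) -> dict:
    """A forwarding node relays the packet without the application
    key: the payload stays opaque; only addressing fields change."""
    relayed = dict(packet)
    relayed["hops"] = packet.get("hops", 0) + 1
    return relayed   # payload bytes forwarded unmodified

app_key = b"zone-3-key"       # shared only by a subset of devices
packet = {"dst": "controller-1",
          "payload": xor_crypt(b"temp=20.5", app_key)}  # payload-only

relayed = forward(packet)     # this router holds no application key
assert xor_crypt(relayed["payload"], app_key) == b"temp=20.5"
assert relayed["hops"] == 1
```

   Because only the payload is encrypted, intermediate hops never pay
   the decryption/re-encryption cost, matching the motivation given for
   payload-only encryption.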

8. IANA Considerations

   This document includes no request to IANA.

9. Acknowledgments

   In addition to the authors, J. P. Vasseur, David Culler, Ted Humpal
   and Zach Shelby are gratefully acknowledged for their contributions
   to this document.

   This document was prepared using 2-Word-v2.0.template.dot.

10. References

10.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

10.2. Informative References

   [1]   [I-D.ietf-roll-home-routing-reqs] Brandt, A., Buron, J., and
         G. Porcu, "Home Automation Routing Requirements in Low Power
         and Lossy Networks", draft-ietf-roll-home-routing-reqs-06
         (work in progress), November 2008.

   [2]   [I-D.ietf-roll-indus-routing-reqs] Pister, K., Thubert, P.,
         Dwars, S., and T. Phinney, "Industrial Routing Requirements in
         Low Power and Lossy Networks", draft-ietf-roll-indus-routing-
         reqs-03 (work in progress), December 2008.

   [3]   [I-D.ietf-roll-terminology] Vasseur, J., "Terminology in Low
         power And Lossy Networks", draft-ietf-roll-terminology-00
         (work in progress), October 2008.

   [4]   "RS-485 EIA Standard: Standard for Electrical Characteristics
         of Generators and Receivers for use in Balanced Digital
         Multipoint Systems", 1998.

   [5]   "BACnet: A Data Communication Protocol for Building Automation
         and Control Networks", ANSI/ASHRAE Standard 135-2004, 2004.

11. Appendix A: Additional Building Requirements

   Appendix A contains additional informative building requirements that
   were deemed out of scope for those devices reentering the network.

5. Traffic Pattern routing document yet provided
   ancillary informational substance to the reader.  The independent nature requirements
   should be addressed by ROLL or other WGs before adoption by the
   building automation industry.

11.1. Additional Commercial Product Requirements

11.1.1.  Wired and Wireless Implementations

   Solutions must support both wired and wireless implementations.

11.1.2. World-wide Applicability

   Wireless devices must be supportable in the 2.4 GHz ISM band.
   Wireless devices should be supportable in the 900 MHz and 868 MHz
   ISM bands as well.

11.1.3.  Support of the BACnet Building Protocol

   Devices implementing the ROLL features should support the BACnet
   protocol.

11.1.4.  Support of the LON Building Protocol

   Devices implementing the ROLL features should support the LON
   protocol.

11.1.5.  Energy Harvested Sensors

   RFDs should be designed to operate using viable energy harvesting
   techniques such as ambient light, mechanical action, solar load,
   air pressure, and differential temperature.

11.1.6.  Communication Distance

   A source device may be up to 1000 feet from its destination.
   Communication may need to be established between these devices
   without needing to install other intermediate 'communication only'
   devices such as repeaters.

11.1.7.  Automatic Gain Control

   For wireless implementations, the device radios should incorporate
   automatic transmit power regulation to maximize packet transfer and
   minimize network interference regardless of network size or density.

 11.1.8.   Cost

   The total installed infrastructure cost, including but not limited
   to the media and required infrastructure devices (amortized across
   the number of devices) plus the labor to install and commission the
   network, must not exceed $1.00/foot for wired implementations.

   Wireless implementations (total installed cost) must cost no more
   than 80% of wired implementations.
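
   As a quick sanity check on the two cost ceilings above, the budget
   for a hypothetical run can be computed directly (the 10,000-foot
   length is an invented example, not part of the requirement):

```python
# Illustrative only: derive the wired and wireless budgets implied by
# the $1.00/foot ceiling and the 80%-of-wired wireless ceiling.
wired_ceiling_per_foot = 1.00   # $/foot, total installed cost
wireless_ratio = 0.80           # wireless must cost <= 80% of wired

feet = 10_000                   # assumed run length for the example
wired_budget = feet * wired_ceiling_per_foot
wireless_budget = wired_budget * wireless_ratio
print(wired_budget, wireless_budget)   # 10000.0 8000.0
```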

 11.1.9. IPv4 Compatibility

   The routing protocol must support cost-effective intercommunication
   among IPv4 and IPv6 devices.

11.2.    Additional Installation and Commissioning Requirements

 11.2.1.   Device Setup Time

   Network setup by the installer must take no longer than 20 seconds
   per device installed.

11.2.2.  Unavailability of an IT network

   Product commissioning must be performed by an application engineer
   prior to the installation of the IT network.

11.3.    Additional Network Requirements

 11.3.1.   TCP/UDP

   Connection-based and connectionless services must be supported.

 11.3.2.   Data Rate Performance

   An effective data rate of 20 kbits/s is the lowest acceptable
   operational data rate on the network.

 11.3.3.   High Speed Downloads

   Devices receiving a download MAY cease normal operation, but upon
   completion of the area controllers to download must automatically resume normal
   operation.

11.3.4.  Interference Mitigation

   The network must automatically detect interference and seamlessly
   migrate the network hosts' channel to improve communication.
   Channel changes and node responses to the channel change must occur
   within 60 seconds.

11.3.5.  Real-time Performance Measures

   A node transmitting a 'request with expected reply' to another node
   must send the message to the destination and receive the response
   in not more than 120 msec.  This response time should be achievable
   with 5 or fewer hops in each direction.  This requirement assumes
   network quiescence and hence a negligible turnaround time at the
   destination node.
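
   A back-of-envelope reading of the figures above (assuming the
   budget divides evenly across a worst-case 5 hops out and 5 hops
   back):

```python
# Per-hop latency budget implied by a 120 msec round trip over at most
# 5 hops in each direction, with negligible turnaround at the
# destination (as the requirement assumes).
round_trip_budget_ms = 120
hops_each_way = 5
per_hop_budget_ms = round_trip_budget_ms / (2 * hops_each_way)
print(per_hop_budget_ms)   # 12.0 (msec per hop)
```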

 11.3.6.   Packet Reliability

   Reliability must meet the following minimum criteria:

   < 1% MAC layer errors on all messages, after no more than three
   retries;

   < 0.1% network layer errors on all messages, after no more than
   three additional retries;

   < 0.01% application layer errors on all messages.

   Therefore, application layer messages will fail no more than once
   every 10,000 messages.
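
   The layered criteria compose arithmetically; a small check of the
   application-layer figure (illustrative only):

```python
# 0.01% application-layer errors means an expected failure rate of
# 1e-4, i.e. about one failed message per 10,000.
app_error_rate = 0.0001          # < 0.01% application layer errors

messages = 100_000
print(messages * app_error_rate)      # 10.0 expected failures
print(round(1 / app_error_rate))      # 10000 messages per failure
```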

 11.3.7.   Merging Commissioned Islands

   Subsystems are commissioned by various vendors at various times
   during building construction.  These subnetworks must seamlessly
   merge into networks, and networks must seamlessly merge into
   internetworks, since the end user wants a holistic view of the
   system.

 11.3.8.   Adjustable System Table Sizes

   Routing must support adjustable router table entry sizes on a per
   node basis to maximize the limited RAM in the devices.

11.4. Prioritized Routing

   Network and application prioritization is required to assure that
   mission critical applications (e.g. Fire Detection) cannot be
   deferred while less critical applications access the network.

11.4.1. Packet Prioritization

   Routers must support quality of service prioritization to assure
   timely response for critical FMS packets.

11.5. Constrained Devices

   The network may be composed of a heterogeneous mix of full, battery
   and energy harvested devices.  The routing protocol must support
   these constrained devices.

11.5.1. Proxying for Constrained Devices

   Routing must support in-bound packet caches for low-power (battery
   and energy harvested) devices when these devices are not accessible
   on the network.

   These devices must have a designated powered proxying device to
   which packets will be temporarily routed and cached until the
   constrained device accesses the network.
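
   The caching behaviour described above can be sketched as follows;
   this is a hypothetical illustration (the class name, bounded queue
   size and API are invented for the sketch, not taken from any
   specification):

```python
from collections import defaultdict, deque

class ProxyCache:
    """Powered proxy that holds in-bound packets for a sleeping
    (battery or energy-harvested) device and flushes them when the
    device next accesses the network."""

    def __init__(self, max_per_device=16):
        self.max_per_device = max_per_device   # bound the cache RAM
        self.cache = defaultdict(deque)

    def route(self, dest, packet, dest_awake):
        """Forward immediately if the device is awake, else cache."""
        if dest_awake:
            return [packet]
        queue = self.cache[dest]
        if len(queue) >= self.max_per_device:
            queue.popleft()                    # drop the oldest packet
        queue.append(packet)
        return []

    def on_wake(self, dest):
        """Deliver and clear everything cached for a waking device."""
        return list(self.cache.pop(dest, deque()))

proxy = ProxyCache()
proxy.route("sensor-7", "set-point", dest_awake=False)
proxy.route("sensor-7", "poll", dest_awake=False)
print(proxy.on_wake("sensor-7"))   # ['set-point', 'poll']
```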

11.6. Reliability

 11.6.1. Device Integrity

   Commercial building devices must all be periodically scanned to
   assure that each device is viable and can communicate data and
   alarm information as needed.  Network routers should maintain
   previous packet flow information temporally to minimize overall
   network overhead.

11.7. Path Persistence

   To eliminate high network traffic in power-fail or brown-out
   conditions, previously established routes SHOULD be remembered and
   invoked prior to establishing new routes for those devices reentering
   the ROLL features should support network.

12. Appendix B: FMS Use-Cases

   Appendix B contains FMS use-cases that describe the use of sensors
   and controllers for various applications within a commercial
   building and how they interplay with energy conservation and life-
   safety applications.

   The Vooruit arts centre is a restored monument which dates from 1913.
   This complex monument consists of over 350 different rooms including
   meeting rooms, large public halls and theaters serving as many as
   2500 guests.  A number of use cases regarding Vooruit are described
   in the following text.  The situations and needs described in these
   use cases can also be found in all automated large buildings, such
   as airports and hospitals.

12.1. Locking and Unlocking the Building

   A member of the cleaning staff arrives first in the morning,
   unlocking the building (or a part of it) from the control room.
   This means that several doors are unlocked; the alarms are switched
   off; the heating turns on; some lights switch on, etc.  Similarly,
   the last person leaving the building has to lock the building.
   This will lock all the outer doors, turn the alarms on, and switch
   off heating and lights, etc.

   The "building locked" or "building unlocked" event needs to be
   delivered to a subset of all the sensors and actuators.  It can be
   beneficial if those field devices form a group (e.g. "all-sensors-
   actuators-interested-in-lock/unlock-events").  Alternatively, the
   area and zone controllers could form a group where the arrival of
   such an event results in each area and zone controller initiating
   unicast or multicast within the LLN.

   This use case is also described in the home automation requirements
   [I-D.ietf-roll-home-routing-reqs], although the requirement about
   preventing the "popcorn effect" can be relaxed a bit in building
   automation.  It would be nice if lights, roll-down shutters and
   other actuators in the same room or area with transparent walls
   execute the command around (not 'at') the same time (a tolerance of
   200 ms is allowed).

12.2. Building Energy Conservation

   A room that is not in use should not be heated, air conditioned or
   ventilated, and the lighting should be turned off or dimmed.  In a
   building with many rooms it can happen quite frequently that
   someone forgets to switch off the HVAC and lighting, thereby
   wasting valuable energy.  To prevent this occurrence, the facility
   manager might program the building according to the day's schedule.
   This way lighting and HVAC are turned on prior to the use of a
   room, and turned off afterwards.  Using such a system, Vooruit has
   realized a saving of 35% on its gas and electricity bills.

12.3. Inventory and Remote Diagnosis of Safety Equipment

   Each month Vooruit is obliged to make an inventory of its safety
   equipment.  This task takes two working days.  Each fire
   extinguisher (100), fire blanket (10), fire-resistant door (120)
   and evacuation plan (80) must be checked for presence and proper
   operation.  Also the battery and lamp of every safety lamp must be
   checked before each public event (safety laws).  Automating this
   process using asset tracking and low-power wireless technologies
   would reduce a heavy burden on working hours.

   It is important that these messages are delivered very reliably and
   that the power consumption of the sensors/actuators attached to
   this safety equipment is kept at a very low level.

12.4. Life Cycle of Field Devices

   Some field devices (e.g. smoke detectors) are replaced
   periodically.  The ease by which devices are added and deleted from
   the network is very important to support augmenting
   sensors/actuators during building construction.

   A secure mechanism is needed to remove the old device and install
   the new device.  New devices need to be authenticated before they
   can participate in the routing process of the LLN.  After the
   authentication, zero-configuration of the routing protocol is
   necessary.

12.5. Surveillance

   Ingress and egress are real-time applications needing response
   times below 500 msec, for example for cardkey authorization.  It
   must be possible to configure doors individually to restrict use on
   a per person basis with respect to time-of-day and person entering.
   While much of the surveillance application involves sensing and
   actuation at the door and communication with the centralized
   security system, other aspects, including tamper, door ajar, and
   forced entry notification, are to be delivered to one or more fixed
   or mobile user devices within 5 seconds.

12.6. Emergency

   In case of an emergency it is very important that all the visitors be
   evacuated as quickly as possible.  The fire and smoke detectors set
   off an alarm and alert the mobile personnel on their user device
   (e.g. PDA).  All emergency exits are instantly unlocked and the
   emergency lighting guides the visitors to these exits.  The necessary
   sprinklers are activated and the electricity grid monitored if it
   becomes necessary to shut down some parts of the building. Emergency
   services are notified instantly.

   A wireless system could bring in some extra safety features.
   Locating fire fighters and guiding them through the building could
   be a life-saving application.

   These life critical applications ought to take precedence over
   other network traffic.  Commands entered during these emergencies
   have to be properly authenticated by device, user, and command
   request.

12.7. Public Address

   It should be possible to send audio and text messages to the
   visitors in the building.  These messages can be very diverse, e.g.
   ASCII text boards displaying the name of the event in a room, audio
   announcements such as delays in the program, lost and found
   children, evacuation orders, etc.

   The control network is expected to be able to readily sense the
   presence of an audience in an area and deliver applicable message
   content.


Authors' Addresses

   Jerry Martocci
   Johnson Controls Inc.
   507 E. Michigan Street
   Milwaukee, Wisconsin, 53202
   USA

   Phone: 414.524.4010
   Email: jerald.p.martocci@jci.com

   Nicolas Riou
   Schneider Electric
   Technopole 38TEC T3
   37 quai Paul Louis Merlin
   38050 Grenoble Cedex 9
   France

   Phone: +33 4 76 57 66 15
   Email: nicolas.riou@fr.schneider-electric.com

   Pieter De Mil
   Ghent University - IBCN
   G. Crommenlaan 8 bus 201
   Ghent  9050
   Belgium

   Phone: +32-9331-4981
   Fax:   +32-9331-4899
   Email: pieter.demil@intec.ugent.be

   Wouter Vermeylen
   Arts Centre Vooruit
   ???
   Ghent  9000
   Belgium

   Phone: ???
   Fax:   ???
   Email: wouter@vooruit.be