Apply Deep Learning to Building-Automation IoT Sensors

Real-time systems such as smart sensors in commercial buildings are beginning to take advantage of the richer computation that deep-learning-based technology provides.
In building automation, sensors such as motion detectors, photocells, and temperature, CO2, and smoke detectors are used primarily for energy savings and safety. Next-generation buildings, however, are intended to be significantly more intelligent, with the capability to analyze space utilization, monitor occupants’ comfort, and generate business intelligence.

To support such robust features, building-automation infrastructure requires considerably richer information that details what’s happening across the building space. Since current sensing solutions are limited in their ability to address this need, a new generation of smart sensors (see figure below) is required to enhance the accuracy, reliability, flexibility, and granularity of the data they provide.

Data Analytics at the Sensor Node

In the new era of the Internet of Things (IoT), there arises the opportunity to introduce a new approach to building automation that decentralizes the architecture and pushes the analytics processing to the edge (the sensor unit) instead of the cloud or a central server. Commonly referred to as edge computing, or fog computing, this approach provides real-time intelligence and enhanced control agility while simultaneously offloading the heavy communications traffic.

Continued innovation in computing technology has yielded inexpensive and energy-efficient embedded processors that can handle such data processing. In principle, this makes it possible to process the data at the sensor level and send only the final summary of the analysis over the network. This approach, if implemented, will yield a far smaller volume of network traffic and a shorter response time. The major question, however, is what kind of data-analysis approach is best suited for these embedded analytics sensors.
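To make the idea concrete, here's a minimal sketch of edge-side summarization: the sensor reduces a window of raw readings to a compact summary locally, and only that summary would cross the network. The function names, threshold, and message format are illustrative, not any particular product's API.

```python
def summarize(readings, threshold=0.5):
    """Reduce a window of raw motion scores to one compact occupancy summary."""
    active = [r for r in readings if r > threshold]
    return {
        "occupied": bool(active),                      # any activity at all?
        "activity_ratio": len(active) / len(readings), # fraction of active samples
        "peak": max(readings),                         # strongest raw score
    }

# One second of raw samples stays on the device...
raw_window = [0.1, 0.2, 0.9, 0.8, 0.1, 0.7, 0.3, 0.2]

# ...and only this small dict would be transmitted.
print(summarize(raw_window))
# → {'occupied': True, 'activity_ratio': 0.375, 'peak': 0.9}
```

Eight raw samples collapse into three numbers; at realistic frame rates the savings in bandwidth are correspondingly larger.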


Rule-Based or Data-Driven?

The challenges associated with rich data analysis can be addressed in different ways. Conventional rule-based systems are ostensibly easier to analyze. However, this advantage is negated as the system evolves, with patches of rules being stacked upon each other to account for the proliferation of new rule exceptions, resulting in a hard-to-decipher tangle of coded rules.

Because the hard work of rule creation and modification falls to human programmers, rule-based systems suffer from compromised performance. They have been shown to be slow to adapt to new types of data, such as data from an upgraded sensor or from a new sensor providing previously unutilized data. Rule-based systems can also fail to adapt to a changing domain, e.g., a new furniture layout or new lighting sources.
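The "stacked patches" problem is easy to illustrate. The sketch below shows a hypothetical lighting rule accumulating exceptions as the building changes; every rule and threshold here is invented for illustration.

```python
def lights_on(motion, lux, hour, zone):
    """Hypothetical rule-based lighting decision, with accumulated patches."""
    # Original rule: motion in a dark space turns the lights on.
    result = motion and lux < 200

    # Patch 1: after a furniture change, zone B's photocell reads high.
    if zone == "zone_b" and motion and lux < 350:
        result = True

    # Patch 2: new skylights cause false daylight readings before 9 a.m.
    if hour < 9 and motion:
        result = True

    # Patch 3: the cleaning crew triggers motion at night; keep lights off.
    if hour >= 22 or hour < 5:
        result = False

    return result
```

Each patch was locally reasonable, but their interactions (does Patch 3 override Patch 2 for an early-morning cleaner?) are already hard to reason about, and real systems accumulate far more than three.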




PointGrab’s CogniPoint sensor utilizes deep-learning-based technology to track movement of building occupants to provide energy savings in commercial buildings.

These deficiencies can be readily overcome with data-driven “machine-learning” systems, which have proven to be superior tools for rich data analysis, especially when cameras are employed at the sensing layer. Machine-learning systems transfer the labor of defining effective rules from the engineers to the algorithm. As a result, the engineers are only tasked with defining the features of the raw data that hold relevant information.

Once the features have been defined, the rules and/or formulas that use these features are learned automatically by the algorithm. For this to work, the algorithm must have access to a multitude of data samples labeled with the desired outcomes, so that it can properly adapt itself.

When the rules are implemented within the sensor, it runs a two-stage, repeating process. In stage one, the human-defined features are extracted from the sensor data. In stage two, the learned rules are applied to perform the task at hand.
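The two-stage loop can be sketched as follows. Stage one computes human-defined features from a frame of sensor data; stage two applies a rule whose weights would come from training. The feature names, the linear form of the rule, and the weight values are all illustrative stand-ins.

```python
import statistics

def extract_features(frame):
    """Stage 1: human-defined features from a window of per-region pixel deltas."""
    return {
        "mean_change": statistics.mean(frame),
        "max_change": max(frame),
    }

def apply_learned_rule(features, weights, bias):
    """Stage 2: a linear rule whose weights were fit from labeled samples."""
    score = (weights["mean_change"] * features["mean_change"]
             + weights["max_change"] * features["max_change"] + bias)
    return score > 0  # True = "occupied"

# These weights stand in for values a training algorithm would produce.
learned = {"mean_change": 4.0, "max_change": 1.5}

frame = [0.0, 0.1, 0.6, 0.4]          # simulated frame differences
feats = extract_features(frame)        # stage one
print(apply_learned_rule(feats, learned, bias=-1.0))  # stage two → True
```

The engineer chose *which* features to compute; the training procedure chose *how* to weigh them.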




The Deep-Learning Approach

Within the machine-learning domain, “deep learning” is emerging as a superior new approach that even alleviates engineers from the task of defining features. With deep learning, based on the numerous labeled samples, the algorithm determines for itself an end-to-end computation that extends from the raw sensor data all the way to the final output. The algorithm must discern the correct features and how best to compute them.

This ultimately fosters a deeper level of computation that’s much more effective than any rule or formula used by traditional machine learning. Typically, a neural network will perform this computation, leveraging a complex computational circuit with millions of parameters that the algorithm will tune until the right function is pinpointed.
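The shape of that computation can be seen in miniature below: a two-layer network mapping raw inputs straight to an output probability. The weights here are arbitrary stand-ins and the network is tiny; a real deployment would have millions of parameters, all tuned by training rather than chosen by hand.

```python
import math

def relu(x):
    """Standard rectified-linear activation."""
    return max(0.0, x)

def forward(x, w1, b1, w2, b2):
    """One end-to-end pass: raw inputs -> hidden layer -> output probability."""
    hidden = [relu(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes to (0, 1)

# Arbitrary illustrative parameters (training would determine these).
w1 = [[0.8, -0.5], [0.3, 0.9]]   # input -> hidden weights
b1 = [0.0, -0.2]
w2 = [1.2, -0.7]                 # hidden -> output weights
b2 = 0.1

print(forward([0.5, 0.4], w1, b1, w2, b2))  # a probability in (0, 1)
```

The key point is that nothing in `forward` encodes a human-defined feature: the hidden layer's "features" are whatever the tuned weights make them.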

The implications of deep learning on system engineering are profound, and the contrast with rule-based systems is significant. In the rule-based system world, and even with traditional machine learning, the system engineer requires extensive information about the domain in order to build a good system. In the deep-learning world, this is no longer necessary.

With the arrival of the IoT and the proliferation of data across the network, deep learning allows for faster iteration on new data sources and can use them without requiring intimate domain knowledge. When applying a deep-learning approach, the engineer’s main focus is to define the neural network’s core architecture. The network must be large enough to have the capacity to optimize to a useful computation, but simple enough so that available processing resources aren’t outstripped.

A neural network can be tailored to fill any given time budget, ensuring the available processing power is fully exploited. If the computational budget rises and there’s more time to run the calculation, a larger network can be trained to use the new budget.
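Fitting a network to a budget is, at its simplest, a counting exercise. The sketch below estimates multiply-accumulate (MAC) operations per inference for a single-hidden-layer fully connected network and finds the widest hidden layer that fits a per-frame budget; the layer sizes and budget figure are hypothetical.

```python
def macs_per_inference(n_in, n_hidden, n_out):
    """Multiply-accumulates for one forward pass of a one-hidden-layer net."""
    return n_in * n_hidden + n_hidden * n_out

def widest_hidden(n_in, n_out, budget_macs):
    """Largest hidden width whose per-inference cost stays within budget."""
    width = 1
    while macs_per_inference(n_in, width + 1, n_out) <= budget_macs:
        width += 1
    return width

# e.g. a 64-input, 4-output model with a 100,000-MAC budget per frame:
print(widest_hidden(64, 4, 100_000))  # → 1470
```

If the processor or the time budget grows, rerunning the same sizing calculation yields a wider (higher-capacity) network, which is the scaling behavior described above.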

Once the architecture is defined, it stays fixed while the parameters of the neural network are tuned. This process can take days or even weeks, even on the highest-performance machines. The computation itself, however, extending from raw inputs to output, takes a fraction of a second and remains exactly the same throughout the process.
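That split between a fixed architecture and slowly tuned parameters can be shown with the smallest possible "network": one weight and one bias fit by gradient descent. The toy data, learning rate, and iteration count are illustrative.

```python
# Toy labeled samples following y = 2x; a real system would have many
# thousands of labeled sensor frames instead.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]

w, b = 0.0, 0.0   # the parameters being tuned
lr = 0.1          # learning rate

for _ in range(500):                 # the slow part: many passes over the data
    for x, y in data:
        pred = w * x + b             # the fast part: one forward pass
        err = pred - y
        w -= lr * err * x            # gradient step on each parameter
        b -= lr * err

print(round(w, 2), round(b, 2))      # → 2.0 0.0
```

The structure of `pred = w * x + b` (the architecture) never changes across all 500 epochs; only the values of `w` and `b` do, which is exactly the fixed-architecture, tuned-parameters regime described above.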

The scalability and flexibility of deep learning distinguish it as a powerful approach for a real-time system like a smart sensor in the continuously changing environment of commercial buildings. Another advantage of neural networks is that they’re extremely portable, and can be very easily built and customized using available software libraries. This allows the same network to run on different types of devices.

Moreover, such portability allows for quick turnarounds between learning sessions, which typically run on powerful machines, and deployment, so engineers can observe how the neural network behaves when it’s running on embedded processors.
