
Edge Computing

CHALLENGE: Using cloud-based image processing can increase latency and network traffic. It can also pose privacy and security risks.

Solution:
Process image data at its source into actionable information using edge computing. FLIR can help with cameras that offer:

• Reliable and detailed image capture in challenging conditions
• IEEE 1588 compatibility for easy camera synchronization
• Full SDK support for ARM and x64 platforms



Edge Computing on Embedded Systems with FLIR cameras

A Quick Introduction to Edge Computing

Edge computing is a network model in which data processing occurs at the edge of the network, near the source of the data. It can eliminate the need to send image data to a central server or cloud service for processing. For example, an edge computing system for road toll collection would carry out licence plate recognition on a low-power single-board computer close to the camera. Only licence plate numbers would be transmitted, not whole images of vehicles or the road. Today, this is made possible by affordable and powerful single-board computers. FLIR’s Spinnaker SDK supports x64 and ARM hardware, as well as many third-party vision libraries, so you don’t need to worry about compatibility. Cross-platform support makes it easy to develop applications in a familiar desktop environment, then deploy them to embedded systems.
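The toll-collection example above can be sketched in a few lines. This is a minimal illustration, not FLIR code: `recognize_plate` is a hypothetical stand-in for a real on-device recognition pipeline, stubbed out here so the sketch stays self-contained.

```python
# Hypothetical sketch of an edge licence-plate-recognition node.
# Only the short result string leaves the device; the raw image does not.

def recognize_plate(image_bytes: bytes) -> str:
    """Stub for an on-device plate recognizer (assumed, not a real API)."""
    return "ABC-1234"

def process_on_edge(image_bytes: bytes) -> dict:
    """Run recognition locally and build the small payload that is
    actually transmitted upstream."""
    plate = recognize_plate(image_bytes)
    return {"plate": plate, "bytes_sent": len(plate)}

frame = bytes(2_000_000)            # a ~2 MB raw frame captured at the camera
payload = process_on_edge(frame)
print(payload["plate"])             # only this short string goes upstream
print(len(frame) // payload["bytes_sent"])  # rough bandwidth reduction factor
```

The point of the sketch is the shape of the data flow: megabytes in at the camera, a handful of bytes out to the network.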

Key Benefits of Edge Computing

Processing image data at the edge decreases system latency and jitter by reducing the number of switches and hosts between the data's source and destination. Each network node that data packets travel through increases the delay between image acquisition and action. Edge computing lowers system latency further by eliminating the time it takes to upload image data. To enhance system security and mitigate privacy concerns, edge nodes can anonymize data before sending it to the cloud for further analysis.

Figure 1: Edge computing processes image data close to the source for low system latency

Figure 2: Cloud computing results in a long signal path for image data, which increases system latency

Reduce Bandwidth 

Processing your data at the source eliminates the need to transmit images back to a central server. Since only actionable information is sent, far less bandwidth is required.
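A back-of-the-envelope calculation shows the scale of the saving. The figures below (a 5 MP colour camera at 30 fps, a 64-byte result record) are assumptions chosen for illustration, not measured values.

```python
# Assumed figures for a bandwidth comparison: streaming raw frames
# versus sending only a short per-frame result record from the edge.

fps = 30
frame_bytes = 5_000_000 * 3        # 5 MP x 3 bytes/pixel (RGB)
record_bytes = 64                  # e.g. plate number + timestamp

stream_bw = fps * frame_bytes      # bytes/s if every frame is uploaded
edge_bw = fps * record_bytes       # bytes/s if only results are uploaded

print(f"full stream: {stream_bw / 1e6:.0f} MB/s")
print(f"edge results: {edge_bw} B/s")
print(f"reduction: {stream_bw // edge_bw}x")
```

Even with generous compression on the image stream, the result record remains several orders of magnitude smaller.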

Reduce Latency 

Reducing the amount of data sent away from the edge speeds the system up and minimizes the delay between capturing an image and acting on the resulting information.

Improve privacy and security 

Sensitive information, such as license plates and faces, is not transmitted to the cloud.
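One common way an edge node anonymizes results before cloud upload is to replace the sensitive value with a salted hash, so cloud-side analytics can still count repeat observations without ever receiving the plate itself. This is a sketch of one assumed scheme, not a prescribed FLIR workflow; the salt name and record fields are made up.

```python
# Sketch of edge-side anonymization before cloud upload (assumed scheme).
import hashlib

SALT = b"site-7-secret"   # hypothetical per-site salt kept only on the edge node

def anonymize(plate: str) -> str:
    """Replace a plate string with a stable pseudonym."""
    return hashlib.sha256(SALT + plate.encode()).hexdigest()[:16]

record = {"plate_id": anonymize("ABC-1234"), "lane": 2}
print(record["plate_id"])   # stable pseudonym, not the real plate
```

Because the salt never leaves the edge node, the cloud can correlate records (same pseudonym, same vehicle) but cannot recover the original plate.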


When to use Edge Computing

The use of decentralized edge computing networks to handle the ever-increasing volume of data generated by the Internet of Things is sometimes called fog computing. In the fog computing model, cloud computing is not eliminated, but its role in the system changes. Edge nodes are used for low-latency machine-to-machine communication, while the cloud is used for more complex analyses, such as those covering a wide geographic area or a longer time-scale.

A key decision when designing an edge computing system is what data to pass up to the cloud for further analysis or long-term storage. Information required on time-scales of up to a few seconds should be processed and acted upon at the edge, while data for longer time-scale analytics can be sent to the cloud with no latency penalty to the system.
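The edge-versus-cloud decision described above amounts to a routing rule on response deadlines. The sketch below makes that rule concrete; the 2-second threshold and the task names are illustrative assumptions, not recommendations.

```python
# Sketch of the edge-vs-cloud routing decision: results needed within
# seconds are handled locally, everything else is queued for the cloud.

EDGE_DEADLINE_S = 2.0   # assumed cut-off: act locally if needed this fast

def route(task_name: str, deadline_s: float) -> str:
    """Decide where a processing task should run."""
    return "edge" if deadline_s <= EDGE_DEADLINE_S else "cloud"

print(route("raise toll barrier", 0.5))       # time-critical: edge
print(route("monthly traffic report", 3600))  # long time-scale: cloud
```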

Application                  | Advantage
Intelligent Traffic Systems  | Lower bandwidth consumption, increased system security, and minimized privacy risks
Industrial Automation        | Lower latency and jitter to enable higher throughput
Autonomous Vehicle Guidance  | Minimized system latency to enable rapid decision making on high-speed vehicles, while eliminating dependence on an always-on data connection

FLIR Machine Vision Cameras Support Edge Computing

FLIR cameras streamline the development of vision applications for the edge. By pairing the latest CMOS sensors with advanced auto-control algorithms for color correction and exposure, FLIR cameras reliably capture detailed images in challenging lighting conditions. FLIR Blackfly S cameras feature Sony Pregius sensors, whose high quantum efficiency and low read noise enable them to capture clear, low-noise images in low light. Wide dynamic range ensures details are captured in both shaded and brightly lit regions of high-contrast scenes.


FLIR cameras have powerful onboard image processing, including color interpolation, sharpening, and gamma correction, which reduces host-side processing requirements. Support for the IEEE 1588 Precision Time Protocol makes it easy to synchronize GigE Blackfly S cameras to a common time base with other IEEE 1588-enabled devices.
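Once cameras share an IEEE 1588 time base, frames from different cameras can be paired by timestamp alone. The sketch below shows that pairing step on made-up nanosecond timestamps; the 1 ms tolerance window is an assumption, and a real application would read the timestamps from the camera's image metadata.

```python
# Sketch: pairing frames from two PTP-synchronized cameras by timestamp.
# Timestamps (nanoseconds) are invented for illustration.

TOLERANCE_NS = 1_000_000  # assumed 1 ms pairing window

cam_a = [1_000_000_000, 1_033_000_000, 1_066_000_000]
cam_b = [1_000_200_000, 1_033_100_000, 1_067_900_000]

# Frames close enough in time to be treated as simultaneous captures.
pairs = [(ta, tb) for ta in cam_a for tb in cam_b
         if abs(ta - tb) <= TOLERANCE_NS]
print(len(pairs))
```

Here the third pair of frames falls outside the window (1.9 ms apart), so only two frame pairs are matched; tightening or loosening the tolerance trades matching strictness against frame-drop robustness.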





Blackfly S Machine Vision Camera







Spinnaker SDK compatibility

With support for x64 and ARM-based systems, FLIR cameras powered by the Spinnaker SDK can be deployed to a wide range of off-the-shelf hardware. Cross-platform support provides a consistent user experience on both x64 Windows and Linux.

Platform             | ARM64 | x64
Windows 7/8/10       | No    | Yes
Ubuntu 14.04 / 16.04 | Yes   | Yes
Linux GUI            | No    | Yes

More Helpful Resources

If you would like to compare the EMVA 1288 imaging performance of our cameras, please visit our online sensor comparison tool and camera selector page.
For definitions of EMVA 1288 imaging performance terms such as quantum efficiency and dynamic range, visit our EMVA 1288 overview.

MORE PROBLEM SOLVING LESSONS

Lesson #1: True Color: Capturing consistent color images under varying lighting conditions 
Lesson #2: Higher Resolution: Inspecting Higher Density PCBs & Flat Panel Displays 
Lesson #3: Precision System Synchronization: Using IEEE-1588 PTP to synchronize cameras & devices
Lesson #4: Deep Learning: Collect image data, train a neural network, and deploy
Lesson #5: Edge Computing: Process image data closer to the source for low system latency