What is Edge AI – FESCH.TV


Edge AI is the use of machine learning (ML) algorithms that run at or near the source of the data they analyze (i.e., the edge). Edge AI is a new term that describes what is fundamentally an old way of doing things — using on-device processing to crunch data and predict an outcome, as opposed to sending that data up to the cloud for processing. The reason edge AI has emerged as a term, though, has a lot to do with the specific kind of AI we’re usually talking about, in the form of on-device machine learning models.

Machine learning models are core to the concept of edge AI. In the very simplest terms, a machine learning model uses vector math to analyze data inputs and generate a predictive outcome. A given ML model is often trained on a particular use case, such as speech recognition, object detection, or otherwise estimating a particular condition (e.g., the number of defects in a stainless steel bolt). It’s important to remember that ML models are predictive, not conclusive — they are trained on relevant data that is intended to “teach” them how to predict an output for a given use case.
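The “vector math” in question can be illustrated with a toy linear model. Everything below — the weights, bias, feature names, and threshold — is invented for illustration; in a real model these parameters would come from training:

```python
# A minimal sketch of the "vector math" at the heart of an ML model:
# a toy linear classifier. All parameter values are made up.

def predict(weights, bias, features):
    """Dot product of learned weights with input features, plus a bias.
    Returns a raw score; a real model would map this to a probability."""
    return sum(w * x for w, x in zip(weights, features)) + bias

# Hypothetical trained parameters for a "defective bolt?" classifier
weights = [0.8, -0.5, 1.2]   # e.g., surface roughness, weight, thread wear
bias = -0.3

features = [0.9, 0.4, 0.7]   # sensor readings for one bolt
score = predict(weights, bias, features)
print(score > 0.5)           # crude decision threshold → True
```

Note that the output is a prediction, not a certainty — the threshold simply converts a continuous score into an actionable yes/no.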

Edge AI differs from cloud AI in that edge AI, by definition, runs at or very near the source of the data the AI processes to produce a prediction or result, whereas cloud AI runs on a centralized resource (e.g., a server farm) and requires sending data from the source to the cloud.

One of the most widely deployed forms of edge AI today is assisted (or automated) driving in passenger cars. Using a suite of cameras and sensors, many modern vehicles “see” the road you drive on, recognizing speed limit signs, lane markers, pedestrians, and other vehicles. Machine learning (in this case, computer vision) algorithms running locally on processors inside the vehicle analyze the data produced by these sensors to determine road conditions. Those conditions are then processed by behavioral models in the car into an outcome, such as adjusting vehicle speed automatically when the speed limit of a road changes. In this scenario, the car’s machine learning algorithms likely only receive major updates infrequently (if ever, in some cases), and data about the behavior of the vehicle and performance of the model may not even be transmitted back up to the cloud — such systems can operate in an entirely “closed loop” fashion.
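The closed-loop pattern described above — sense, infer locally, act, with nothing sent to the cloud — can be sketched in a few lines. All of the function names and frame data here are hypothetical stand-ins for real perception and behavioral models:

```python
# Hypothetical sketch of the closed-loop pattern: sense -> local
# inference -> act. No data leaves the device.

def detect_speed_limit(frame):
    """Stand-in for an on-device vision model; here it just reads a
    fake 'sign' field from the frame dict."""
    return frame.get("sign")

def control_loop(frames, current_speed):
    """Adjust target speed whenever a new limit is recognized."""
    for frame in frames:
        limit = detect_speed_limit(frame)   # local inference only
        if limit is not None and limit != current_speed:
            current_speed = limit           # the behavioral "outcome"
    return current_speed

frames = [{"sign": None}, {"sign": 50}, {"sign": None}, {"sign": 30}]
print(control_loop(frames, 70))  # → 30
```

The key design point is that both the perception step and the decision step run on the same device, so the loop keeps working even with no connectivity at all.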

Edge AI could even be applied, at its most reductive, to the algorithm that detects faces in the viewfinder of a standalone digital camera. After all, the camera is running a computer vision algorithm (i.e., doing vector math on image data), which is a form of machine learning, and that algorithm is running on-device at the “edge.” Therefore, edge AI. And the term edge AI has been retroactively applied to other use cases ranging from medical pacemakers, glucose monitors, industrial sensors, and smart thermostats to video surveillance and retail automation.
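To make “vector math on image data” concrete, here is a minimal convolution, the basic operation underlying computer vision models such as face detectors. The kernel and the four-pixel “image” are illustrative only; a real detector stacks thousands of such operations:

```python
# Computer vision as "vector math on image data": applying a 3x3
# edge-detection kernel to one pixel of a tiny grayscale image.

KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]  # Laplacian-style edge detector

def convolve_pixel(image, row, col):
    """Weighted sum of the 3x3 neighborhood around (row, col)."""
    return sum(KERNEL[i][j] * image[row - 1 + i][col - 1 + j]
               for i in range(3) for j in range(3))

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]  # a bright square on a dark background

# The response is large where brightness changes sharply
print(convolve_pixel(image, 1, 1))  # → 45
```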

Some of the most instructive examples for understanding the broader concept of edge AI are:

Autonomous vehicles: Automated driving requires a lot of local computing power and demands near-real-time responsiveness. The data processed is complex, heterogeneous, and highly specialized. This is a clear-cut example of edge AI.

Retail robotics: Robots designed to spot hazards, validate placement of goods, and identify security risks in retail stores must process very large amounts of visual data from many cameras and other sensors. This makes on-device processing crucial and requires sophisticated machine learning models. Another strong case for the “edge AI” label.

Retail automation: Retailers are increasingly exploring the idea of using computer vision to identify the goods a customer has decided to buy and provide a seamless checkout experience, as opposed to traditional barcode scanner-based checkout systems. Such implementations require byzantine camera and sensor networks that must process data rapidly on edge AI servers inside the retail location.

Manufacturing automation: Manufacturing goods requires validating the quality of product coming off the line, and computer vision and various sensors can be indispensable to the quality assurance process. Similarly, ensuring the performance of manufacturing systems through constant monitoring with AI advances this goal. Real-time or near-real-time performance is crucial in such environments, where a production fault could easily cascade into substantial lost productivity or damage to equipment.

Regarding the future of edge AI, its growth depends on two complementary trends: the growth of on-device AI computing power, and the efficiency of on-device machine learning models. As devices grow more powerful, they can use more complex ML models for on-device AI to generate more accurate predictions, eventually opening up new use cases. And as ML models become more efficient, they can make better use of the limited processing power available on-device.
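One common way ML models are made more efficient for edge hardware is quantization: storing weights as 8-bit integers instead of 32-bit floats. A minimal sketch, with illustrative weight values and a simple symmetric scaling scheme:

```python
# Sketch of weight quantization, a common model-efficiency technique
# for edge hardware. Values and scaling scheme are illustrative only.

def quantize(weights):
    """Map floats to int8 range [-127, 127] using a symmetric scale."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the quantized values."""
    return [q * scale for q in q_weights]

weights = [0.42, -1.27, 0.05, 0.91]
q, scale = quantize(weights)

print(q)  # → [42, -127, 5, 91]
print([round(w, 3) for w in dequantize(q, scale)])
```

Each weight now occupies one byte instead of four, at a small accuracy cost — exactly the kind of trade-off that lets a more capable model fit on a less powerful device.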






