What is Edge Computing?
The “Edge” means “local” or “near-local” processing, close to where the data is generated. Edge applications are often used where low latency is a requirement.
Edge computing is also used where a network connection may be unreliable or unavailable. It stems from the need for real-time decision making.
Some use cases for Edge Computing :-
1) Image Analytics :-
Real-time recognition of a constantly changing scene from a video stream requires high data bandwidth if performed in the cloud.
AI on the Edge enables local analysis of the visual scene, such as understanding the scene for context analysis, simultaneous multi-object detection and recognition for obstacle avoidance, people identification for secure access, etc.
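As a minimal sketch of the idea of analysing frames locally instead of streaming them to the cloud, the toy functions below detect a significant scene change between two grayscale frames (represented as plain 2D lists; a real system would use a camera SDK and a deep learning model, and all names and thresholds here are illustrative):

```python
# Toy local scene-change detection on the edge device.
# Frames are 2D lists of grayscale pixel intensities (0-255).

def changed_fraction(prev_frame, curr_frame, pixel_threshold=25):
    """Fraction of pixels whose intensity changed by more than a threshold."""
    total = 0
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            total += 1
            if abs(p - c) > pixel_threshold:
                changed += 1
    return changed / total if total else 0.0

def scene_changed(prev_frame, curr_frame, frame_threshold=0.1):
    """True if enough pixels changed; only then escalate to heavier analysis."""
    return changed_fraction(prev_frame, curr_frame) > frame_threshold

# Example: a 2x2 frame where one of four pixels jumps from 0 to 200.
prev = [[0, 0], [0, 0]]
curr = [[200, 0], [0, 0]]
print(scene_changed(prev, curr))  # True (0.25 of pixels changed > 0.1)
```

The point of the sketch is the bandwidth saving: only the boolean decision (or a cropped region of interest) needs to leave the device, not the raw video stream.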
2) Surveillance and monitoring :-
Deep Learning-enabled smart cameras can locally process the captured images to identify and track multiple objects and people, detecting suspicious activities directly on the edge node.
These smart cameras minimise communication with remote servers in the cloud by sending data only on a triggering event, which also reduces remote processing.
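The "send only on a triggering event" policy can be sketched as below. The class, event names, and trigger set are illustrative, not from any specific camera SDK; local inference is assumed to have already produced a list of detected event labels per frame:

```python
# Toy edge camera: processes every frame locally, contacts the
# cloud only when an event of interest is detected.

SUSPICIOUS_EVENTS = {"loitering", "intrusion", "abandoned_object"}

class EdgeCamera:
    def __init__(self):
        self.uploads = []  # stand-in for messages sent to the cloud

    def process_frame(self, detected_events):
        """Keep ordinary frames on-device; upload only flagged events."""
        triggers = SUSPICIOUS_EVENTS & set(detected_events)
        if triggers:
            self.uploads.append(sorted(triggers))
        return bool(triggers)

cam = EdgeCamera()
cam.process_frame(["person_walking"])               # nothing sent
cam.process_frame(["person_walking", "intrusion"])  # upload triggered
print(len(cam.uploads))  # 1
```

Only the second frame produces any network traffic; everything else stays on the edge node.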
3) Autonomous Vehicles :-
A smart automotive camera can recognise vehicles, traffic signs, pedestrians, the road, and other objects locally, sending only the information needed for autonomous driving to the main controller. The same concept applies to robots and drones.
4) Expression Analysis to improve shopping/advertising or driving :-
An individual’s emotional reactions can be analysed to provide clues about their degree of acceptance of a service, their likes and dislikes of products shown on the shelves of a shop, or their stress level. When processed locally, these data can be used to understand customers better.
5) Audio Analytics :-
Just as AI and Deep Learning can analyse a visual scene in all its elements, an audio scene can be split into its basic parts, enabling the following deep learning functions.
a) Audio Scene Classification :-
It can help a device understand its location and trigger location-specific features, such as ad-hoc noise reduction for a voice interface, or disabling a smartphone’s touch/typing capabilities when the car is in Driver mode.
b) Audio Event Detection :-
Detecting sounds such as a baby crying, glass breaking, or a gunshot can trigger an action, such as a notification or location detection via triangulation.
Triangulation is a surveying technique: knowing the length of one side of a triangle (the baseline) and the angles measured from its two endpoints, the position of the third point can be computed.
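A small worked sketch of that idea, assuming two observers (e.g. microphones) on a known baseline who each measure the angle between the baseline and the sound source; the distance then follows from the law of sines. Names and values are illustrative:

```python
import math

def triangulate_distance(baseline, angle_a_deg, angle_b_deg):
    """Distance from observer A to the source.

    baseline     -- distance between observers A and B
    angle_a_deg  -- angle at A between baseline and line of sight (degrees)
    angle_b_deg  -- angle at B between baseline and line of sight (degrees)
    """
    a = math.radians(angle_a_deg)
    b = math.radians(angle_b_deg)
    # The third angle of the triangle is pi - a - b, and sin(pi - x) = sin(x),
    # so by the law of sines: AP = baseline * sin(b) / sin(a + b).
    return baseline * math.sin(b) / math.sin(a + b)

# Two observers 10 m apart, each measuring 45 degrees toward the source:
# the triangle is right-angled at the source, so AP = 10 * sin 45 / sin 90.
d = triangulate_distance(10.0, 45.0, 45.0)
print(round(d, 3))  # 7.071
```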
Since understanding specific sound events in multi source conditions is a latency-critical task, AI at the edge can be very fast and effective in recognising an audio event among numerous overlapping sound sources.
Recognising an approaching car or truck, or the sound of brakes, can be a lifesaver.
NLP (Natural Language Processing) :-
NLP tasks such as speech processing can take several forms on the edge:
1) The Keyword Recognition :-
This approach uses a LIMITED vocabulary of activation words that are useful to the application: words that carry the critical information, or that can identify something from very limited input, like “Okay Google” or “Hey Siri”.
For example, a lamp does not need to understand much more than “on” or “off”.
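The lamp example can be sketched as below. In a real device a small neural network would transcribe the audio; here the transcription is assumed to already exist, and the sketch shows only the limited-vocabulary gating (vocabulary and function names are illustrative):

```python
# Toy limited-vocabulary keyword handling for an edge device (a lamp).

LAMP_VOCABULARY = {"on", "off"}

def handle_command(transcribed_word, lamp_state):
    """Apply a recognised keyword; ignore anything out of vocabulary."""
    word = transcribed_word.strip().lower()
    if word not in LAMP_VOCABULARY:
        return lamp_state  # unknown word: no action taken
    return word == "on"

state = False
state = handle_command("On", state)
print(state)  # True
state = handle_command("dim slightly", state)  # out of vocabulary, ignored
print(state)  # True
state = handle_command("OFF", state)
print(state)  # False
```

Because the vocabulary is tiny, the recognition model can be small enough to run continuously on a low-power microcontroller.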
2) Text-to-Speech(TTS) and Speech-to-Text(STT) :-
These are two examples of complex tasks where AI and DL bring the functionality to the Edge.
For example, hands-free text read and write functions in automotive, where the driver can keep attention on the main task (driving the car) while interacting with the infotainment system.
Similarly, in teaching by IIT NPTEL professors, where the focus is on explaining concepts rather than writing, the text can be generated by a speech-to-text algorithm.
Finally, DL-based speech recognition is used in conversational user interfaces such as chatbots, where the abilities of NLP are drastically augmented to allow human-grade conversation.
Inertial and Environmental Sensor Analytics :-
Smartwatches and fitness bands, as well as smart buildings, homes, and factories, extensively use data from inertial and environmental sensors.
Deep-learning-enabled processing on the Edge allows quicker analysis of the local situation and a faster response.
1) Predictive Maintenance in Factories :-
Sensors attached to a machine can measure vibration, temperature, and noise levels, and AI performed locally can infer the state of the equipment, potential anomalies, and early indications of failure.
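A hedged sketch of the simplest form of this idea: flag a vibration reading whose z-score against the machine's recent baseline exceeds a threshold, all computed on the edge node. The sensor values and threshold are illustrative, and a production system would use a trained model rather than a single statistic:

```python
import math

def z_score(value, baseline):
    """How many standard deviations a reading is from the baseline mean."""
    mean = sum(baseline) / len(baseline)
    var = sum((x - mean) ** 2 for x in baseline) / len(baseline)
    std = math.sqrt(var)
    return 0.0 if std == 0 else (value - mean) / std

def is_anomalous(reading, baseline, threshold=3.0):
    """Flag readings far outside the machine's normal vibration range."""
    return abs(z_score(reading, baseline)) > threshold

# Recent vibration amplitudes from a healthy machine (arbitrary units).
normal_vibration = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98]
print(is_anomalous(1.03, normal_vibration))  # False: within normal range
print(is_anomalous(2.5, normal_vibration))   # True: early warning, alert sent
```

Only the alert (not the raw sensor stream) needs to reach the maintenance system, which is exactly the edge-computing trade-off described above.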
2) Body Monitoring :-
Our wearable devices collect a lot of data about our activity, location, and heart rate, among other things. This information can be correlated with health, stress levels, and diet, potentially alerting wearers to a health issue before it becomes critical.
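A minimal sketch of on-device alerting: the wearable checks each heart-rate sample against an expected range locally, before anything reaches the cloud. The ranges below are illustrative placeholders, not medical guidance:

```python
# Toy on-device heart-rate check for a wearable.

def heart_rate_alert(bpm, resting_low=50, resting_high=100, activity="rest"):
    """Return an alert string for out-of-range readings, else None."""
    if activity == "exercise":
        resting_high = 180  # higher rates are expected during exercise
    if bpm < resting_low:
        return "low heart rate"
    if bpm > resting_high:
        return "high heart rate"
    return None

print(heart_rate_alert(72))                        # None
print(heart_rate_alert(130))                       # high heart rate
print(heart_rate_alert(130, activity="exercise"))  # None
```

Note how activity context (also sensed locally) changes the interpretation of the same reading, which is why correlating sensor streams on the device matters.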
Keep Learning and Sharing with Grouply.
It is very easy to Defeat someone but very Hard to win someone — By A.P.J Abdul Kalam (Bharat Ratna)