DeepLens is the latest addition to Amazon’s list of AI-powered solutions. Think of it as a developers’ version of the Google Clips camera, only more sophisticated. According to Amazon, it is the first video camera designed to teach developers the basics of deep learning.
What is most fascinating about this product is its seamless integration with Amazon Alexa: users can control the device with voice commands, and Alexa queries the results and narrates the output.
Bridging gaps in Machine Learning
Developers learn the basics of deep learning in two ways: through academic research, or by tinkering with open-source code. Unfortunately, what is learnt through these routes often proves insufficient for solving day-to-day problems.
Deep learning enthusiasts recognise the need for a solution that fills the gaps in these two sources of information. According to Amazon, AWS DeepLens provides this bridge, and its architecture offers insight into how it does so. Below is a glance at each of its features.
AWS IoT rule
The platform ingests telemetry data and manages machine-to-machine (M2M) communications, which lets developers connect sensors and other actuators. A rules engine coordinates the rest of the workflow.
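As a rough sketch of what such a rule looks like, the snippet below builds a topic-rule payload in the shape that boto3's `iot.create_topic_rule` accepts, forwarding every MQTT message on a topic to a Kinesis stream. The topic name, stream name, and role ARN are placeholders invented for illustration:

```python
import json

# Hypothetical names used only for this example.
DETECTIONS_TOPIC = "deeplens/detections"
STREAM_NAME = "deeplens-detections-stream"

def build_topic_rule_payload(topic: str, stream_name: str) -> dict:
    """Build a payload in the shape boto3's iot.create_topic_rule expects.

    The rule selects every MQTT message published on `topic` and forwards
    it to a Kinesis data stream for downstream analysis.
    """
    return {
        "sql": f"SELECT * FROM '{topic}'",
        "actions": [
            {
                "kinesis": {
                    # Placeholder IAM role that would grant the rule write
                    # access to the stream.
                    "roleArn": "arn:aws:iam::123456789012:role/iot-to-kinesis",
                    "streamName": stream_name,
                    "partitionKey": "${newuuid()}",
                }
            }
        ],
        "ruleDisabled": False,
    }

payload = build_topic_rule_payload(DETECTIONS_TOPIC, STREAM_NAME)
print(json.dumps(payload, indent=2))
```

In a real deployment this dict would be passed as the `topicRulePayload` argument of `create_topic_rule`; here it is only constructed and printed so the shape is visible.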
AWS Greengrass
It is an advanced version of the AWS IoT platform: it runs on hubs and gateways and can feed telemetry data even in offline mode, with the same M2M capability.
Amazon Kinesis Data Streams
This feature continuously captures data from multiple sources, which comes in handy for a developer who wants to process and analyse vast volumes of real-time data. It can ingest more than 1 TB of data per hour, and developers can use it to build streaming applications.
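A producer sends data to a stream one record at a time. The sketch below shows how a single DeepLens detection could be serialised into the parameters that boto3's `kinesis.put_record` call takes; the detection schema (`label`/`confidence`) is an assumption for illustration, not the official DeepLens payload:

```python
import json

def to_kinesis_record(detection: dict, stream_name: str) -> dict:
    """Serialise one detection into put_record parameters.

    Partitioning by label means records for the same object class land
    on the same shard and therefore stay ordered relative to each other.
    """
    return {
        "StreamName": stream_name,
        "Data": json.dumps(detection).encode("utf-8"),  # Kinesis takes bytes
        "PartitionKey": detection["label"],
    }

record = to_kinesis_record(
    {"label": "dog", "confidence": 0.92},
    "deeplens-detections-stream",  # hypothetical stream name
)
```

With credentials configured, the result would be sent with `boto3.client("kinesis").put_record(**record)`; here the request is only constructed so the example stays offline.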
Amazon Kinesis Data Analytics
It analyses real-time streaming data using standard SQL, so there is no need to learn a new language or framework.
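A typical query of this kind counts detections per label over a fixed-size (tumbling) time window. The plain-Python sketch below mimics what such a `COUNT(*) ... GROUP BY` windowed query computes, purely to make the idea concrete; in the actual service this logic would be written in SQL:

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=10):
    """Group (timestamp, label) detections into fixed tumbling windows
    and count objects per label in each window -- roughly what a
    windowed COUNT(*) GROUP BY query does in Kinesis Data Analytics."""
    windows = {}
    for ts, label in events:
        window_start = (ts // window_seconds) * window_seconds
        windows.setdefault(window_start, Counter())[label] += 1
    return windows

events = [(1, "dog"), (4, "cat"), (12, "dog"), (13, "dog")]
counts = tumbling_window_counts(events)
# Two windows: [0, 10) -> dog: 1, cat: 1 and [10, 20) -> dog: 2
```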
AWS Lambda
It lets the user determine how the DeepLens data is analysed and queried, and it runs code for nearly any backend or real-life application with no servers to administer. Developers can use it to control other AWS services, mobile apps, and Amazon Alexa skills.
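A minimal sketch of such a function is shown below: a Lambda handler that reads object detections out of its event, keeps the confident ones, and returns their labels. The event shape and the 0.5 confidence threshold are assumptions made for this example, not the official DeepLens payload:

```python
import json

def lambda_handler(event, context):
    """Filter detections in the incoming event and return the labels
    of objects seen with at least 50% confidence."""
    detections = event.get("detections", [])
    labels = sorted({d["label"] for d in detections if d.get("confidence", 0) >= 0.5})
    return {"statusCode": 200, "body": json.dumps({"objects": labels})}

# Local invocation with a sample event; on AWS, the service supplies
# the event and context arguments.
result = lambda_handler(
    {"detections": [{"label": "person", "confidence": 0.9},
                    {"label": "chair", "confidence": 0.3}]},
    None,
)
# Only "person" survives the confidence filter.
```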
Alexa Skills
It lets the user perform everyday activities such as hailing a ride, controlling a home entertainment system, streaming music, working out, and much more. A developer can customise a skill using AWS Lambda so that it retrieves information, and Amazon Alexa then verbalises the result to the user.
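To verbalise a result, the skill's backend returns a response envelope in the Alexa Skills Kit JSON format containing the speech text. The builder below is a sketch; the envelope fields follow the standard skill response format, while the phrasing itself is invented for illustration:

```python
def build_alexa_response(objects):
    """Wrap a list of detected-object names in an Alexa Skills Kit
    response so Alexa can speak it aloud."""
    if objects:
        speech = "DeepLens can see " + ", ".join(objects) + "."
    else:
        speech = "DeepLens does not see anything right now."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

resp = build_alexa_response(["a person", "a chair"])
```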
Step by step on how AWS DeepLens works
- First, the user sets up the device by following the instructions in the manual; this takes less than 10 minutes.
- Once set up, the camera uploads detected images to the AWS Greengrass platform, which then sends them as MQTT messages to the IoT rule engine for further processing.
- The rule engine transfers the processed messages to a Kinesis data stream.
- Kinesis Data Analytics analyses the information on the Kinesis data stream and sends the result to the next data stream to await querying.
- Customised Alexa skills invoke a Lambda function when queried by the user and verbalise a list of the objects detected by the DeepLens device.
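The steps above can be sketched end to end as a plain-Python simulation, with each stage function standing in for the corresponding managed service (all names and message shapes are assumptions, not real AWS APIs):

```python
import json

def camera_detect(frame_objects):
    # Steps 1-2: DeepLens runs inference on a frame and publishes the
    # detections as an MQTT message via Greengrass.
    return json.dumps({"topic": "deeplens/detections", "objects": frame_objects})

def iot_rule(mqtt_message):
    # Step 3: the rule engine matches the message and forwards its
    # payload to a Kinesis data stream.
    return json.loads(mqtt_message)["objects"]

def analytics(stream_records):
    # Steps 4-5: Kinesis Data Analytics aggregates the raw detections
    # into a deduplicated summary on the output stream.
    return sorted(set(stream_records))

def alexa_skill(summary):
    # Step 6: the Alexa skill invokes a Lambda function and verbalises
    # the summary to the user.
    return "I can see " + " and ".join(summary) + "."

spoken = alexa_skill(analytics(iot_rule(camera_detect(["dog", "sofa", "dog"]))))
# -> "I can see dog and sofa."
```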
AWS DeepLens is a device that captures the attention of AI, Internet-of-Things, and edge-computing enthusiasts. Most AI developers will find it easy to incorporate, as it supports popular machine learning frameworks such as TensorFlow, MXNet, PyTorch, and Caffe2. It currently runs on Ubuntu, but Amazon might produce Android- and iOS-compatible versions for mobile developers.