Blog

Amazon SageMaker Makes Object Detection Algorithm Available


Amazon SageMaker is a fully managed, scalable machine learning platform provided by Amazon Web Services (AWS). Designed to give developers easier access to machine learning, the platform includes an image classification algorithm, which categorises images against pre-defined classes and is amongst the most popular algorithms Amazon offers.


The Object Detection Enhancement

In July 2018, AWS enhanced the platform by launching the Object Detection algorithm. Object detection is the process of identifying and localising objects within an image, and the new algorithm broadens what developers can do with machine learning on SageMaker.

Specifically, the new algorithm goes beyond image classification by returning a bounding box around each object within an image. This identifies both where the object is and which class the box encapsulates.

This differs from Amazon Rekognition, which provides APIs that identify objects using pre-defined classes. Though Rekognition is valuable, the new algorithm improves upon it by allowing programmers both to train with their own dataset and classes and to localise the objects in the image.


How to Get Started

To get started, customers are expected to have their training dataset on Amazon Simple Storage Service (Amazon S3). Once training is complete, SageMaker also uploads the resulting model artifacts to Amazon S3.
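As a rough illustration, the snippet below sketches how the S3 locations for the training data and the model artifacts might be wired up with the SageMaker Python SDK. The bucket name, prefixes and content type are placeholders for this sketch, not values taken from the article.

```python
import sagemaker
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
bucket = "my-object-detection-bucket"  # placeholder bucket name

# Training and validation data already uploaded to Amazon S3 (RecordIO files here)
train_input = TrainingInput(
    s3_data=f"s3://{bucket}/object-detection/train",
    content_type="application/x-recordio",
    s3_data_type="S3Prefix",
)
validation_input = TrainingInput(
    s3_data=f"s3://{bucket}/object-detection/validation",
    content_type="application/x-recordio",
    s3_data_type="S3Prefix",
)

# Location where SageMaker will upload the trained model artifacts
output_path = f"s3://{bucket}/object-detection/output"
```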

Further, SageMaker starts and stops Amazon Elastic Compute Cloud (Amazon EC2) instances on the customer's behalf during training. This benefits the developer's organisation by capitalising on Amazon EC2's ability to provide scalable computing capacity in the AWS cloud. The process is summarised in the Amazon SageMaker documentation, where developers can read further and get a high-level overview of the workflow.
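Continuing the sketch above, the estimator below asks SageMaker to launch (and later terminate) a training instance on the customer's behalf. The IAM role and instance type shown here are assumptions chosen for illustration.

```python
from sagemaker import image_uris
from sagemaker.estimator import Estimator

# Container image for the built-in Object Detection algorithm in this region
image_uri = image_uris.retrieve("object-detection", region=session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
    instance_count=1,               # SageMaker launches and terminates this EC2 capacity
    instance_type="ml.p3.2xlarge",  # GPU instance; scale up or out as the dataset grows
    output_path=output_path,        # model artifacts land here after training
    sagemaker_session=session,
)
```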


Using the New Algorithm

Having met the requirements, developers begin by providing input to the algorithm either in MXNet RecordIO format or as raw images with JSON annotations (see the annotation sketch after this list). This allows the programmer to establish the training configuration. Of the two, the MXNet RecordIO format is recommended for two reasons:

  • Firstly, it is quicker to download one larger file than many small files
  • Secondly, the algorithm uses the RecordIO format
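For the image format, each training image is paired with a JSON annotation file describing its bounding boxes. The snippet below writes one such annotation as a sketch; the file name, class label and box coordinates are invented for illustration, and the exact field layout should be checked against the SageMaker documentation.

```python
import json

# Illustrative annotation for a single image; values are made up for the example
annotation = {
    "file": "images/sample_image1.jpg",
    "image_size": [{"width": 500, "height": 400, "depth": 3}],
    "annotations": [
        {"class_id": 0, "left": 111, "top": 134, "width": 61, "height": 128}
    ],
    "categories": [{"class_id": 0, "name": "dog"}],
}

with open("sample_image1.json", "w") as f:
    json.dump(annotation, f, indent=2)
```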

This means that when the developer runs multiple iterations of the algorithm, having the data already in MXNet RecordIO format saves conversion time. In addition to these recommendations, it is highlighted that the Object Detection algorithm only supports GPU instances for training. Because of this, it is recommended that developers training with large batch sizes use GPU instances with more memory.
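Hyperparameters such as the batch size are set on the estimator before training starts. Continuing the earlier sketch, the values below are placeholders rather than recommendations; the relevant point is that a larger mini_batch_size generally calls for a GPU instance with more memory.

```python
# Illustrative hyperparameters for the built-in algorithm; tune for your own dataset
estimator.set_hyperparameters(
    base_network="resnet-50",    # backbone network
    use_pretrained_model=1,      # start from pretrained weights
    num_classes=2,               # number of object classes in the dataset
    num_training_samples=1000,   # size of the training set
    mini_batch_size=16,          # larger batches need GPU instances with more memory
    epochs=30,
    learning_rate=0.001,
)
```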

Finally, while the algorithm trains, developers can monitor progress using either a SageMaker notebook or Amazon CloudWatch. Once training is complete, the model artifacts are uploaded to the Amazon S3 output location specified in the training configuration. Developers can then choose either a CPU or a GPU instance to deploy the model as an endpoint.
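A rough end-to-end sketch of that last stretch, continuing the earlier snippets: fit() streams training progress back to the notebook (the same logs are available in Amazon CloudWatch), and deploy() stands up a real-time endpoint on the chosen instance type. The instance types and channel names are the placeholders used above.

```python
# Launch training; log output can be followed in the notebook or in CloudWatch
estimator.fit({"train": train_input, "validation": validation_input})

# Once training has finished and the artifacts are in S3, host the model.
# A CPU instance (e.g. ml.m4.xlarge) or a GPU instance can back the endpoint.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
)

# ...run inference against predictor, then clean up the endpoint when finished
predictor.delete_endpoint()
```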

 

Maybe you'll be interested in this ebook: 

Linke SAP on AWS

Stay tuned for more content like this.
