Simple Cloud for SAP

Stream-Based Processing Operations Are Now Easier Than Ever With AWS Lambda

26/04/17 08:15 by Editorial Team



Computer scientists and application developers use the term stream processing to refer to a dataflow programming paradigm (sometimes also called reactive programming). Its objective is to let certain applications take advantage of parallel processing more easily.

Such applications can make use of diverse computational units, from a GPU's floating-point units to FPGAs (field-programmable gate arrays), in a streamlined, automated fashion. That is to say, there is no need to explicitly manage how the various units synchronize, communicate, and allocate resources among themselves.

CS and DevOps professionals like the stream processing paradigm because it greatly simplifies parallel software and hardware by restricting the kinds of parallel computation that can be performed, which rules out many synchronization pitfalls up front.

A full treatment of stream processing is beyond the scope of this post, but the broad strokes are these: given a stream (a sequence of data), a series of known operations is applied to each element in it. These operations are referred to as kernel functions.
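To make the idea concrete, here is a minimal sketch in Python, with the stream modeled as a generator and the kernel as an ordinary function (the names sensor_stream and scale_kernel are illustrative, not part of any real API):

def sensor_stream():
    # Yields a sequence of data elements: the stream.
    for reading in [1.2, 3.4, 5.6, 7.8]:
        yield reading

def scale_kernel(element):
    # A kernel function: one known operation applied to each element.
    return element * 10

# The kernel is applied to every element as it flows through the stream.
for result in map(scale_kernel, sensor_stream()):
    print(result)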

 

What Is Uniform Streaming?

When a single kernel function can be applied to every element in a data stream, this is known as uniform streaming. Uniform streaming is quite rare; it is more common for kernel functions to be pipelined, with each element passing through several kernels in sequence.
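As a rough illustration of pipelining, the following Python sketch chains two hypothetical kernels (parse_kernel and filter_kernel) so that every element flows through both in sequence:

def parse_kernel(stream):
    # First kernel: convert each raw element to a float.
    for raw in stream:
        yield float(raw)

def filter_kernel(stream):
    # Second kernel: drop negative values.
    for value in stream:
        if value >= 0:
            yield value

raw_stream = ["1.5", "-2.0", "3.25"]
pipeline = filter_kernel(parse_kernel(raw_stream))
print(list(pipeline))  # prints [1.5, 3.25]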

Local on-chip memory is reused wherever possible in order to keep external memory bandwidth to a minimum. Because the kernel and stream abstractions expose data dependencies, compiler tools can use them to optimize and automate on-chip management tasks.

Some stream processing hardware uses scoreboarding and similar techniques to initiate DMA (direct memory access) transfers as data dependencies become known. Eliminating the need to manage DMA manually (which is very time consuming) keeps the software simple; likewise, eliminating hardware caches reduces the chip area devoted to control logic, freeing it for arithmetic logic units and other computational units.

Stream Processing with Amazon Kinesis and Amazon DynamoDB

It is now possible to closely monitor how long records spend in Amazon Kinesis and Amazon DynamoDB streams before they are handed to AWS Lambda functions for processing.
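For context, a Lambda function attached to one of these streams receives batches of base64-encoded records. A minimal handler sketch in Python might look like the following, where process is a hypothetical placeholder for your own logic:

import base64

def process(payload):
    # Placeholder for application-specific record handling.
    print(payload)

def lambda_handler(event, context):
    # Kinesis delivers records base64-encoded inside event["Records"].
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        process(payload)
    return {"records_processed": len(event["Records"])}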

A useful new AWS Lambda metric called IteratorAge also makes it easy to quickly identify delays in the stream processing sequence. You can use this metric to create alarms that fire whenever stream processing falls behind, so that the right measures can be taken to remedy the situation and keep the entire DevOps workflow running smoothly.
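For example, assuming a Lambda function named my-stream-processor and an existing SNS topic for notifications (both names are illustrative), such an alarm could be created with boto3 roughly as follows; the threshold and period values are placeholders, not recommendations:

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="stream-processing-delay",
    Namespace="AWS/Lambda",
    MetricName="IteratorAge",
    Dimensions=[{"Name": "FunctionName", "Value": "my-stream-processor"}],
    Statistic="Maximum",
    Period=60,                 # evaluate the metric over 60-second windows
    EvaluationPeriods=3,       # alarm after three consecutive breaches
    Threshold=60000.0,         # IteratorAge is reported in milliseconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)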

In this highly competitive, deadline-driven industry, tools that pinpoint the points of delay are crucial to success. The IteratorAge metric is available at no extra cost as a standard feature of Amazon's serverless Lambda platform, and it is accessible through the Monitoring tab in the Lambda console.

Categories: AWS, AWS Lambda

Written by Editorial Team
