Top 10 Takeaways From AWS Summit New York
AWS Summit New York 2018 hit the Jacob Javits Center in Manhattan last week, and the 10,000+ attendees weren’t disappointed by the announcements that Werner Vogels, CTO of Amazon, and Matt Wood, GM of Artificial Intelligence at Amazon Web Services, had to share. With a focus on ML/AI, containers, compute, and speech synthesis, AWS really brought the heat to the public cloud competition. Here are the highlights from the A Cloud Guru team:
Machine Learning/Artificial Intelligence
- Amazon SageMaker Batch Transform is a new high-performance and high-throughput method for transforming data and generating inferences. It’s ideal for scenarios where you’re dealing with large batches of data, don’t need sub-second latency, or need to both preprocess and transform the training data. Best of all, no additional code needs to be written to use it!
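As a rough sketch, a Batch Transform job is described with a request payload like the one below, which you would pass to boto3’s `sagemaker.create_transform_job(**request)`. The job name, model name, bucket, and instance type here are all hypothetical placeholders:

```python
# Sketch of a SageMaker Batch Transform request. "my-model" must already
# exist in SageMaker, and the S3 paths are hypothetical examples.
request = {
    "TransformJobName": "nightly-batch-inference",
    "ModelName": "my-model",
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/batch-input/",  # hypothetical bucket
            }
        },
        "ContentType": "text/csv",
    },
    "TransformOutput": {"S3OutputPath": "s3://my-bucket/batch-output/"},
    "TransformResources": {"InstanceType": "ml.m4.xlarge", "InstanceCount": 1},
}
```

SageMaker reads the inference inputs from the input prefix, runs them through the model, and drops the results at the output path — no custom batching code on your side.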
- SageMaker Pipe Input Mode for TensorFlow allows customers to stream their training dataset directly from Amazon Simple Storage Service (S3) into Amazon SageMaker using a highly optimized multi-threaded background process. This mode offers significantly better read throughput than File input mode, which must first download the data to a local Amazon Elastic Block Store (EBS) volume. This means your training jobs start sooner, finish faster, and use less disk space, lowering the costs associated with training your models.
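Switching a training job from File to Pipe mode is a one-field change in the input channel of a `CreateTrainingJob` request. A minimal sketch, with a hypothetical bucket and channel name:

```python
# Input-channel fragment of a SageMaker CreateTrainingJob request.
# "Pipe" streams the data from S3; the default "File" mode would
# download it to the instance's EBS volume first.
training_channel = {
    "ChannelName": "training",
    "InputMode": "Pipe",
    "DataSource": {
        "S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/training-data/",  # hypothetical bucket
        }
    },
}
```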
- Amazon Transcribe With Channel Synthesis | Amazon Transcribe is an existing AWS automatic speech recognition service that delivers speech-to-text capabilities to applications. A newly-announced feature called Channel Synthesis adds support for automated, multi-channel audio transcription. Contact centers stand to benefit significantly, since they routinely transcribe multi-channel customer call recordings. A contact center can now submit a single, multi-channel audio file (typically, the caller and the call center agent are recorded on two separate channels) to Amazon Transcribe, which will identify the two channels, split them out, transcribe each speaker per channel, and then produce a coherent merged transcript with channel labels, making it easier to tell from a transcription just who said what.
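In the Transcribe API this is switched on via the `ChannelIdentification` setting of a transcription job. A sketch of the request you would pass to boto3’s `transcribe.start_transcription_job(**request)`, with a hypothetical job name and media URI:

```python
# Sketch of a request for transcribing a two-channel call recording.
request = {
    "TranscriptionJobName": "support-call-42",
    "LanguageCode": "en-US",
    "MediaFormat": "wav",
    "Media": {"MediaFileUri": "s3://my-bucket/calls/call-42.wav"},  # hypothetical
    # Ask Transcribe to split, transcribe, and label the caller/agent channels
    "Settings": {"ChannelIdentification": True},
}
```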
- New Languages and Syntax Analysis | Amazon Translate, an API that delivers fast, high-quality, and affordable language translation, now offers support for six new languages: Japanese, Russian, Italian, Traditional Chinese, Turkish, and Czech. Syntax Analysis for Amazon Comprehend now lets customers identify parts of speech like nouns and adjectives to better understand things like customer sentiment. Developers can write their own analysis rules looking for specific conditions within a given text. For example, an application can find all of the nouns mentioned within a document and then look for the correlating adjectives related to those nouns, offering a better understanding of customer sentiments like urgency.
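The noun/adjective pairing described above can be sketched against the shape of a Comprehend syntax response. The tokens below are a hand-written stand-in for what `comprehend.detect_syntax(Text=..., LanguageCode="en")` would return for a short customer message:

```python
# Stand-in for a Comprehend detect_syntax response: each SyntaxToken
# carries the word and its part-of-speech tag.
sample_response = {
    "SyntaxTokens": [
        {"Text": "urgent", "PartOfSpeech": {"Tag": "ADJ"}},
        {"Text": "refund", "PartOfSpeech": {"Tag": "NOUN"}},
        {"Text": "please", "PartOfSpeech": {"Tag": "INTJ"}},
    ]
}

# Pull out the nouns and the adjectives that might qualify them
nouns = [t["Text"] for t in sample_response["SyntaxTokens"]
         if t["PartOfSpeech"]["Tag"] == "NOUN"]
adjectives = [t["Text"] for t in sample_response["SyntaxTokens"]
              if t["PartOfSpeech"]["Tag"] == "ADJ"]
print(nouns, adjectives)  # ['refund'] ['urgent']
```

A real analysis rule would also use the token offsets in the response to check that an adjective actually sits near the noun it modifies.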
- Time-Driven Prosody allows you to specify the desired duration for the synthesized speech that corresponds to part or all of the input text. So, if you want to emulate the rapid-fire speech of an auctioneer, Polly’s got you covered.
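The duration cap is expressed in SSML via the `amazon:max-duration` attribute on a `<prosody>` tag; the markup is then passed as the `Text` argument (with `TextType="ssml"`) to Polly’s `synthesize_speech`. A sketch of the auctioneer example:

```python
# SSML for time-driven prosody: amazon:max-duration caps how long Polly
# may take to speak the enclosed text, so it speeds up to fit.
ssml = (
    "<speak>"
    '<prosody amazon:max-duration="2s">'
    "Sold to the bidder in the back for two thousand dollars!"
    "</prosody>"
    "</speak>"
)
```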
- Polly’s Asynchronous Synthesis feature lets you process large blocks of text and store the synthesized speech in Amazon S3 with a single call. If you need to synthesize speech for long-form content such as articles or book chapters, Asynchronous Synthesis can process up to 100,000 characters of text at a time and deliver the synthesized speech to the S3 bucket of your choice.
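An asynchronous job is kicked off with a single `polly.start_speech_synthesis_task(**request)` call, which returns a task ID you can poll with `get_speech_synthesis_task` while Polly writes the audio to your bucket. A sketch, with a hypothetical bucket name and a stand-in for the long-form text:

```python
# Sketch of an asynchronous Polly synthesis request. The real Text value
# could be an entire article or book chapter, up to 100,000 characters.
long_text = "It was the best of times, it was the worst of times."
request = {
    "OutputFormat": "mp3",
    "OutputS3BucketName": "my-audiobook-bucket",  # hypothetical bucket
    "Text": long_text,
    "VoiceId": "Joanna",
}
```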
- Three new EC2 instance types are in the works and will soon reach General Availability: The compute-intensive Z1d instance type is ideal for Electronic Design Automation and relational database workloads, and it’s also a great fit for several kinds of HPC workloads. The memory-optimized R5 instance runs at up to 3.1 GHz, with up to 50% more vCPUs and 60% more memory than the older R4 instances. R5d instances are R5s on steroids, with up to 3.6 TB of local NVMe storage.
- EC2 Instances on Snowball Edge | These ruggedized devices, with 100 TB of local storage, can now use Amazon EC2 Machine Images to collect and process data in hostile environments with limited or non-existent Internet connections before shipping the processed data back to AWS for storage, aggregation, and detailed analysis.
- Fargate For EKS | AWS Fargate is a compute engine for Amazon Elastic Container Service for Kubernetes (EKS) that allows operators to run containers without having to manage servers or clusters. This new release, available only in the Oregon AWS region for now, lets operators run Kubernetes — an open-source system for automating the deployment, scaling, and management of containerized applications — on AWS without needing to stand up or maintain their own Kubernetes control plane.
- Bring Your Own IP Addresses | Applications might use trusted IP addresses that are whitelisted by a company’s partners and customers. Now, Amazon Virtual Private Cloud (VPC) allows customers to use their own publicly-routable IP addresses with AWS resources such as Amazon EC2 when moving applications to AWS, sparing partners and customers from having to change their IP address whitelists.