ECS Case Study

Common Use Cases in Amazon ECS

This topic provides guidance for two common use cases in Amazon ECS: microservices and batch jobs. It covers considerations and external resources that may be useful for getting your application running on Amazon ECS, along with the common aspects of each solution.


Microservices

Microservices are built with a software architectural method that decomposes complex applications into smaller, independent services. Containers are optimal for running small, decoupled services, and they offer the following advantages:

  • Containers make services easy to model in an immutable image with all of your dependencies.

  • Containers can use any application and any programming language.

  • The container image is a versioned artifact, so you can track your container images to the source they came from.

  • You can test your containers locally, and deploy the same artifact to scale.

The following sections cover some of the aspects and challenges that you must consider when designing a microservices architecture to run on Amazon ECS. You can also view the microservices reference architecture on GitHub. For more information, see Deploying Microservices with Amazon ECS, AWS CloudFormation, and an Application Load Balancer.

Auto Scaling

The application load for your microservice architecture can change over time. A responsive application can scale out or in, depending on actual or anticipated load. Amazon ECS provides several tools to scale not only the services running in your clusters, but also the clusters themselves.

For example, Amazon ECS provides CloudWatch metrics for your clusters and services. For more information, see Amazon ECS CloudWatch Metrics. You can monitor the memory and CPU utilization for your clusters and services. Then, use those metrics to trigger CloudWatch alarms that can automatically scale out your cluster when its resources are running low, and scale it back in when you don't need as many resources. For more information, see Tutorial: Scaling Container Instances with CloudWatch Alarms.
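As a rough sketch, the decision such a pair of alarms encodes might look like the following. The metric name and thresholds here are illustrative, not taken from the tutorial:

```python
# Sketch of the decision logic a pair of CloudWatch alarms encodes for
# cluster scaling. The thresholds and metric sample are hypothetical.

SCALE_OUT_THRESHOLD = 75.0  # % CPUReservation above which to add instances
SCALE_IN_THRESHOLD = 25.0   # % CPUReservation below which to remove them

def scaling_action(cpu_reservation_pct: float) -> str:
    """Return the action an alarm pair would trigger for this metric value."""
    if cpu_reservation_pct > SCALE_OUT_THRESHOLD:
        return "scale_out"   # alarm: resources running low, add capacity
    if cpu_reservation_pct < SCALE_IN_THRESHOLD:
        return "scale_in"    # alarm: capacity idle, remove instances
    return "no_change"       # within the comfortable band

print(scaling_action(80.0))  # scale_out
print(scaling_action(50.0))  # no_change
print(scaling_action(10.0))  # scale_in
```

In practice the alarm actions would invoke an Auto Scaling policy rather than code you write yourself; the sketch only shows the thresholds-and-bands reasoning behind the alarm configuration.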

In addition to scaling your cluster size, your Amazon ECS service can optionally be configured to use Service Auto Scaling to adjust its desired count up or down in response to CloudWatch alarms. Service Auto Scaling is available in all regions that support Amazon ECS. For more information, see Service Auto Scaling.
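To illustrate the idea, a target tracking policy (one of the Service Auto Scaling options) adjusts the desired count roughly in proportion to how far a metric sits from its target. This is an approximation of that behavior, with hypothetical numbers, not the exact service algorithm:

```python
import math

def target_tracking_desired_count(current_count: int,
                                  metric_value: float,
                                  target_value: float,
                                  min_count: int = 1,
                                  max_count: int = 10) -> int:
    """Approximate how target tracking adjusts a service's desired count:
    scale the count proportionally to metric/target, clamped to the
    configured minimum and maximum."""
    proposed = math.ceil(current_count * metric_value / target_value)
    return max(min_count, min(max_count, proposed))

# A service of 4 tasks averaging 90% CPU with a 60% target grows to 6 tasks.
print(target_tracking_desired_count(4, 90.0, 60.0))  # 6

# The same service nearly idle shrinks, but never below the minimum.
print(target_tracking_desired_count(4, 10.0, 60.0))  # 1
```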

Service Discovery

Service discovery is a key component of most distributed systems and service-oriented architectures. With service discovery, your microservice components are automatically discovered as they get created and terminated on a given infrastructure. There are several approaches that you can use to make your services discoverable.
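At its core, every approach automates the same register/deregister/lookup cycle. This toy in-memory registry illustrates the cycle; a real deployment would use DNS-based naming (for example, AWS Cloud Map), a dedicated discovery service such as Consul, or load balancer routing instead:

```python
# Minimal in-memory sketch of the register/deregister/lookup cycle that a
# service discovery system automates. Names and endpoints are hypothetical.

class ServiceRegistry:
    def __init__(self):
        self._instances = {}  # service name -> set of "host:port" endpoints

    def register(self, service: str, endpoint: str) -> None:
        """Called when a container for the service starts."""
        self._instances.setdefault(service, set()).add(endpoint)

    def deregister(self, service: str, endpoint: str) -> None:
        """Called when the container is terminated."""
        self._instances.get(service, set()).discard(endpoint)

    def lookup(self, service: str) -> list:
        """Return the currently registered endpoints for a service."""
        return sorted(self._instances.get(service, set()))

registry = ServiceRegistry()
registry.register("orders", "10.0.1.5:8080")
registry.register("orders", "10.0.1.6:8080")
registry.deregister("orders", "10.0.1.5:8080")   # container terminated
print(registry.lookup("orders"))                 # ['10.0.1.6:8080']
```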

Authorization and Secrets Management

Managing secrets, such as database credentials for an application, has always been a challenging issue. The Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks post focuses on how to integrate the IAM roles for tasks functionality of Amazon ECS with the AWS Systems Manager parameter store. Parameter store provides a centralized store to manage your configuration data, whether it's plaintext data such as database strings or secrets such as passwords, encrypted through AWS Key Management Service.
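A sketch of the read side of that pattern follows. The parameter name is a hypothetical placeholder, and the actual AWS call requires the task's IAM role to allow `ssm:GetParameter` (plus `kms:Decrypt` for SecureString parameters); only the response-parsing helper runs here:

```python
# Sketch of reading a database password from the Systems Manager Parameter
# Store. The parameter name is hypothetical.

def extract_value(response: dict) -> str:
    """Pull the decrypted value out of a GetParameter response."""
    return response["Parameter"]["Value"]

def fetch_db_password(name: str = "/myapp/prod/db-password") -> str:
    """Needs AWS credentials and IAM permissions; not executed in this sketch."""
    import boto3
    ssm = boto3.client("ssm")
    # WithDecryption asks KMS to decrypt a SecureString parameter.
    return extract_value(ssm.get_parameter(Name=name, WithDecryption=True))

# Shape of the response the helper parses:
sample = {"Parameter": {"Name": "/myapp/prod/db-password", "Value": "s3cr3t"}}
print(extract_value(sample))  # s3cr3t
```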


Logging

You can configure your container instances to send log information to CloudWatch Logs. This enables you to view different logs from your container instances in one convenient location. For more information about getting started using CloudWatch Logs on your container instances that were launched with the Amazon ECS-optimized AMI, see Using CloudWatch Logs with Container Instances.

You can configure the containers in your tasks to send log information to CloudWatch Logs. This enables you to view different logs from your containers in one convenient location, and it prevents your container logs from taking up disk space on your container instances. For more information about getting started using the log driver in your task definitions, see Using the awslogs Log Driver.
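In a task definition, the driver is enabled per container with a logConfiguration block. A minimal example might look like the following, where the log group name, region, and stream prefix are illustrative placeholders:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/my-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "web"
    }
  }
}
```

The stream prefix makes it easy to tell which container and task a given log stream came from when many tasks write to the same log group.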

Continuous Integration and Continuous Deployment

Continuous integration and continuous deployment (CI/CD) is a common process for microservice architectures that are based on Docker containers. You can create a pipeline that takes the following actions:

  • Monitors changes to a source code repository

  • Builds a new Docker image from that source

  • Pushes the image to an image repository such as Amazon ECR or Docker Hub

  • Updates your Amazon ECS services to use the new image in your application
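The build, push, and update steps above can be sketched as shell commands. Everything here is a dry run (each command is echoed rather than executed), and all of the names (the ECR repository URI, cluster, and service) are hypothetical placeholders:

```shell
#!/bin/sh
# Dry-run sketch of the build/push/update pipeline steps (watching the
# repository is normally handled by a CI service such as Jenkins or AWS
# CodePipeline). Every name below is a hypothetical placeholder; remove
# the leading 'echo' from a command to run it for real.
set -eu

REGION="us-east-1"
REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app"   # ECR repository URI
TAG="$(git rev-parse --short HEAD 2>/dev/null || date +%s)"  # version the image

# Build a new Docker image from the checked-out source
echo docker build -t "$REPO:$TAG" .

# Authenticate to Amazon ECR, then push the versioned image
echo "aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $REPO"
echo docker push "$REPO:$TAG"

# Register a task definition revision pointing at the new tag (not shown),
# then roll the service onto it
echo aws ecs update-service --cluster my-cluster --service my-app --force-new-deployment
```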


Batch Jobs

Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS task. If you are interested in running batch job workloads, consider the following resources:

  • AWS Batch: For fully managed batch processing at any scale, you should consider using AWS Batch. AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (for example, CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. For more information, see the AWS Batch product detail pages.

  • Amazon ECS Reference Architecture: Batch Processing: This reference architecture illustrates how to use AWS CloudFormation, Amazon S3, Amazon SQS, and CloudWatch alarms to handle batch processing on Amazon ECS.
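The batch reference architecture follows a common poll/process/delete pattern: workers pull jobs from a queue and delete each message only after it has been processed, so a failed worker leaves the job visible for another to pick up. This self-contained sketch uses an in-memory queue in place of SQS (a real worker would use boto3's receive_message/delete_message calls):

```python
# Sketch of the poll/process/delete loop behind a queue-driven batch worker.
# An in-memory deque stands in for SQS so the pattern is self-contained.

from collections import deque

def process(job: str) -> str:
    """Stand-in for the real batch work (e.g. transcode, resize, ETL)."""
    return job.upper()

def run_worker(queue: deque) -> list:
    """Drain the queue, removing each message only after it is processed,
    so a crash mid-job would leave the message available to another worker."""
    results = []
    while queue:
        message = queue[0]          # receive (message stays on the queue)
        results.append(process(message))
        queue.popleft()             # delete only after successful processing
    return results

jobs = deque(["resize photo-1.jpg", "resize photo-2.jpg"])
print(run_worker(jobs))  # ['RESIZE PHOTO-1.JPG', 'RESIZE PHOTO-2.JPG']
```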

Most of us are curious as to how “Project Sansar” will / won’t / might / might not work (with some already going so far as to pretty much write it off before even seeing it, which to me seems a tad premature).

In her own digging around, reader Persephone came across an interesting piece of information, and was kind enough to pass me a link to a case study from Amazon concerning the Lab’s use of Amazon’s EC2 Container Service (ECS) and Docker technology within Project Sansar.

Amazon’s ECS is a “scalable, high-performance container management service that provides cluster management and container orchestration”, which Linden Lab uses to run “the containerized web applications and back-end services of Project Sansar”.

Docker is an open-source technology that allows a developer (e.g. Linden Lab) to build, test, deploy and run distributed applications – all the code, runtime elements, system tools, libraries, etc. – within a “container”, a method of operating system virtualisation. The upshot is that it allows an application to be presented as a standardised package for rapid and consistent deployment, regardless of the environment in which it is to be used.

Precisely what “back-end” services for “Project Sansar” are being deployed in this manner isn’t clear; I’m certainly no technical expert and so am open to correction / other ideas.

However, we do know that a key element of “Sansar” is the ability for customers to build and deploy their own gateways (e.g. websites / web portals) to draw their own audiences into the experiences. So, is the use of ECS a means of achieving this? Presenting customers with a packaged environment in which they can build and deploy their own “Sansar” gateways? Or might it be the mechanism the Lab are looking to use to handle the management and scaling of support systems such as the chat, asset and other services – many of which do appear to be of monolithic design with Second Life, and which sometimes don’t scale particularly well?

Could it be that the mechanism might actually be for more than just “back-end” services – such as the actual packaging and presentation of “Sansar” experiences themselves? We know that the Lab are wrestling with the issue of optimising “Sansar” experiences and their content so that they present a performant experience across a range of client platforms. We also know the Lab intend to provide a means by which experiences can be rapidly deployed when needed (e.g. the WordPress / YouTube analogy of build and then push a button to deploy) or rapidly scaled via instancing to meet the demands of large audience numbers.

Both of these requirements would seem – to my untutored eyes at least – to fit with the model being presented, although both would tend to suggest the use of ECS beyond the support of “back-end” services.

As it is, we do know the Lab already use Amazon’s services for presenting some of their SL-related services – so extending an existing relationship with the company for the benefit of “Project Sansar” would appear to make sense. At the very least, the case study offers the potential for further “Sansar” questions to be asked at the next Lab Chat.

Published by Inara Pey

Eclectic virtual world blogger with a focus on Second Life, VR, virtual environments and technology.

