Last Tuesday I went to the AWS Builders Day here in Manchester at the Victoria Warehouse in Salford Quays. Like most of the country, we woke up to snow on the ground, which is pretty rare here in the UK. Having lived in Cleveland, Ohio in the US for 15 years, I'm no stranger to three feet of snow at this time of year, so the inch or two we got wasn't putting me off. On the plus side, there weren't many people travelling on the Metrolink tram; on the minus side, the trams were only running as far as MediaCity, which meant a 20-minute walk to the Victoria Warehouse. Despite the weather, I got there in plenty of time and was met with free coffee and pastries.

There were three tracks: Serverless, Containers and Artificial Intelligence. I thought I would spend most of the day in the Containers track and maybe take a peek at the Serverless track later. The way we've been using AWS over the last two and a half years has constantly evolved, from batch scripts with EC2, to ECS with CloudFormation, and then to Kubernetes, and during that time the offerings from AWS and others have kept changing too. While I am familiar with parts of AWS, my involvement is somewhat secondary and I still have plenty to learn, so the Containers track seemed like a good way to fill those gaps and learn some new things.

Since this is the first time I've blogged a conference, and I usually blog my thoughts and opinions, I should probably make clear that the sections on the individual talks contain (mostly) material from the talks and are representative of what the speakers said. The only place I've added my own opinion is in the summary at the end.

Containers: State of the Union

Kick-off was at 10:00 am with Abby Fuller's state-of-the-union talk, which wasn't so much a technical talk as an introduction to containers. She talked about how people started running containers on EC2, then on ECS, then running Kubernetes themselves on EC2, and how the complexities of doing so led to new problems that in turn led to the creation of EKS, AWS' managed Kubernetes platform. This touched on a theme that was prevalent throughout the day: developers just want their containers run for them, which led to Fargate providing deployments with less effort and leaving more time for development.
In terms of timing, ECS and Fargate for ECS are available now, Amazon EKS is in preview, and Fargate for EKS is coming in 2018. Abby finished with a comparison of the different platforms and an overview of the presentations being given for the rest of the day.

Deep Dive on Amazon ECS

The ECS deep dive was presented by Ric Harvey, who I think was a last-minute draft, as Manchester was the only Builders Day he was attending and he admittedly wasn't familiar with all the slides in his presentations. In spite of this, he did a great job. Ric started with a quick introduction to what ECS is about: JSON task definitions are passed in and deployed through the scheduling process, which determines where each container runs. Ric talked about how AWS is responsible for operations of the cloud, while AWS users are responsible for operations in the cloud, such as security, scaling, patching and monitoring of their own workloads. This was followed by a slide showing which AWS building blocks are used to perform those operations in the cloud. This is a manual process with plain ECS, but Fargate automates a lot of this management.
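As a concrete illustration (mine, not from the talk, with placeholder names and values throughout), registering a minimal task definition with boto3, the AWS SDK for Python, looks roughly like this:

```python
import boto3

ecs = boto3.client("ecs")

# Sketch: register a task definition for the ECS scheduler to place.
# The scheduler then decides which container instance runs the task.
ecs.register_task_definition(
    family="demo-web",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "cpu": 128,     # CPU units; 1024 = one vCPU
            "memory": 256,  # hard memory limit in MiB
            "portMappings": [{"containerPort": 80}],
            "essential": True,
        }
    ],
)
```
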
When talking about deployment, Ric started with the management console, and how it was fairly manual and unrepeatable, and so needed to be automated. Next, he talked about using plain shell scripting to implement the deployment, and the problems with that: no rollback in case of an error and no good way to handle asynchronous operations. He then moved on to CloudFormation and using infrastructure as code to create repeatable deployments. To be honest, this is the kind of path we've taken over the years as we've progressed with AWS.
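To make the infrastructure-as-code idea concrete, here is a minimal sketch (my own; the tiny template and stack name are illustrative) of driving CloudFormation from boto3. Unlike a hand-rolled script, a failed stack rolls back on its own:

```python
import boto3

# Sketch: a tiny CloudFormation template that creates an ECS cluster.
# A real template would also define task definitions, services, etc.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: demo-cluster
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-ecs-stack", TemplateBody=TEMPLATE)
# CloudFormation handles the asynchronous rollout for us.
cfn.get_waiter("stack_create_complete").wait(StackName="demo-ecs-stack")
```
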
Ric then gave a demonstration of an ECS deployment that mostly worked, which was impressive given that I think he said he had thrown it together on the train.
Overall, this presentation covered a lot of familiar topics, though some of the networking aspects went over my head. Considering Ric was a fairly recent hire at Amazon and was put on the spot at the conference, he did a great job.

Amazon Elastic Container Service for Kubernetes (Amazon EKS)

Presented by Ric Harvey, this was an introduction to the managed Kubernetes service from AWS. A CNCF survey claims 63% of Kubernetes workloads run on AWS, albeit painfully, which identified the need to run Kubernetes on AWS more easily. While there are some third-party solutions for running Kubernetes on AWS, AWS saw the need to provide its own.

Ric introduced the four basic tenets of EKS:

  • A production-grade platform for enterprises
  • A native Kubernetes experience, not an AWS derivative
  • Seamless integration with additional AWS services
  • Contributions back to the upstream Kubernetes project

After a whirlwind introduction to EKS, he described how deployments will use standard Kubernetes artefacts to deploy onto EKS, with EKS keeping out of the way as much as possible. He went into some detail about how the Kubernetes deployment, including the master nodes, can be spread across multiple availability zones. You can also pin your cluster to a specific Kubernetes version in case you have compatibility issues with the latest one, and AWS will support several recent (but not all) versions. It sounds like there will be a limited set of managed Kubernetes add-ons initially, with the possibility of more being bundled by default. After an explanation of the networking intricacies of Kubernetes on AWS (which mostly went over my head), he covered how IAM works with Kubernetes pods for AWS authentication. Finally, Ric talked about EKS with Fargate, where deployments can be driven by a pod definition with less configuration needed.
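EKS is still in preview, so the final API may differ, but as a hypothetical sketch of version pinning, here is roughly how creating a pinned cluster might look through boto3, the AWS SDK for Python (the role ARN, subnets and security group are placeholders):

```python
import boto3

eks = boto3.client("eks")

# Sketch: pin the cluster to a specific Kubernetes version rather than
# silently tracking the latest release. All ARNs and IDs are placeholders.
eks.create_cluster(
    name="demo-cluster",
    version="1.10",  # stay on a known-good Kubernetes version
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-cccc3333"],
    },
)
```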

Deep Dive into AWS Fargate

Fargate is the latest container technology from AWS, with the goal of achieving the same results with less effort. I know it's one of the talks we were most looking forward to. Abby started off with a quick introduction to Fargate and the problems it seeks to solve: no more worrying about scaling, underlying infrastructure or cluster resources. You just specify a task definition or pod (when available in Fargate for EKS) with some resource limits and you should be good to go. She discussed how ECS solved some of these problems but still wasn't completely hands-off. She effectively summed up the need for Fargate with a quote:

“When someone asks you for a sandwich, they aren’t asking you to put them in charge of a global sandwich logistic chain. They just want a sandwich”

Fargate lets you use the same task definition, and you can have Fargate and EC2 tasks in a hybrid cluster and switch back and forth between the Fargate and EC2 launch types. This is useful for cases where you start with Fargate but later need some of the additional features of EC2. The workflow for deploying and running a container instance is simpler, since Fargate handles more of it for you. There followed a brief comparison of whether Fargate is the right option for you, and it came down to the trade-offs of convenience over control, since there are limits on what you can do with a container on Fargate.
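As an illustrative sketch (placeholder names throughout), switching launch types is just a parameter on run_task, provided the task definition is Fargate-compatible (awsvpc network mode with task-level cpu/memory):

```python
import boto3

ecs = boto3.client("ecs")

# Sketch: run the same Fargate-compatible task definition under either
# launch type. Cluster, subnet and security-group IDs are placeholders.
network = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-aaaa1111"],
        "securityGroups": ["sg-cccc3333"],
    }
}

for launch_type in ("FARGATE", "EC2"):
    ecs.run_task(
        cluster="demo-cluster",
        taskDefinition="demo-app",
        launchType=launch_type,
        networkConfiguration=network,
    )
```
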
Abby then talked about computing resource allocation in Fargate: task-level resources such as memory and CPU can be divided amongst the different containers in the task so one container doesn't hog everything. Like EKS, you can pin the Fargate platform version should your deployments depend on a specific one.
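A sketch of how that looks when registering a Fargate task definition (all names and values are illustrative): cpu and memory are declared for the task as a whole, and each container gets its own slice:

```python
import boto3

ecs = boto3.client("ecs")

# Sketch: task-level cpu/memory with per-container limits so one
# container can't starve the other. Values are illustrative.
ecs.register_task_definition(
    family="demo-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",  # required for Fargate
    cpu="512",             # task level: 0.5 vCPU
    memory="1024",         # task level: 1 GiB
    containerDefinitions=[
        {"name": "web", "image": "nginx:latest", "cpu": 384, "memory": 768},
        {"name": "sidecar", "image": "busybox:latest", "cpu": 128, "memory": 256},
    ],
)
```
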
We then headed into networking with a brief comparison of local vs. external networking and how Fargate is limited to AWSVPC networking. With AWSVPC, the task is allocated an ENI that is shared amongst its containers, which can use the local loopback interface to communicate with other containers in the same task. Abby then flew through several slides on networking patterns that I really need to go back and re-read a few times.
She then talked about the cluster, application and housekeeping types of permissions, which determine what actions can be performed on the cluster, by the services, and by AWS to execute the task, respectively.
One key concept: since Fargate manages resources for you and you might have multiple deployments, everything is isolated at the cluster level, so you might have a cluster for each environment you need, which is a common pattern people already use.
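A tiny sketch of that pattern (cluster names are illustrative):

```python
import boto3

ecs = boto3.client("ecs")

# Sketch: one cluster per environment, since Fargate isolates
# everything at the cluster level.
for env in ("dev", "staging", "production"):
    ecs.create_cluster(clusterName=f"myapp-{env}")
```
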
Abby talked about some existing CLIs for Fargate/ECS, some of which are official and supported while others aren't.

Advanced Container Management and Scheduling

This was Abby again, discussing some of the ins and outs of advanced container management and scheduling. It started with a discussion of what scheduling is, why you need it, and how managing a few containers is easy but managing many is hard. She covered the various types of schedulers that can be used to schedule your containers. The placement engine lets you describe how tasks should be placed onto instances using the following strategies (there's a sketch after the list):

  • Binpacking – to fit the most tasks onto existing instances
  • Spread – to allocate tasks evenly across available instances
  • Affinity – to place related tasks together on the same instances
  • Distinct – to place each task on a different container instance
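
Here's the promised sketch (my own, with placeholder names) of applying those strategies on an EC2-backed cluster via boto3:

```python
import boto3

ecs = boto3.client("ecs")

# Sketch: binpack on memory fills existing instances before new ones.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="demo-web",
    placementStrategy=[{"type": "binpack", "field": "memory"}],
)

# Sketch: spread tasks across availability zones, and additionally
# insist that each task lands on a distinct container instance.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="demo-web",
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"}
    ],
    placementConstraints=[{"type": "distinctInstance"}],
)
```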

It also allows you to specify criteria that determine where a task is placed, using attributes such as EC2 instance type or availability zone, and these criteria can be used in conjunction with the strategies above. She talked about how load balancers fit in with scheduling, given that they distribute requests across the deployments. Abby then talked about Docker image size, the benefits of smaller images and how to reduce image size when constructing your Docker images.
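For example (a sketch with illustrative values), a memberOf constraint uses the cluster query language to combine attribute criteria with a placement strategy:

```python
import boto3

ecs = boto3.client("ecs")

# Sketch: constrain placement by instance type and availability zone,
# then binpack on CPU amongst the matching instances.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="demo-web",
    placementConstraints=[
        {
            "type": "memberOf",
            "expression": "attribute:ecs.instance-type =~ t2.* and "
                          "attribute:ecs.availability-zone == eu-west-1a",
        }
    ],
    placementStrategy=[{"type": "binpack", "field": "cpu"}],
)
```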

Abby explained what happens when we start a task and how we can use the cluster query language to query the instances. There were several command-line examples of running tasks with different placement constraints and strategies, with additional examples of doing it through the management console.
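The same query language also works for querying the cluster itself; a small sketch (values illustrative):

```python
import boto3

ecs = boto3.client("ecs")

# Sketch: list only the container instances matching an attribute filter.
response = ecs.list_container_instances(
    cluster="demo-cluster",
    filter="attribute:ecs.instance-type == t2.micro",
)
print(response["containerInstanceArns"])
```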

Abby painted this talk as one of the drier ones, but I actually found it quite interesting and more accessible than some of the others, most likely because it covered higher-level concepts without dipping down into the low-level networking aspects too much. As this was the last talk of the day, we finished a little early and drinks were available back in the main areas.

Summary

Overall, it was an enjoyable and informative day. While some parts were newer to me than others, I was surprised by just how much of it was already familiar.

I don't consider myself an expert in the market of cloud computing platforms, so I can't fully compare them, but AWS seems to do a great job of providing more and more services that you can use (and pay for) on demand. It really makes the case for buying this kind of infrastructure, especially for a lone developer or a small team without dedicated infrastructure support. The list of features added in 2017 alone would be difficult to implement on your own without significant investment, and of course, those features all get maintained for you going forward. The fact that you can start small but easily grow into services that are fully replicated around the world, with multiple availability zones, hot/cold standbys and backups, is amazing.

Having such a feature-rich platform is no good unless you can get started with it, so one key theme was the desire to make deployments simpler. As a developer, being able to just write some code and deploy it with minimal configuration appeals to me: more time writing code, less time worrying about deployment issues. Obviously, the usual caveats apply in that you have fewer options for customisation, and at some point you are going to need all the customisation that ECS offers. However, it can certainly be enough for early-stage production, or for ongoing development deployments where you don't want to invest much deployment time in a throwaway experimental project.

Overall, a good conference, and it reminds me that just as a developer has endless possibilities for what code they can write, AWS can make us feel like kids in a candy store with all the services and infrastructure on which to build.