Gruntwork Newsletter, November 2019

Once a month, we send out a newsletter to all Gruntwork customers that describes all the updates we’ve made in the last month, news in the DevOps industry, and important security updates. Note that many of the links below go to private repos in the Gruntwork Infrastructure as Code Library and Reference Architecture that are only accessible to customers.

Hello Grunts,

In the last month, we launched Gruntwork Compliance for the CIS AWS Foundations Benchmark, added cluster autoscaling support to our EKS modules, open sourced two new modules for setting up CI/CD with Google Cloud Build, used Terraform 0.12 features to consolidate and simplify the code for module-ecs, package-lambda, and module-load-balancer, and announced the dates for our winter holiday break. In other news, Helm 3 has been released, EKS now supports managed worker nodes, the ALB supports weighted routing, and Terraform 0.11 provider support is being deprecated.

As always, if you have any questions or need help, email us at support@gruntwork.io!

Gruntwork Updates

Announcing Gruntwork Compliance for the CIS AWS Foundations Benchmark

Motivation: The AWS Foundations Benchmark is an objective, consensus-driven guideline for establishing secure infrastructure on AWS from the Center for Internet Security. Several of our customers were interested in improving their security posture by achieving compliance with this benchmark, but meeting all the recommendations—from IAM to VPCs and CloudTrail to AWS Config—from scratch proved to be challenging.

Solution: We’re excited to announce Gruntwork Compliance for the CIS AWS Foundations Benchmark! We’ve made a number of changes that will make it possible for you to achieve compliance in days, not months:

  1. We’ve extended the Infrastructure as Code Library to include compliance with the AWS Foundations Benchmark as a first-class citizen. These compliance modules make it straightforward to implement the recommendations in the AWS Foundations Benchmark using infrastructure as code techniques for a repeatable compliance experience.
  2. We’ve published the How to achieve compliance with the CIS AWS Foundations Benchmark deployment guide, which will walk you through the core concepts of the Benchmark, design guidelines for implementing the recommendations as code, and how to use the compliance modules.

What to do about it: Check out our announcement blog post and the compliance product page for all the details. If you’re interested in becoming a Gruntwork Compliance customer, contact us!

Cluster Autoscaler Support for EKS

Motivation: While Kubernetes natively supports autoscaling of the Pods that are deployed on a cluster, additional work was needed to set up autoscaling of the worker nodes. This meant that when your EKS cluster ran out of capacity and could no longer schedule Pods, the only solution was to manually scale out more worker nodes.

Solution: We’re happy to announce that we have added support for setting up cluster-autoscaler for use with EKS in terraform-aws-eks! The autoscaler monitors Pod scheduling activity on your EKS cluster and automatically scales up more worker nodes if the cluster needs more capacity to schedule Pods. It will also automatically scale down worker nodes that are underutilized!

Other EKS updates:

What to do about it: Update your EKS worker module calls to v0.9.7 and deploy the new eks-k8s-cluster-autoscaler module to try it out!
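To give a sense of what that looks like, here is a minimal, hypothetical sketch of a call to the new module. The source path follows our usual convention, but the input names (eks_cluster_name, aws_region) are assumptions for illustration only, so check the eks-k8s-cluster-autoscaler docs in terraform-aws-eks for the real interface:

module "eks_cluster_autoscaler" {
  # Hypothetical example only; see the module docs for the actual inputs
  source = "git::git@github.com:gruntwork-io/terraform-aws-eks.git//modules/eks-k8s-cluster-autoscaler?ref=v0.9.7"

  # Assumed input: the EKS cluster whose worker nodes should be autoscaled
  eks_cluster_name = module.eks_cluster.eks_cluster_name

  # Assumed input: the AWS region the cluster is deployed in
  aws_region = "us-east-1"
}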

Consolidated modules for ecs-service

Motivation: Due to limitations with Terraform that prevented the use of dynamic inline blocks, we had to break up the ECS service code into 4 modules: one for use with either a CLB or no load balancer, one for use with an ALB or NLB, one for use with Service Discovery, and one for use with Fargate. If you needed a combination of these features, there was no module that supported it. Moreover, maintenance was more difficult, and some of the modules started to drift apart in terms of what features they offered.

Solution: With Terraform 0.12, we now have the ability to dynamically set inline blocks on resources. As a result, we were able to consolidate all the separate ecs-service flavors into a single ecs-service module that can be configured for any of the different scenarios. Starting with v0.16.0, module-ecs ships only one ecs-service module.
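To illustrate the Terraform 0.12 mechanism that makes this possible (this is a generic sketch, not the actual internals of the ecs-service module), a dynamic block lets a single resource render zero or more inline blocks based on its inputs:

# Zero or more load balancer attachments; an empty list means no load balancer at all
variable "load_balancers" {
  type = list(object({
    target_group_arn = string
    container_name   = string
    container_port   = number
  }))
  default = []
}

resource "aws_ecs_service" "example" {
  name            = "example"
  cluster         = "example-cluster"   # placeholder cluster name
  task_definition = "example-task:1"    # placeholder task definition
  desired_count   = 2

  # Terraform 0.12 dynamic blocks: render one load_balancer block per entry in the
  # list, or none at all, which is what lets one module cover all the old flavors
  dynamic "load_balancer" {
    for_each = var.load_balancers
    content {
      target_group_arn = load_balancer.value.target_group_arn
      container_name   = load_balancer.value.container_name
      container_port   = load_balancer.value.container_port
    }
  }
}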

A nice side effect of this is that all 4 use cases now have feature parity:

However, the consequence is that this is a massive backwards-incompatible change. Due to differences in the naming of resources across the original modules (especially IAM resources), there is no way to avoid redeploying the ECS service when upgrading to the consolidated version.

Refer to the migration guide in the release notes for more information on how to upgrade.

What to do about it: Follow the migration guide in the release notes and update to module-ecs v0.16.1 to try out the consolidated ecs-service module!

Consolidated modules for package-lambda and module-load-balancer

Motivation: Just as with module-ecs, the lack of support for dynamic inline blocks in older Terraform versions led to a lot of code duplication in our package-lambda and module-load-balancer modules.

Solution: We’ve updated the modules to take advantage of Terraform 0.12’s support for dynamic inline blocks!

What to do about it: Update to the new module versions above and let us know how they work for you!

New Open Source Modules for CI/CD on GCP using Google Cloud Build

Motivation: In May, in partnership with Google, we open-sourced a collection of reusable modules for Google Cloud Platform (GCP), including modules for Kubernetes (GKE), VPCs, Cloud SQL, Cloud Load Balancer, and more. These modules make it much easier to deploy and manage infrastructure on GCP, but they lacked a way to set up an automated CI/CD pipeline.

Solution: We’ve open-sourced two new repos that show how you can set up an automated CI/CD pipeline using Google Cloud Build, a Google Kubernetes Engine (GKE) cluster, and either a Google Cloud Source Repository or a GitHub repo.

What to do about it: Try out the two repos above and let us know how they work for you!

Winter break, 2019

Motivation: At Gruntwork, we are a human-friendly company, and we believe employees should be able to take time off to spend time with their friends and families, away from work.

Solution: The entire Gruntwork team will be on vacation from December 23rd through January 3rd. During this time, there may not be anyone around to respond to support inquiries, so please plan accordingly.

What to do about it: We hope you’re able to relax and enjoy some time off as well. Happy holidays!

Open source updates

Other updates

DevOps News

Helm 3 has been released

What happened: A major new release of Helm, version 3.0.0, is now generally available.

Why it matters: The major new features in Helm 3 are:

What to do about it: We will be updating the Infrastructure as Code Library and Reference Architecture with support for Helm 3 as soon as we can. We will announce when this is ready.

EKS now supports managed worker nodes

What happened: AWS has added support for provisioning managed worker nodes for EKS.

Why it matters: Originally, EKS offered a fully managed control plane, but you had to run and manage the worker nodes yourself. Now, AWS can run the worker nodes for you too, handling node updates and terminations and gracefully draining nodes to keep your applications available.

What to do about it: We will be updating terraform-aws-eks with support for managed worker nodes as soon as we can. We will announce when this is ready.
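If you want to experiment with the underlying API before our module support lands, recent versions of the Terraform AWS provider expose an aws_eks_node_group resource. The following is a hedged sketch with placeholder values (the cluster name, IAM role, and subnet IDs are all assumptions), so adapt it to your own setup and confirm your provider version supports the resource:

resource "aws_eks_node_group" "example" {
  cluster_name    = "my-eks-cluster"                    # placeholder: name of an existing EKS cluster
  node_group_name = "example-managed-workers"
  node_role_arn   = aws_iam_role.eks_node_role.arn      # placeholder: IAM role for the worker nodes
  subnet_ids      = ["subnet-abc123", "subnet-def456"]  # placeholder subnet IDs

  # AWS provisions and manages the EC2 instances for this group within these bounds
  scaling_config {
    min_size     = 1
    desired_size = 2
    max_size     = 4
  }
}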

ALBs now support weighted target groups

What happened: Amazon’s Application Load Balancers (ALBs) now support weighted target groups. For example, if you have two target groups, you could assign one a weight of 8 and the other a weight of 2, and the load balancer will route 80% of the traffic to the first target group and 20% to the other.

Why it matters: This gives more fine-grained control over ALB routing, which can be used to implement features such as blue/green deployments and canary deployments. For example, to do a blue/green deployment, you would start with your old code (e.g., running in an ASG) in a “blue” target group with weight 1. You would then deploy your new code (e.g., into a new ASG) and register it with a new “green” target group with weight 0. Once the green target group is passing all health checks, you can switch its weight to 1 and the blue target group’s weight to 0, and all traffic will switch to your new code. You can then undeploy the old code.

What to do about it: Weighted target groups are not yet supported in Terraform; follow this issue for progress. We will be looking into this feature to add support for blue/green deployment to our ASG, ECS, and EKS code. We will announce when this is ready.

EC2 Instance Metadata Service v2

What happened: AWS has released version 2 of the EC2 Instance Metadata Service.

Why it matters: The new version of the EC2 Instance Metadata Service protects every request with session authentication. This adds defense in depth against open firewalls, misconfigured reverse proxies, and SSRF vulnerabilities.

What to do about it: Check out the announcement blog post for all the details. We’ll be looking into this functionality to see if/when we should support it in our modules.
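As a preview of what enforcing IMDSv2 can look like in Terraform, newer versions of the AWS provider expose a metadata_options block on aws_instance. This is a hedged sketch with a placeholder AMI ID; whether the block is available depends on your provider version:

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"

  # Require session tokens (IMDSv2) for all instance metadata requests
  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
  }
}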

Terraform 0.11 provider support is being deprecated

What happened: HashiCorp has announced that they will be deprecating Terraform 0.11 support for Terraform providers.

Why it matters: This change has two effects:

  1. Provider bugs that are reproducible in Terraform 0.11 but not in Terraform 0.12 will be closed and left unfixed. So if you want fixes for those issues, you’ll have to upgrade to 0.12.
  2. You’ll only be able to download the newest provider versions (via terraform init) with Terraform 0.12. So if you want any of the new features in your providers (e.g., support for a new AWS service announced at the upcoming re:Invent), you’ll have to upgrade to 0.12.

What to do about it: If you haven’t already, it’s time to upgrade to Terraform 0.12. Check out our migration guide for instructions.
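Once you’ve upgraded, it’s a good practice to pin a minimum Terraform version in your code so that older CLI versions fail fast instead of producing confusing errors. For example:

terraform {
  # Fail fast if someone runs this code with a pre-0.12 Terraform binary
  required_version = ">= 0.12"
}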

Issue with destroying VPC using Terraform

What happened: On September 3rd, AWS announced improvements to the way AWS Lambda works with VPCs. These improvements introduced what is called a Hyperplane ENI, which acts as a single entrypoint for multiple Lambda execution environments into your VPC. However, this change was not compatible with the way Terraform queries for ENIs when destroying network resources for Lambda, causing Terraform operations to fail with DependencyViolation errors. You can read more about this in the relevant GitHub issue for the AWS provider.

What to do about it: This has been fixed in version 2.31.0 of the Terraform AWS provider, so we recommend upgrading your AWS provider version. You can use the version property on provider blocks to force Terraform to use version 2.31.0 or above:

provider "aws" {
version = "~> 2.31"
}