Once a month, we send out a newsletter to all Gruntwork customers that describes all the updates we’ve made in the last month, news in the DevOps industry, and important security updates. Note that many of the links below go to private repos in the Gruntwork Infrastructure as Code Library and Reference Architecture that are only accessible to customers.
Hello Grunts,
In the last month, we updated our AWS Reference Architecture to work with Terraform 0.12 and Terragrunt 0.19, began building a GCP Reference Architecture, released a set of open source modules for running the TICK stack on GCP, made some major improvements to Terragrunt (including the ability to fetch outputs from dependencies!), and lots of other fixes and improvements. Read on for the full details.
As always, if you have any questions or need help, email us at support@gruntwork.io!
Motivation: Last month we announced that all our modules had been upgraded to be compatible with Terraform 0.12. However, we had not yet updated the Gruntwork Reference Architecture, which uses all of our modules under the hood, to be compatible with Terraform 0.12.
Solution: We’ve updated the Reference Architecture to be compatible with Terraform 0.12!
What to do about it: The Reference Architecture example repos (the Acme repos) have now been updated to Terraform 0.12 and Terragrunt 0.19 syntax.
Normally we link you to a diff with the changes that are necessary to update the Reference Architecture code. However, since the change set for this update is huge (see this PR for an example), we instead recommend following our upgrade guide, and using the Acme repos as a reference for what it should look like in the end.
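To give a flavor of the syntax change the upgrade guide walks you through, here is a minimal sketch of the old and new Terragrunt config formats. The module source URL is illustrative, not taken from the Acme repos:

# Before (Terragrunt 0.18 and earlier): config lived in terraform.tfvars
# under a "terragrunt" block.
terragrunt = {
  include {
    path = "${find_in_parent_folders()}"
  }

  terraform {
    source = "git::git@github.com:acme/infrastructure-modules.git//networking/vpc?ref=v0.1.0"
  }
}

# After (Terragrunt 0.19+): config lives in terragrunt.hcl, in HCL2 syntax,
# where functions no longer need to be wrapped in "${...}".
include {
  path = find_in_parent_folders()
}

terraform {
  source = "git::git@github.com:acme/infrastructure-modules.git//networking/vpc?ref=v0.1.0"
}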
Motivation: We’ve continued working with the InfluxData team to support running their entire time-series platform, known as the TICK stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), in GCP.
Solution: We released the terraform-google-influx module, which is open source on GitHub under the Apache 2.0 License, and available in the Terraform Registry! This module makes it easy to spin up any TICK Stack component in Google Cloud Platform.
What to do about it: Read the release blog post, give the modules a try and let us know how they work for you!
Motivation: In May, in partnership with Google, we open sourced a collection of reusable modules for Google Cloud Platform (GCP), including modules for Kubernetes (GKE), VPCs, Cloud SQL, Cloud Load Balancer, and more. These modules make it much easier to deploy and manage infrastructure on GCP, but wiring all of them together to build your entire tech stack is still a lot of work.
Solution: We are building out an end-to-end, production-grade, secure, and developer-friendly Reference Architecture for GCP! Just as with our AWS Reference Architecture, the GCP Reference Architecture includes just about everything a typical company needs: VPCs, Kubernetes (GKE), load balancers, databases, caches, static content, CI / CD, monitoring, alerting, user and permissions management, VPN, SSH, and so on. We deploy the Reference Architecture into your GCP account and give you 100% of the code, allowing your team to immediately start building on top of a battle-tested, best-practices, fully-automated infrastructure.
What to do about it: The GCP Reference Architecture is currently in private beta. If you’re interested in getting access, Contact Us!
Motivation: Two months ago we announced Terragrunt 0.19, which introduced a new configuration syntax for the Terragrunt config file. The new config file allowed us to upgrade the underlying syntax to HCL2, which brings many language-level improvements that make it easier to extend the configuration with additional features.
This month we leveraged this new configuration syntax to address two long-standing problems: there was no way to bind a repeated expression to a variable and reuse it across a Terragrunt config, and there was no first-class way to read the outputs of other modules.
Solution: We’ve updated Terragrunt to address these two issues! This month, we introduced two new blocks in Terragrunt: locals and dependency.
locals is a new block that can be used to bind expressions to variables for reuse in your config. Consider the following terragrunt.hcl file:
remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-bucket"
    region = "us-east-1"
    key    = "${path_relative_to_include()}/terraform.tfstate"
  }
}

inputs = {
  aws_region  = "us-east-1"
  s3_endpoint = "com.amazonaws.us-east-1.s3"
}
Here, the region is a hard-coded string ("us-east-1") repeated multiple times throughout the config. When switching to a new region, we have to replace that string in all three places. With locals, we can bind the string to a temporary variable:
locals {
  region = "us-east-1"
}

remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-bucket"
    region = local.region
    key    = "${path_relative_to_include()}/terraform.tfstate"
  }
}

inputs = {
  aws_region  = local.region
  s3_endpoint = "com.amazonaws.${local.region}.s3"
}
Now you only need to update the region string in one place and the rest of the config will inherit that! You can learn more about locals in the corresponding section of the README.
dependency is a new block that can be used to read in the outputs of another Terraform module managed using Terragrunt. Consider the following folder structure:
root
├── mysql
│   └── terragrunt.hcl
└── vpc
    └── terragrunt.hcl
Most likely, you will want to deploy the database (deployed with the mysql module) into the VPC (deployed with the vpc module). This means you need to somehow get the VPC ID from the output of the vpc module. Previously, your only option was to use terraform_remote_state, which has some downsides: it is encoded in the modules themselves, making it hard to change, and it requires strict alignment of Terraform versions, to name a few.
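For contrast, here is a minimal sketch of the terraform_remote_state approach. Note how the backend details end up hard-coded inside the mysql module itself (the bucket and key values are illustrative, following the remote_state example earlier in this newsletter):

# Inside the mysql module's own Terraform code
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "my-terraform-bucket"
    region = "us-east-1"
    key    = "vpc/terraform.tfstate"
  }
}

# The VPC ID would then be read as (Terraform 0.12 syntax):
#   data.terraform_remote_state.vpc.outputs.vpc_id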
Now, you can address this using the new dependency block. In mysql/terragrunt.hcl, you can have the following config to read the vpc_id output of the vpc module:
dependency "vpc" {
config_path = "../vpc"
}
inputs = {
vpc_id = dependency.vpc.outputs.vpc_id
}
This will run terragrunt output on the vpc module and render the result as the variable dependency.vpc.outputs, which you can then reference in your inputs. You can learn more about the dependency block in the updated README.
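Note that for this to work, the Terraform module deployed in the vpc folder needs to expose the output being referenced. A minimal sketch (the aws_vpc resource name main is a hypothetical placeholder):

# outputs.tf of the module deployed by vpc/terragrunt.hcl
output "vpc_id" {
  value = aws_vpc.main.id
}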
Other Terragrunt updates:
- You can now suppress the output of run_cmd() (e.g., for sensitive commands) by passing --terragrunt-quiet as the first argument to run_cmd(). See the sketch after this list.
- There is a new hclfmt command which can be used to format your terragrunt.hcl files.
- The new locals block (covered in detail above).
- hclfmt now ignores files in the terragrunt cache.
- hclfmt now accepts an optional flag --terragrunt-check, which runs hclfmt in check mode. Check mode indicates that the files shouldn’t be updated, and the command should instead error out if any files are unformatted.
- Fixed a bug where terragrunt apply could fail as it tried to delete files that were already deleted.
- Two new settings are available in the terragrunt.hcl configuration: terraform_binary lets you specify a custom path to the Terraform binary, and terraform_version_constraint lets you modify the version constraint Terragrunt enforces when running Terraform.
- Terragrunt now uses remote_state.config.credentials to check if the GCS bucket exists; if it isn’t specified, Terragrunt assumes the GOOGLE_APPLICATION_CREDENTIALS environment variable has been set. Terragrunt also no longer throws an error if remote_state.config.bucket isn’t specified, and assumes it will be passed through the extra_arguments -backend-config variable.
- The new dependency block (covered in detail above).
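To illustrate the run_cmd() change, here is a minimal sketch of a terragrunt.hcl that reads a secret without echoing it to the console. The fetch-db-password.sh script is a hypothetical placeholder, not part of Terragrunt:

locals {
  # Passing --terragrunt-quiet as the first argument keeps the command's
  # output (a secret here) out of Terragrunt's logs.
  db_password = run_cmd("--terragrunt-quiet", "fetch-db-password.sh")
}

inputs = {
  db_password = local.db_password
}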
What to do about it: Download the latest version of terragrunt from the releases page and take it for a spin!
Terratest updates:
- New functions helm.Upgrade and helm.Rollback, which can be used to call helm upgrade and helm rollback respectively. These can be used for testing upgrade flows for your helm charts.
- You can now pass a custom TLS configuration to every http_helper method. This allows you to specify custom TLS CAs for private endpoints, or disable TLS verification for local setups. Additionally, k8s.GetNamespace and k8s.GetNamespaceE now return pointers to namespace objects. This improves your test code when querying for namespaces that do not exist. Note that this is a backwards incompatible release. Refer to the release notes for details on how to upgrade.
- The ssh module functions now support SSH to a host with a password as the auth method.
- The aws package now has helper functions and tests for working with AWS SQS FIFO queues.
Other updates from the Gruntwork Infrastructure as Code Library:
- There is a new module, setup-systemd-resolved, that can be used on Ubuntu 18.04 instead of dnsmasq for configuring systemd-resolved to forward requests for a specific domain to Consul.
- run-consul has been updated to increase the timeout of the systemd service. Since v0.7.2, Consul’s systemd service config now waits for Consul’s notification of having joined the cluster before signaling that the service is alive. This, however, may occasionally result in longer waits and intermittent timeout failures, hence the increase to a more reasonable timeout.
- install-vault is now updated to work with Ubuntu 18.04.
- You can now set a custom IAM role path and permissions boundary for the couchbase-cluster module using the new input variables instance_role_path and instance_permissions_boundary, respectively.
- The bastion-host module now allows you to preconfigure a static IP.
- Fixed a perpetual diff in s3-cloudfront caused when specifying both IAM or ACM certs and default certs.
- You can now configure Lambda@Edge function associations for the default cache behavior via the new var.default_lambda_associations list.
- Fixed the type constraint on the cors_rule input variable in s3-static-website.
- In a terraform 0.11.x compatible-only release, the s3-cloudfront module now supports the use of an Origin Group, giving you the ability to fail over automatically in the event your primary bucket is not accessible.
- Fixed a bug where the health_check_timeout variable was not used for setting the timeout of the LB target group health check.
- Fixed a bug where the availability_zones outputs of vpc-app and vpc-mgmt had an extra layer of nesting, so you ended up with a list of lists, rather than a single, flat list.
- The vpc-app-network-acls module now sets allow_access_from_mgmt_vpc to false by default. This is a more sane default because (a) it’s more secure and (b) mgmt_vpc_cidr_block is null by default, so if you left all parameters at their defaults, it didn’t actually work.
- The new vpc-flow-logs module will create a VPC flow log for a provided VPC. Flow logs can be published to CloudWatch Logs or S3.
- If the allow_connections_from_cidr_blocks argument of the rds module is empty, no security group rule will be created at all now. This makes CIDR-based rules completely optional.
- Fixed a bug where var.allow_incoming_http_from_security_group_ids was not creating the required security group rules due to a regression from upgrading module-load-balancer/alb, which required explicitly specifying the number of security group IDs being passed in.
- Updated terraform-helpers/terraform-update-variable for better terraform 0.12 and terragrunt 0.19 compatibility. See the release notes for more details.
- ssh-grunt now supports passing in multiple IAM groups to sync (by passing in --iam-group and --iam-group-sudo multiple times). When multiple groups are passed, users who are in at least one of the groups will be synced to the server. iam-groups now supports creating multiple ssh-grunt IAM groups that can be used to differentiate different groups of servers. Note that this is a backwards incompatible change: see the migration guide in the release notes for more details.
- Fixed a bug where upgrading the iam-groups module to tf12 with existing resources would get terraform into a state where you can’t apply, plan, or destroy.
What happened: The Terraform 0.12.6 release is out, which includes a long-awaited feature: you can now use for_each with resources!
Why it matters: In order to create multiple resources in Terraform (i.e., similar to a for-loop), your only option used to be count. E.g., setting count = 3 on a resource would create 3 copies of that resource. However, once you set count on a resource, that resource becomes a list of resources, and Terraform tracks the identity of each resource by its position in that list. If you deleted an item from the middle of the list, Terraform would end up shifting all the items after it back by one, effectively deleting and recreating all of those items. This made count impractical for many use cases. With for_each, Terraform maintains a map of resources instead of a list, with a unique identity for each resource, so if you delete something in the middle, it only affects that one item!
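Here is a minimal sketch of the difference (the aws_iam_user resource and var.user_names variable are illustrative):

variable "user_names" {
  type    = list(string)
  default = ["alice", "bob", "carol"]
}

# With count, identity is positional: removing "bob" shifts "carol" from
# index 2 to index 1, so Terraform destroys and recreates it.
resource "aws_iam_user" "with_count" {
  count = length(var.user_names)
  name  = var.user_names[count.index]
}

# With for_each (Terraform 0.12.6+), identity is the key: removing "bob"
# affects only aws_iam_user.with_for_each["bob"].
resource "aws_iam_user" "with_for_each" {
  for_each = toset(var.user_names)
  name     = each.value
}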
What to do about it: We’ve updated the Terraform tips & tricks: loops, if-statements, and gotchas blog post to Terraform 0.12.6 syntax, including lots of examples of how to use for_each with resources. Check out the blog post, upgrade your code to 0.12.6, and enjoy a more powerful Terraform experience!
What happened: Amazon has announced that Amazon Aurora Multi-Master is now available to all customers.
Why it matters: Amazon Aurora Multi-Master allows you to create multiple read-write instances of your Aurora database across multiple Availability Zones, so even if one instance or Availability Zone fails, you can continue accepting writes without downtime or failover.
What to do about it: Check out the announcement blog post for details.
What happened: Amazon ECS now supports attaching multiple target groups to each ECS Service.
Why it matters: Previously, you could attach only one target group to an ECS service. This meant you had to create multiple copies of the service for use cases such as serving traffic from internal and external facing load balancers or exposing multiple ports. Now, you can attach multiple target groups per ECS service, handling all of these use cases with just a single copy of your service.
What to do about it: Check out the documentation for details. Please note that we have not yet updated module-ecs to support this functionality. We will send out an update once we’ve had a chance to prioritize this work (in the meantime, PRs are very welcome!).
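For reference, here is a rough sketch of what multiple target groups look like in raw Terraform. This assumes an AWS provider version that supports multiple load_balancer blocks, and that the referenced cluster, task definition, and target groups are defined elsewhere:

resource "aws_ecs_service" "example" {
  name            = "example"
  cluster         = aws_ecs_cluster.example.id
  task_definition = aws_ecs_task_definition.example.arn
  desired_count   = 2

  # One load_balancer block per target group: e.g., an external-facing
  # ALB and an internal ALB, both routing to the same container.
  load_balancer {
    target_group_arn = aws_lb_target_group.external.arn
    container_name   = "app"
    container_port   = 8080
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.internal.arn
    container_name   = "app"
    container_port   = 8080
  }
}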
What happened: Amazon ECR now supports 10,000 repositories per region and 10,000 images per repository by default.
Why it matters: Previously, the default limit was 1,000 repositories per region and 1,000 images per repository, so it was common to hit that limit if you pushed images frequently (e.g., after every commit). The default limit is now significantly higher, and you can request further increases if needed.
What to do about it: See the announcement blog post for details.
Below is a list of critical security updates that may impact your services. We notify Gruntwork customers of these vulnerabilities as soon as we know of them via the Gruntwork Security Alerts mailing list. It is up to you to scan this list and decide which of these apply and what to do about them, but most of these are severe vulnerabilities, and we recommend patching them ASAP.