Once a month, we send out a newsletter to all Gruntwork customers that describes all the updates we’ve made in the last month, news in the DevOps industry, and important security updates. Note that many of the links below go to private repos in the Gruntwork Infrastructure as Code Library and Reference Architecture that are only accessible to customers.
Hello Grunts,
In the last month, Terraform: Up & Running, 2nd edition was published, we added a new Cloud KMS module for GCP, made a number of improvements and fixes to the AWS Reference Architecture (including lots of helpers for working with self-signed TLS certs, keystores, and truststores), updated our Terraform Crash Course to Terraform 0.12, and made many, many other fixes and improvements.
As always, if you have any questions or need help, email us at support@gruntwork.io!
Motivation: In early 2017, we released the book, Terraform: Up & Running. In the two years since, Terraform has changed considerably (4 major releases, a change to HCL2, a revamp of Terraform state, and much more), so the book was due for an update.
Solution: Terraform: Up & Running, 2nd edition, has been published and is now available in all book stores! The 2nd edition is nearly double the length of the 1st edition (~160 more pages), including two completely new chapters (Production-grade Terraform Code and How to Test Terraform Code), and major changes to all the original chapters and code examples (see this blog post to learn about all the changes).
What to do about it: Get a copy of the book now! And if you want to learn how to adopt an Infrastructure as Code tool like Terraform at your company—including how to convince your boss—check out this blog post.
Motivation: Many of our GCP customers, especially the ones in regulated industries, such as financial services and healthcare, have been asking for a secure way to manage encryption keys.
Solution: We’re happy to announce terraform-google-security, a repository of modules for implementing GCP security best practices. The initial release introduces [cloud-kms](https://github.com/gruntwork-io/terraform-google-security/tree/master/modules/cloud-kms), a Terraform module that enables you to use Google Cloud KMS to create and manage symmetric and asymmetric encryption keys and signing keys, as well as IAM role bindings to control access to the keys.
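To give you a feel for the workflow, here’s a minimal sketch of what consuming the module from Terraform might look like. The input variable names (`keyring`, `keys`, `encrypter_decrypters`) and the `ref` version are illustrative assumptions rather than the module’s documented interface, so check the module’s README for the real inputs:

```hcl
# Illustrative sketch only: the input names and ref below are assumptions,
# not the cloud-kms module's documented interface.
module "kms" {
  source = "git::git@github.com:gruntwork-io/terraform-google-security.git//modules/cloud-kms?ref=v0.0.1"

  project  = "my-gcp-project"
  location = "global"

  # Create a key ring with a single symmetric encryption key
  keyring = "example-keyring"
  keys    = ["example-key"]

  # IAM role bindings controlling who can encrypt/decrypt with the key
  encrypter_decrypters = ["serviceAccount:app@my-gcp-project.iam.gserviceaccount.com"]
}
```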
What to do about it: Email support@gruntwork.io to get access and take the module for a spin!
Motivation: This month, we took a look at a few issues customers have brought up with the Gruntwork AWS Reference Architecture:
- A bug in the `vpc-app` module where the number of route table entries for the peering setup was incorrectly based on the App VPC instead of the Mgmt VPC.
- Hard-coded version numbers in the `gruntwork-install` calls in the template. This made it hard to know what versions of various scripts were being used, and the hard-coded version number was duplicated in places where multiple scripts from a single module were being installed.

Solution: We’ve made improvements to the Reference Architecture to address each of these challenges:

- `vpc-app` has been fixed to properly reference the `mgmt` VPC when calculating the route table counts. See this commit in the Acme Reference Architecture example for the fix (or this commit for the single account flavor).

What to do about it: Refer to the links above to find out how to add all of this to your infrastructure, and take the updates for a spin!
Motivation: Starting with 1.11, Go introduced official support for versioned modules in the form of the new Go modules system. This support was experimental in Go 1.11 and 1.12 and required extra flags to enable. As of Go 1.13, modules are officially supported and are used automatically when certain conditions are met (most notably the existence of a `go.mod` file). Many projects have switched to managing dependencies with Go modules, which made Terratest difficult to build with the older, community-driven `dep` tool, as transitive dependencies were not pulled in properly.
Solution: Starting with v0.21.0, Terratest has officially switched from `dep` to Go modules. This improves the stability of transitive dependency management, as all the upstream dependencies rely on Go modules as well. We have also updated `module-ci` with official support for Go modules in v0.16.0.

In addition to better dependency management, Go modules allow you to manage your source code outside of the `GOPATH`. This means you are now free to put your Terraform modules anywhere in your filesystem and still be able to use Terratest to test them!
Other Terratest Updates:
- New functions `aws.IsPublicSubnet` and `aws.IsPublicSubnetE`.
- Updated `IsTestDataPresent` to use `os.Stat` to check the existence of the path. This release also introduces `FileExistsE`, which will return the underlying error if it is anything other than an error representing "file does not exist" (e.g., a permissions error).
- Added functions for working with `DaemonSets` to the `k8s` module: `ListDaemonSets`, `ListDaemonSetsE`, `GetDaemonSet`, and `GetDaemonSetE`.
- The `NewKubectlOptions` constructor now requires a third argument, the namespace name. The constructor will also initialize the `Env` map to an empty map so you can start appending env vars on the returned struct.
- New `test_structure.CleanupTestDataFolder` function to delete the `.test-data` folder used by `test_structure` for storing temporary test data.
- Switched from `dep` to Go modules.
- The `HTTPDo` functions in `http-helper` now require passing in a `*tls.Config`. This allows you to configure the TLS connection for testing access to the server.

What to do about it: Follow the migration guide in the release notes to upgrade Terratest and migrate from `dep` to Go modules!
Motivation: With the syntax changes in Terraform 0.12, our Terraform Crash Course became out of date. The code samples stopped working, and some command behaviors changed, leading to discrepancies in the conceptual material. While the high-level core concepts around using Terraform were still relevant, the low-level details were incorrect.
Solution: We’ve refreshed the training course, updating the lectures that had fallen out of date with Terraform 0.12 to use the correct syntax and behaviors. We’ve also added more commentary where relevant to expand on the new features introduced in Terraform 0.12. Since these are new lectures on the platform, you’ll be able to see which ones have been updated when you log back in!
What to do about it: Log in to Teachable, take a look through the new lectures, and let us know what you think!
Several Grunts are attending AWS re:Invent this year! We’ll be on the ground in Las Vegas during the week of Dec 2nd–6th. We’d love to meet with you over a meal, with a beverage, or preferably both. Book a time to meet with us.
Here are other updates from across the Gruntwork Infrastructure as Code Library and our open source tools this month:

- Fixed a bug where `sideCarContainers` did not render correctly in the deployment. Also fixed a bug where `additionalPaths` and `additionalPathsHigherPriority` required a `serviceName` when used with `hosts`.
- Added a `serviceAccount` input value. You can now also disable port exposure of the containers by disabling `containerPorts`; previously the `disabled` flag was ignored.
- Terragrunt now honors `skip_bucket_versioning` when it is set to `true`.
- New `--terragrunt-include-external-dependencies` flag to tell Terragrunt to automatically include all external dependencies without any prompt.
- A dependency upgrade to `1.25.4` so that `terragrunt` can be used in an EKS container that is assuming an IAM role.
- New `skip_outputs` attribute that returns `mock_outputs` when it is set to `true`.
- New built-in functions `get_aws_caller_identity_arn` and `get_aws_caller_identity_user_id`.
- Updates to the `aws_get_*` methods.
- `include` blocks are now parsed before `locals`. As a consequence, you will no longer be able to use `locals` to configure `include` blocks. However, with this change, you will now be able to use all the functions that depend on `include` blocks in your `locals`, such as `get_parent_terragrunt_dir` (see the example below).
- Terragrunt now supports the `TF_DATA_DIR` environment variable.
- New `--terragrunt-ignore-dependency-order` flag to tell the `xxx-all` commands to ignore dependency order and apply all modules with as much concurrency as possible. This is mainly useful for the `plan-all` command, where instead of processing dependencies in order, all the plans can be generated in parallel.
- Updates to `remote_state` when using GCS.
- Updates to the handling of `terragrunt.hcl` files.
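As a quick illustration of the `include`/`locals` parsing change, here’s a minimal `terragrunt.hcl` sketch; the directory layout and the input name are made up for illustration:

```hcl
# terragrunt.hcl (illustrative example)
include {
  # Pull in the parent terragrunt.hcl, e.g. for shared remote_state settings
  path = find_in_parent_folders()
}

locals {
  # Because include blocks are now parsed before locals, functions that depend
  # on include -- such as get_parent_terragrunt_dir() -- can now be used here.
  parent_dir = get_parent_terragrunt_dir()
}

inputs = {
  # Example use of the local: pass a path relative to the parent folder
  config_path = "${local.parent_dir}/config"
}
```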
- The `tls gen` command now supports setting the DNS names in the Subject Alternative Name (SAN) of the certificate. You can configure this using the new `--tls-dns-name` arg for the command.
- The `helm grant` command will now additionally grant permissions to get the `Deployment` resource that corresponds to the Tiller deployment. This is necessary to use the Terraform helm provider. The `--helm-home` option of the `helm configure` command can now be set using the environment variable `HELM_HOME`.
- New `cloudwatch-logs-metric-filters` module. The module accepts a map of filter objects and creates a metric filter with an associated metric alarm. Use this module to monitor a CloudWatch Logs group for a particular pattern and be notified via SNS when the pattern is matched (see the sketch below).
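As a rough sketch of what “a map of filter objects” might look like in practice: the module path, `ref`, and all input names below are hypothetical assumptions for illustration only, so consult the module’s README for its actual variables.

```hcl
# Hypothetical usage sketch; the module path and every input name here are
# assumptions for illustration only -- see the module's README for the real
# interface.
module "log_filters" {
  source = "git::git@github.com:gruntwork-io/module-aws-monitoring.git//modules/cloudwatch-logs-metric-filters?ref=v0.x.0"

  cloudwatch_logs_group_name = "my-app-logs"
  sns_topic_arn              = "arn:aws:sns:us-east-1:111122223333:alerts"

  # A map of filter objects: each entry becomes a metric filter plus an alarm
  metric_filters = {
    error-count = {
      pattern   = "ERROR"
      threshold = 1
    }
  }
}
```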
- The `rds` module now allows you to export various logs to CloudWatch depending on the database engine.
- Fixed a bug where `apply_immediately` was ignored for cluster instances in the `aurora` module.
- Cluster instances now use `gp2` as the root volume type. If you would like the old behavior (e.g., to avoid a redeploy), you can set `cluster_instance_root_volume_type` to `standard`.
- `ecs-service-with-alb` now supports ALB slow start. You can set a delay in seconds (using the input variable `alb_slow_start`) that controls how long the load balancer should wait before starting to send requests to the targets.
- Added an `iam-admin` role to the `saml-iam-roles` and `cross-account-iam-roles` modules. This role allows administration of IAM permissions via IAM roles. Previously, we had `iam-admin` for IAM groups only.
- The `custom-iam-group` module has been renamed to `custom-iam-entity`. The updated module supports both IAM groups and roles. Additionally, the `cross-account-iam-roles` and `saml-iam-roles` modules now support tags.
- The `vpc-peering` module now exposes an `auto_accept` variable that allows you to specify whether it auto-accepts peering connections.
- The network ACLs created by `vpc-mgmt-network-acls` for the mgmt VPC will now allow outbound UDP port 53 from the private subnets.
- Updated `eks-cluster-workers` so that you can manage one ASG per AZ. This is necessary for the cluster-autoscaler to work. This is a backwards incompatible release; please refer to the migration guide in the release notes for full details.
- New `custom_tags_eks_cluster` input variable on `eks-cluster-control-plane`. Note that you will need to be using AWS provider version `>=2.31.0`.
- Fixed a bug in `eks-cloudwatch-container-logs` where `fluentd` was redeployed on every `apply`.
- Updates to the `eks-cluster-workers` module.
- Fixed a bug in the `eks-alb-ingress-controller` module where you could end up with a perpetual diff in the plan. Also fixed a regression bug with `eks-cluster-control-plane` where it returned the information on the EKS cluster before the API came up (as checked by `null_resource.wait_for_api`). This could lead to issues in your Terraform code if you were chaining an API request immediately following the creation of the EKS cluster.
- Updates to the `external-dns` app deployed with the `eks-k8s-external-dns` module.
- The `attach-eni` script now supports Amazon Linux 2. This release also fixes a bug that prevented the script from working with CentOS 7.

What happened: ECS now supports Automated Spot Instance Draining, a new capability that reduces service interruptions due to Spot instance termination.
Why it matters: In the past, if you were using Spot instances in your ECS cluster, when an instance was terminated, all the Docker containers running on it would be terminated too. Now, ECS will automatically place Spot instances in the `DRAINING` state upon receipt of the two-minute interruption notice. ECS tasks running on Spot instances will automatically be triggered for shutdown before the instance terminates, and replacement tasks will be scheduled elsewhere on the cluster.
What to do about it: You can enable this feature by setting `ECS_ENABLE_SPOT_INSTANCE_DRAINING=true` in `/etc/ecs/ecs.config` in your User Data, while your ECS instances are booting. To use Spot instances with an ECS cluster backed by `module-ecs`, you can set the `cluster_instance_spot_price` parameter.
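Here’s a rough sketch of what that could look like with the `ecs-cluster` module. The `cluster_instance_spot_price` parameter and the `ECS_ENABLE_SPOT_INSTANCE_DRAINING` setting come from the announcement above; the other variable names, the module path, and the `ref` are assumptions for illustration, so check the module docs for the actual interface:

```hcl
# Illustrative sketch: aside from cluster_instance_spot_price and the
# ECS_ENABLE_SPOT_INSTANCE_DRAINING setting, names here are assumptions.
module "ecs_cluster" {
  source = "git::git@github.com:gruntwork-io/module-ecs.git//modules/ecs-cluster?ref=v0.x.0"

  cluster_name = "example-ecs-cluster"

  # Bid for Spot instances instead of On-Demand
  cluster_instance_spot_price = "0.03"

  # User Data that enables automated Spot instance draining while the
  # ECS instance is booting
  cluster_instance_user_data = <<-EOF
    #!/bin/bash
    echo "ECS_ENABLE_SPOT_INSTANCE_DRAINING=true" >> /etc/ecs/ecs.config
  EOF
}
```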
What happened: AWS has announced that the Application Load Balancer (ALB) and Network Load Balancer (NLB) now support three new security policies for forward secrecy: `ELBSecurityPolicy-FS-1-2-2019-08`, `ELBSecurityPolicy-FS-1-1-2019-08`, and `ELBSecurityPolicy-FS-1-2-Res-2019-08`.
Why it matters:

- `ELBSecurityPolicy-FS-1-2-2019-08` gives you the option of using only the TLS 1.2 protocol, with the same set of ciphers as available in `ELBSecurityPolicy-FS-2018-06`. The ciphers in this policy ensure Forward Secrecy, preventing out-of-band decryption if someone records the traffic and later compromises the server’s private key.
- `ELBSecurityPolicy-FS-1-1-2019-08` is available if you want a more permissive Forward Secrecy policy that supports both TLS 1.1 and TLS 1.2 clients.
- `ELBSecurityPolicy-FS-1-2-Res-2019-08` is the most restrictive policy: it supports TLS 1.2 only and includes only ECDHE (PFS) and SHA256 or stronger (SHA384) ciphers.

What to do about it: You can now use these policies on your ALBs and NLBs to improve your security posture. If you’re using `module-load-balancer`, you can configure which security policy to use via the `ssl_policy` parameter.
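For example, if you’re using the `alb` module from `module-load-balancer`, switching policies might look roughly like this; the `ssl_policy` parameter is from the announcement above, while the module path, `ref`, and other inputs are illustrative assumptions:

```hcl
# Illustrative sketch: ssl_policy is the relevant setting; other inputs and
# the module path/ref are assumptions, so check the module docs.
module "alb" {
  source = "git::git@github.com:gruntwork-io/module-load-balancer.git//modules/alb?ref=v0.x.0"

  alb_name   = "example-alb"
  vpc_id     = "vpc-abcd1234"
  subnet_ids = ["subnet-11111111", "subnet-22222222"]

  # Restrict HTTPS listeners to TLS 1.2 with Forward Secrecy ciphers only
  ssl_policy = "ELBSecurityPolicy-FS-1-2-Res-2019-08"
}
```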
Below is a list of critical security updates that may impact your services. We notify Gruntwork customers of these vulnerabilities as soon as we know of them via the Gruntwork Security Alerts mailing list. It is up to you to scan this list and decide which of these apply and what to do about them, but most of these are severe vulnerabilities, and we recommend patching them ASAP.
A vulnerability in the iTerm2 terminal emulator can be exploited when the terminal displays attacker-controlled output, for example if you run `tail` or `cat` on a log file provided by an attacker. While most users are diligent about not executing arbitrary shell scripts and containers, they may not be so diligent when investigating log output for a debugging use case. Given that, it is strongly recommended that you update iTerm2 to the latest version.