Once a month, we send out a newsletter to all Gruntwork customers that describes all the updates we’ve made in the last month, news in the DevOps industry, and important security updates. Note that many of the links below go to private repos in the Gruntwork Infrastructure as Code Library and Reference Architecture that are only accessible to customers.
Hello Grunts,
We’ve got three major new releases to share with you in this newsletter! First, Gruntwork Pipelines, which you can use to create a secure, automated CI/CD pipeline for Terraform/Terragrunt code, with approval workflows and Slack notifications, using your CI server of choice. Second, Gruntwork AWS Landing Zone, which you can use to set up new AWS accounts (via AWS Organizations) and configure them with users, permissions, and guard rails (GuardDuty, CloudTrail, AWS Config, etc.) in minutes. Third, the Gruntwork Store, where you can buy all sorts of Gruntwork and DevOps swag, such as t-shirts, hoodies, and coffee mugs. We’ve also made major updates to Terragrunt, lots of progress on supporting Helm 3, and a ton of other bug fixes and improvements.
On a more personal note, we are all doing our best to cope as COVID-19 (coronavirus) sweeps the world. Fortunately, Gruntwork has been a 100% distributed company since day 1, so we were already set up for working from home, and while we’re not exactly at 100% (who could be in times like these?), we are committed to continuing to work on our mission of making it 10x easier to understand, build, and deploy software. In fact, with everyone stuck at home and online all day, software is becoming more important than ever in keeping us all connected, informed, productive, and entertained. We will continue chugging along as always, and we sincerely hope you’re all able to stay safe and get through this with us.
As always, if you have any questions or need help, email us at support@gruntwork.io!
Motivation: Many customers have been asking us what sort of workflow they should use with Terraform and Terragrunt. They wanted to know how to work together as a team, when to use `terraform plan` or Terratest, how to review Terraform code, and how to do Continuous Integration and Continuous Delivery (CI/CD) with infrastructure code. There are many solutions in this space, but most of them left a lot to be desired in terms of security, the ability to set up custom workflows, and support for tooling.
Solution: We’ve created a new solution called Gruntwork Pipelines! It allows you to set up a secure, automated CI/CD pipeline for Terraform and Terragrunt that works with any CI server (e.g., Jenkins, GitLab, CircleCI) and supports approval workflows. Here’s a brief preview of the pipeline in action:
Check out the How to configure a production-grade CI/CD workflow for infrastructure code deployment guide for a longer video with sound, instructions on how to set up Gruntwork Pipelines, and detailed discussions of why you should use CI/CD, a typical CI/CD workflow, how to structure your infrastructure code, threat models around infrastructure CI/CD, what platforms to use to mitigate those threats, and more.
Under the hood, Gruntwork Pipelines consists of a set of modules and tools that help with implementing a secure, production-grade CI/CD pipeline for infrastructure code, based on the design covered in the deployment guide. All the modules are available in the `module-ci` repository and include the following:
What to do about it: Check out our Production Deployment Guide, take it for a spin on your infrastructure code, and let us know what you think!
Motivation: Setting up AWS accounts for production is hard. You need to create multiple accounts and configure each one with a variety of authentication, access controls, and security features by using AWS Organizations, IAM Roles, IAM Users, IAM Groups, IAM Password Policies, Amazon GuardDuty, AWS CloudTrail, AWS Config, and a whole lot more. There are a number of existing solutions on the market, but all have a number of limitations, and we’ve gotten lots of customer requests to offer something better.
Solution: We’re happy to announce Gruntwork’s AWS Landing Zone solution, which allows you to set up production-grade AWS accounts using AWS Organizations, and configure those accounts with a security baseline that includes IAM roles, IAM users, IAM groups, GuardDuty, CloudTrail, AWS Config, and more—all in a matter of minutes. Moreover, the entire solution is defined as code, so you can fully customize it to your needs.
The new code lives in the `module-security` repo of the Infrastructure as Code Library and includes:

- `account-baseline-root`: A security baseline for configuring the root account (also known as the master account) of an AWS Organization, including setting up child accounts.
- `account-baseline-security`: A security baseline for configuring the security account, where all of your IAM users and IAM groups are defined.
- `account-baseline-app`: A security baseline for configuring accounts designed to run your apps.

What to do about it: Check out our Gruntwork AWS Landing Zone announcement blog post for a quick walkthrough of how to use the account-baseline modules to set up your entire AWS account structure in minutes, and our updated Production Deployment Guide for the full details.
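As a loose sketch of what “defined as code” means here, a baseline is consumed like any other module from the Infrastructure as Code Library. The `ref` version and input names below are illustrative assumptions, not the module’s actual API:

```hcl
# Hypothetical sketch only: check the account-baseline-app docs for the
# real source ref and input variables.
module "app_account_baseline" {
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/account-baseline-app?ref=v0.25.0"

  # Example guard rails you might toggle per account:
  enable_cloudtrail = true # illustrative input name
  enable_guardduty  = true # illustrative input name
}
```

Because the baseline is an ordinary Terraform module, customizing an account means changing inputs and re-running `apply`, rather than clicking through the console.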
Motivation: We want to sport the finest apparel while looking incredible, and why would we keep that to ourselves?
Solution: We created a new Gruntwork Store with many new designs to choose from: t-shirts, hoodies, coffee mugs, stickers, and more!
What to do about it: Check out the new Gruntwork Store to find your newest addition (to your closet!)
Motivation: Many users of Terragrunt wanted to be able to use off-the-shelf modules, either from the Gruntwork Infrastructure as Code Library or other repositories, without having to “wrap” those modules with their own code to add boilerplate, such as `provider` or `backend` configurations. Users also wanted to know how to make their Terragrunt code more DRY by reusing parts of existing configurations, such as common variables.
Solution: This month we introduced two new features to Terragrunt that directly address the pain points of third-party modules and config reusability:

- **`generate` blocks**: `generate` blocks allow you to generate arbitrary files in the Terragrunt working directory (where `terragrunt` calls `terraform`). This allows you to dynamically generate a `.tf` file that includes the necessary `provider` and `backend` code to use off-the-shelf modules. See the updated documentation for more details.
- **`read_terragrunt_config` helper function**: The `read_terragrunt_config` helper function allows you to parse and read in another Terragrunt configuration file to reuse pieces of that config. For example, you can use `read_terragrunt_config` to load a common variable and use it to name your resources: `name = read_terragrunt_config("prod.hcl").locals.foo`. This allows for better code reuse across your Terragrunt project. Check out the function documentation for more details.

These two features in combination can lead to more DRY Terragrunt projects. To highlight this, we have updated our example code to take advantage of them.
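To make this concrete, here is a minimal sketch of a `terragrunt.hcl` that combines both features (the region, file names, and the contents of `prod.hcl` are illustrative):

```hcl
# Sketch only: assumes a prod.hcl in a parent folder defining locals { foo = "..." }.

# Generate a provider.tf in the working directory so an off-the-shelf
# module needs no provider boilerplate of its own.
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region = "us-east-1"
}
EOF
}

# Read a common config file and reuse its values.
locals {
  common = read_terragrunt_config(find_in_parent_folders("prod.hcl"))
}

inputs = {
  name = local.common.locals.foo
}
```

The `generate` block is written out every time Terragrunt runs, so the off-the-shelf module itself stays untouched; only the thin Terragrunt layer knows about providers and backends.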
What to do about it: Upgrade to the latest terragrunt version, check out our example code, try out the new features, and let us know what you think!
Motivation: Helm 3.0.0 was released and became generally available in November of last year. This was a big release, addressing one of the biggest pain points of Helm by removing Tiller, the server-side component. Since then, many tools have upgraded and adapted to the changes introduced, including the Terraform Helm provider, which was updated last month. Now that all the tools have caught up, we are ready to start updating our library for compatibility.
Solution: We have begun to update many of our components to be compatible with Helm v3! While this is still a work in progress, many components have been adapted in the last month. Here is a list of tools and components in our library that are now compatible with Helm v3:

- `helm-kubernetes-services` is now tested with Helm v3 and known to work without any changes.
- The EKS core services modules (`eks-k8s-external-dns`, `eks-k8s-cluster-autoscaler`, `eks-cloudwatch-container-logs`, and `eks-alb-ingress-controller`) have been updated to follow best practices with Helm v3. We have also updated our supporting services example to use the latest helm provider with Helm v3 as a reference on how to update your code.

Note that although many of our components now support Helm v3, we recommend holding off on updating Reference Architecture deployments until the Reference Architecture has been officially updated with the new modules.
What to do about it: Try out the new modules to get a feel for the differences with Helm v3 and let us know what you think!
- New hook trigger: `terragrunt-read-config`, which is triggered immediately after Terragrunt loads the config (`terragrunt.hcl`). You can use this to hook important processes that should be run as the first thing in your Terragrunt pipeline.
- New `generate` block and `generate` attribute on the `remote_state` block. These features can be used to generate code/files in the Terragrunt working directory (where Terragrunt calls out to Terraform) prior to calling `terraform`. See the updated docs for more info. Also, see here for documentation on `generate` blocks and here for documentation on the `generate` attribute of the `remote_state` block.
- New `read_terragrunt_config` function, which can be used to load and reference another Terragrunt config. See the function documentation for more details. This release also fixes a bug with the `generate` block where `comment_prefix` was ignored.
- Fixed a bug with `--terragrunt-include-dir` where there was an inconsistency in when dependencies that were not directly in the included dirs were included. Now all dependencies of included dirs will consistently be included, as stated in the docs. This release also introduced a new flag, `--terragrunt-strict-include`, which will ignore dependencies when processing included directories.
- You can now set the `external_id` and `session_name` parameters in your S3 backend config.
- The `find_in_parent_folders` function will now return an absolute path rather than a relative path. This should make the function less error-prone to use, especially in situations where the working directory may change (e.g., due to `.terragrunt-cache`).
- Terragrunt now supports both `terragrunt.hcl` and `terragrunt.hcl.json` config files.
- Improved support for `minikube`: `k8s.WaitUntilServiceAvailableE` and `k8s.GetServiceEndpointE` now properly handle `LoadBalancer` service types on `minikube`.
- `terratest` has switched to using the Terraform 0.12 series. As a result, we have dropped support for the Terraform 0.11 series. If you are using Terraform 0.11, please use an older `terratest` version.
- Added `aws.InvokeFunctionE` and `aws.InvokeFunction`.
- The functions in the `helm` module have been updated for Helm v3 compatibility. As a part of this, support for Helm v2 has been dropped. To upgrade to this release, you must update your CI pipelines to use Helm v3 instead of Helm v2 with Tiller.
- New `WorkingDir` and `OutputMaxLineSize` parameters in `packer.Options`.
- Terratest functions now take a `t testing.TestingT` parameter instead of the Go-native `t *testing.T`. `testing.TestingT` is an interface that is identical to `*testing.T`, but with only the methods used by Terratest. That means you can continue passing in the native `*testing.T`, but now you can also use Terratest in a wider variety of contexts (e.g., with GinkgoT).
- Packer has replaced the `clean_ami_name` function with `clean_resource_name`.
- Our Packer templates now use the `clean_resource_name` function instead of the deprecated `clean_ami_name` function.
- New `--enable-dynamo-backend`, `--dynamo-region`, and `--dynamo-table` parameters in `run-vault`, and new `enable_dynamo_backend`, `dynamo_table_name`, and `dynamo_table_region` variables in `vault-cluster`.
- cloud-nuke now supports `--resource-type rds` to nuke RDS databases. They’ll also be nuked if you run it without any filter.
- The `nomad-security-group-rules` module now correctly handles the case where `allowed_inbound_cidr_blocks` is an empty list.
- Fixed a bug in the `eks deploy` command where it did not handle `LoadBalancer` Services that are internal.
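For example, the new `terragrunt-read-config` hook trigger slots into the existing hook syntax; a sketch (the command being executed is illustrative):

```hcl
# terragrunt.hcl: run a command as soon as Terragrunt finishes parsing
# this config, before any terraform command runs.
terraform {
  after_hook "announce_config_load" {
    commands = ["terragrunt-read-config"]
    execute  = ["echo", "terragrunt.hcl has been loaded"]
  }
}
```

Despite being declared as an `after_hook`, it fires on config load rather than after a particular `terraform` command, making it a natural place for setup steps that everything else depends on.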
- `iam-policies` now allows you to set the `create_resources` parameter to `false` to have the module not create any resources. This is a workaround for Terraform not supporting the `count` parameter on `module { ... }` blocks.
- Improvements to the `jenkins-server` module: exposed a new `user_data_base64` input variable that allows you to pass in Base64-encoded User Data (e.g., a gzipped cloud-init script); fixed deprecation warnings with the ALB listener rules; updated the version of the `alb` module used under the hood, which no longer sets the `Environment` tag on the load balancer.
- The deployment health check parameters have been renamed to `deployment_health_check_max_retries` and `deployment_health_check_retry_interval_in_seconds`, respectively. Changed the default settings to be ten minutes’ worth of retries instead of one hour.
- Improvements to the `git-updated-folders` script.
- Fixed a bug in the `infrastructure-deployer` CLI where it did not handle task start failures correctly.
- Updated the `terraform-update-variables` script to run Terraform in the same folder as the updated vars file when formatting the code.
- `eks-cluster-control-plane` now supports specifying a CIDR block to restrict access to the public Kubernetes API endpoint. Note that this is only used for the public endpoint: you cannot restrict access by CIDR for the private endpoint yet.
- The IRSA input variables for the EKS core services modules (`eks-k8s-external-dns`, `eks-k8s-cluster-autoscaler`, `eks-cloudwatch-container-logs`, and `eks-alb-ingress-controller`) are now required. Previously, we defaulted `use_iam_role_for_service_accounts` to `true`, but this meant that you needed to provide two required variables, `eks_openid_connect_provider_arn` and `eks_openid_connect_provider_url`. However, these had defaults of empty string and did not cause an error in the Terraform config, which means that you could have a successful deployment even if they weren’t set. Starting with this release, the IRSA input variables have been consolidated into a single required variable, `iam_role_for_service_accounts_config`.
- The `clean_up_cluster_resources` script now cleans up residual security groups from the ALB ingress controller.
- The `eks-cloudwatch-container-logs` module now deploys a newer version of the fluentd container that supports IRSA.
- `eks-cluster-workers` now supports attaching secondary security groups in addition to the one created internally. This is useful to break cyclic dependencies between modules when setting up ELBs.
- You can now use `cloud-init` for boot scripts for self-managed workers by providing it as `user_data_base64`.
- New releases of the EKS core services modules: `eks-k8s-external-dns`, `eks-k8s-cluster-autoscaler`, `eks-cloudwatch-container-logs`, and `eks-alb-ingress-controller`. The major difference between this release and previous releases is that we no longer create the `ServiceAccounts` in Terraform and instead rely on the Helm charts to create them. Refer to the Migration Guide in the release notes for information on how to migrate to this version.
- Fixed an issue where the `stable` helm repository does not refresh correctly in certain circumstances.
- Adds support for Kubernetes `1.15` and drops support for `1.12`.
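The `create_resources` workaround mentioned above follows a simple pattern: every resource in a module gets a `count` derived from a boolean flag, emulating `count` on `module` blocks (which Terraform 0.12 does not support). A sketch with an illustrative resource:

```hcl
variable "create_resources" {
  description = "Set to false to have this module create nothing"
  type        = bool
  default     = true
}

# Every resource in the module is gated the same way: count is 1 when
# the module is enabled and 0 when it is disabled.
resource "aws_sns_topic" "alarms" {
  count = var.create_resources ? 1 : 0
  name  = "example-alarms" # illustrative name
}
```

Callers can then instantiate the module unconditionally and flip a single input variable, instead of commenting out whole `module` blocks.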
- You can now specify which availability zones the `rds` module uses for replicas via the `allowed_replica_zones` parameter.
- New `backtrack_window` variable, which lets you enable Aurora backtracking.
- You can now configure the CA certificate used by the `aurora` module via the `ca_cert_identifier` input variable. Updated the `ca_cert_identifier` input variable in the `rds` module to set the default to `null` instead of hard-coding it to `rds-ca-2019`.
- You can now enable `deletion_protection` in the `rds` module.
- Updates to the `rds` module. Also added `copy_tags_to_snapshot` support to the `rds` and `aurora` modules.
- Made `var.allow_connections_from_cidr_blocks` optional.
- `lambda-create-snapshot` and `lambda-cleanup-snapshots` now support namespacing snapshots so that you can differentiate between snapshots created with different schedules. Take a look at the lambda-rds-snapshot-multiple-schedules example for how to use this feature to manage daily and weekly snapshots.
- Updated `lambda-create-snapshot` to show which CloudWatch metric was updated.
- The modules now expose `create_resources` to allow conditionally turning them off.
- The `rds` module now allows you to enable IAM authentication for your database.
- The `ecs-service`
module now supports capacity provider strategies. This allows you to provide a strategy for how to run the ECS tasks of the service, such as distributing the load between Fargate and Fargate Spot.
- You can now tag ECS services via the new `service_tags` variable. Refer to the release notes for more details.
- The `ecs-service` module now exposes `task_role_permissions_boundary_arn` and `task_execution_role_permissions_boundary_arn` input parameters that can be used to set permission boundaries on the IAM roles created by this module.
- Added `logs:CreateLogGroup` to the IAM permissions for the ECS task execution role. This is necessary for ECS to create a new log group if the configured log group does not already exist.
- New input variables `allow_ssh_from_cidr_blocks` and `allow_ssh_from_security_group_ids`. Use these lists to configure more flexible SSH access.
- New module `aws-config-multi-region`, which can be used to configure AWS Config in multiple regions of an account. There are also major changes to the GuardDuty modules. Refer to the release notes for more details.
- New `iam_user_access_to_billing` input variable.
- The `kms-master-key` module now exposes a `customer_master_key_spec` variable that allows you to specify whether the key contains a symmetric key or an asymmetric key pair, and the encryption algorithms or signing algorithms that the key supports. The module now also grants `kms:GetPublicKey` permissions, which is why this release was marked as “backward incompatible.”
- Fixed a bug in the `fail2ban` module that prevented it from starting up on Amazon Linux 2.
- The `iam_groups` module no longer accepts the `aws_account_id` and `aws_region` input variables.
- Updates to the `kms-master-key` module.
- Fixed a bug where we were not setting `DEBIAN_FRONTEND=noninteractive` on the `apt-get update` calls. As a result, certain updates (such as `tzdata`) would occasionally try to request an interactive prompt, which would freeze or break Packer or Docker builds.
- `run-cloudwatch-logs-agent.sh`
no longer takes in a `--vpc-name` parameter, which was only used to set a log group name if `--log-group-name` was not passed in. The `--log-group-name` parameter is now required, which is simpler and makes the intent clearer.
- The alarm modules now expose a `create_resources` parameter that you can set to `false` to disable the module so it creates no resources. This is a workaround for Terraform not supporting `count` or `for_each` on `module`.
- The `cloudwatch-memory-disk-metrics` module now creates and sets up a new OS user, `cwmonitoring`, to run the monitoring scripts as. Previously it used the user who was calling `gruntwork-install`, which is typically the default user for the cloud (e.g., `ubuntu` for Ubuntu and `ec2-user` for Amazon Linux). You can control which user to use by setting the module parameter `cron-user`.
- Updated the `cloudwatch-log-aggregation-scripts` module to correctly indicate that `--log-group-name` is required.
- Fixed a bug in `run-cloudwatch-logs-agent.sh` where the first argument passed to `--extra-log-files` was being skipped.
- Added a `create_resources` input variable to `cloudwatch-custom-metrics-iam-policy` so you can turn the module on and off (this is a workaround for Terraform not supporting `count` in `module`).
- The modules no longer accept the `aws_account_id` and `aws_region` input variables.
- `aws-config` is now a general subscription module and has been migrated to `module-security` under the name `aws-config-multi-region` starting with version v0.23.0.
- Fixed a bug where the `allow-ops-admin-access-from-external-accounts` IAM role was not deployed to the child AWS accounts. This meant that you could not run any of the Terragrunt and Terraform modules in those accounts, since there was no IAM role with enough permissions to run a deployment. The change set in this release can be applied to your version of the Reference Architecture to create the necessary cross-account IAM permissions to allow access to the ops-admin roles.
- Use `aws-config-multi-region` in `module-security` instead of `aws-config` in `cis-compliance-aws`.
- You can now configure the `--link-mtu` parameter.
- New `allow_vpn_from_cidr_list` input variable.
- The `elasticsearch-cluster-restore` and `elasticsearch-cluster-backup` modules have been updated to the `nodejs10.x` runtime.
- Changes to the `var.allow_connections_from_cidr_blocks` input variable.
- The `alb` module no longer exposes an `environment_name` input variable. This variable was solely used to set an `Environment` tag on the load balancer.
- The `alb` module no longer accepts the `aws_account_id` and `aws_region` input variables.
- Changes to the `destroy` provisioner.
- The `server-group` module now exposes a new `user_data_base64` parameter that you can use to pass in Base64-encoded data (e.g., a gzipped cloud-init script).
- You can now enable Auto Scaling Group metrics collection on the `server-group` module via the new `enabled_metrics` input variable.
- Fixed a bug where changing `num_availability_zones` was producing `Error updating VPC Endpoint: InvalidRouteTableId.NotFound`. Added a `create_resources` parameter for VPC Flow Logs to allow skipping them on the Reference Architecture.
- Added `icmp_type` and `icmp_code` variables to the network ACL modules, allowing you to specify ICMP rules.
- The `single-server` module now allows you to add custom security group IDs using the `additional_security_group_ids` input variable. The parameters that control SSH access in the `single-server` module have been refactored. The `source_ami_filter` we were using to find the latest CentOS AMI in Packer templates started to pick up the wrong AMI, probably due to some change in the AWS Marketplace. We’ve updated our filter to fix this as described below.
- The `lambda` and `scheduled-lambda-job` modules now support conditionally turning off resources in the module using the `create_resources` input parameter.
- The `s3-cloudfront` module can now take in a dynamic list of error responses using the new `error_responses` input parameter, which allows you to specify custom error responses for any 4xx and 5xx error.
- Fixed a bug in the `s3-static-website` module with versions of Terraform >0.12.11, where the output calculation fails with an error.

What happened: AWS has reduced the price of EKS by 50%.
Why it matters: AWS used to charge $0.20 per hour for running a managed Kubernetes control plane. This cost is now $0.10 per hour, which makes it more affordable for a wide variety of use cases.
What to do about it: This change is live, so enjoy the lower AWS bill in coming months.
What happened: AWS CLI version 2 (v2) is now GA (“generally available”).
Why it matters: The new CLI offers far better integration with AWS SSO, as well as many UI/UX improvements, such as wizards, auto complete, and even server-side auto complete (i.e., fetching live data via API calls for auto complete).
What to do about it: Check out the install instructions and migration guide, and give it a shot!
Below is a list of critical security updates that may impact your services. We notify Gruntwork customers of these vulnerabilities as soon as we know of them via the Gruntwork Security Alerts mailing list. It is up to you to scan this list and decide which of these apply and what to do about them, but most of these are severe vulnerabilities, and we recommend patching them ASAP.
Sudo could allow unintended access to the administrator account. Affected versions include Ubuntu 19.10, Ubuntu 18.04 LTS, Ubuntu 16.04 LTS. We recommend updating your sudo
and sudo-ldap
packages to the latest versions. More information: https://usn.ubuntu.com/4263-1.
It was discovered that OpenSMTPD mishandled certain input. A remote, unauthenticated attacker could use this vulnerability to execute arbitrary shell commands as any non-root user. (CVE-2020–8794) More information: https://usn.ubuntu.com/4294-1.