Back in October 2018, we reached a big milestone at Gruntwork: $1M in annual recurring revenue (ARR). Today, a little over a year later, we’ve grown to roughly $2.7M ARR. And we’ve done all of this with $0 in funding. In this blog post—the second in our year-in-review series—I’d like to highlight a few of the key lessons that helped us get here:
Throughout most of 2018, Gruntwork was growing quickly: we doubled our revenue, tripled the size of the team, and signed some major customers. However, at the beginning of 2019, things seemed to slow down. I say “seemed” because we weren’t sure: we had a couple of slower months of sales in Q1, and someone had a hunch that something was wrong (more on this hunch later). But was it just a fluke, or a sign of a real problem? To answer this question, we had to turn to the data.
The first step was to understand our sales funnel, which is the entire process someone goes through to become a Gruntwork customer. Here’s a rough picture of what our funnel looks like:
To try to get some hint as to where things were going wrong in this funnel, we used HotJar to record what was happening on the Gruntwork website, and ended up with hundreds of recordings that all exhibited a similar pattern:
Here’s what the recording is showing:
Did you catch the problem? There’s an entire step in the funnel missing! Our website was encouraging users to go straight from Interest (land on Gruntwork website) to Engagement (look at pricing and make a purchase decision), without any Evaluation (learn about and try the product) in between. In other words, our website visitors were being shown a price tag before they had any idea what product we were offering, so it’s no surprise that many of them bounced immediately.
As it turns out, the big red “Get Started” CTA was part of a redesign we had released recently, and we were learning the hard way that it was having some unintended consequences. But just how bad was it? How much of an impact was this really having?
To answer that question, we dug into our Google Analytics data to understand the website metrics, plus our Salesforce data to understand the sales metrics, and when we compared the data from Q1 2019 to the last several quarters of 2018, here’s what we found:
Visits to our content were up 70%; roughly the same percentage of those visitors ended up going to the Gruntwork website; and our sales team was converting about 10% more deals than before. But the step in between, where visitors would evaluate the product and go to the pricing page or contact sales, was down by around 40%! Worse yet, this had been a consistent trend for several months since we launched the redesign, and the only reason we hadn’t noticed was that the overall number of visitors had gone up. Yikes.
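This is exactly the kind of problem that’s easy to miss if you only watch absolute numbers: each step of the funnel can grow in raw counts while a step-to-step conversion rate quietly collapses. Here’s a minimal sketch of the idea in Python—note that the funnel counts below are invented for illustration (the real numbers aren’t in this post); only the rough shape of the rates matches the story:

```python
# Hypothetical funnel counts showing how rising traffic can mask a
# falling step conversion rate (numbers invented for illustration)
q4_2018 = {"content": 100_000, "site": 30_000, "evaluate": 6_000}
q1_2019 = {"content": 170_000, "site": 51_000, "evaluate": 6_100}

def step_rates(funnel):
    """Conversion rate of each funnel step relative to the step before it."""
    counts = list(funnel.values())
    return [after / before for before, after in zip(counts, counts[1:])]

old_rates = step_rates(q4_2018)
new_rates = step_rates(q1_2019)

# Every absolute count went up, but the evaluation step's rate fell ~40%
drop = 1 - new_rates[1] / old_rates[1]
print(f"evaluation step conversion dropped by {drop:.0%}")  # → 40%
```

Tracking the rates between steps, rather than just the totals at each step, is what would have surfaced this problem months earlier.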
Now that we knew what was going on, it was time to fix the problem. We came up with a new design which replaced the confusing “Get Started” CTAs:
With clearer and more explicit “Buy Now” and “Contact Sales” CTAs:
If we had just blindly launched this change, it would have been hard to tell if it had the impact we wanted. Even if our website conversion metrics changed after launching this new design, we wouldn’t know if the change was due to the new design, or something else (e.g., other product launches, new blog posts, a change in search ranking, etc). To be able to separate causation from correlation, we needed to do a scientific experiment—or, if you prefer the marketing terminology for it, an A/B test.
We used Optimizely to show the new design to a randomly selected 50% of our website visitors and the old design to the other 50%, and we tracked our conversion metrics across each of these groups. The results were fairly clear. For example, here is how the old design (original_nav) compares to the new design (beta_nav) in terms of getting users to contact sales:
The new design increased the conversion rate dramatically in almost every metric we measured in the A/B test, including roughly tripling the number of visitors who contacted sales (+202.61%, as shown in the image above)! It also got far more users to browse the rest of the website and actually learn about and evaluate the product before going to the pricing / checkout page. The result, as I can see now with all the metrics for the year in front of me, was that we more than doubled conversions on our website with that simple design tweak.
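Tools like Optimizely do the statistics for you, but it’s worth understanding what a result like this means. A standard way to check whether a difference in conversion rates is real or just noise is a two-proportion z-test. Here’s a hedged sketch with invented visitor counts (the actual sample sizes aren’t in this post; only the roughly 3x lift mirrors the real result):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: 40 of 5,000 visitors contacted sales with the old
# design vs. 121 of 5,000 with the new one (~3x lift, as in the real test)
p_a, p_b, z, p_value = two_proportion_z_test(40, 5_000, 121, 5_000)
print(f"old: {p_a:.2%}, new: {p_b:.2%}, z = {z:.2f}, p = {p_value:.2g}")
```

With samples of this size, the p-value is vanishingly small, so you can be confident the new design—not random variation—drove the improvement.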
While this story had a happy ending, it exposed a dark reality: we had been flying blind. Our website conversions had dropped by ~40% and we didn’t even notice! Here’s what we’ve learned as a result of this experience:
The reason we noticed that sales were down was because one member of our team had a hunch. As it turns out, she happened to be the newest and most junior member of the team, but something about the numbers looked odd, and she decided to speak up. And fortunately, the rest of us decided to listen. In fact, this entire year was a lesson in how to listen to the team.
“Listen to the team” may sound like an obvious piece of advice, but for a 100% distributed company like Gruntwork, with employees all over the US, Canada, the UK, Ireland, Germany, Finland, and Nigeria, the thousands of miles and multiple time zones separating team members make communication harder than you might expect. While there are many advantages to distributed companies, there are some things you get “for free” when everyone works in the same office—things you probably take completely for granted—that become much trickier:
To overcome these challenges, we’ve had to put in extra processes and techniques to help us hear what the team is saying, even across thousands of miles, including:
Since you can’t see or hear each other in a distributed company, you have to go out of your way to regularly let everyone know what you’re up to. We use a number of tools and techniques for this. For example, we’ve configured Basecamp check-ins to prompt each person once a week to ask (a) what you accomplished last week and (b) what you’ve got planned for this week. Your response is stored on Basecamp and shared with the entire team:
We also use Basecamp check-ins for daily status updates (“What did you do yesterday and what do you have planned for today?”) and monthly check-ins (e.g., “What interesting books did you read last month?”). Other tools we’ve found helpful for keeping the team up-to-date include GitHub pull requests (submitting small, frequent pull requests not only helps update the team as to what you’re working on, but these smaller changes are also easier to review), Jira (breaking work down into small tasks that you update regularly helps the team better understand what’s happening with a project), and Slack (it can be helpful to post occasional status updates in the Slack channels for the relevant project/team).
Each employee chats with their manager on a regular basis. Each 1:1 is about an hour long, and the focus is explicitly on the items that don’t make it into status updates. That is, instead of talking about what you did (which we should already know, as per the previous section), you talk about how you did it, what worked well, what could’ve been done better, what concerns you have, how you’re getting along with the team, what the company could do differently, what your manager could do differently, and so on.
We take notes during the 1:1 in a shared Google Doc:
In between 1:1s, both the employee and the manager can add items to discuss to this Google Doc. That way, you have an agenda ready to go for the next time you meet.
Inspired by blog posts from Stripe and GitHub, we do the following:
To improve our ability to make decisions as a distributed company, while still defaulting to written and transparent communication, we use a Slack app called Conclude that allows us to systematically raise and conclude discussions. To raise a new discussion, you run the following command:
/c basic
This pops up a form to fill out (this uses a Slack Blueprint under the hood):
Here’s the information you need to fill out in the form:
When you hit “Submit,” this creates a new Slack channel for the discussion. Once everyone has had a chance to have their say, the decision makers make a decision, then announce it to the team and capture it for the future by running the following command in that Slack channel:
/conclude We decided to ...
Roughly three times per year, we fly everyone in the company out to an interesting location for a week, and spend 3–4 days doing work and 1–2 days having fun together. The company outings for 2019 have included Arizona:
Portugal:
And New Orleans:
Each of these outings has proven to be hugely productive (and fun!). They have given us a chance to make up for some of the things you lose as a distributed company: we get to have more serendipitous discussions; we get to celebrate wins in person; we get to have everyone working in the same room, getting motivated and excited for the future; and everyone gets to ask important questions and be heard.
One of the topics that always comes up in Gruntwork company outings is, “what should we work on next?” We’re in the DevOps space, where there seems to be an endless list of possibilities. Everything feels broken, underdeveloped, poorly designed, or entirely missing. Should we work on improving our Kubernetes offering? Serverless? CI / CD? Microservices? Service mesh? Edge computing? Distributed tracing? Observability? Big data? Data lakes? Machine learning? IoT? GitOps? ChatOps? DevSecOps? NoOps?
The list is endless, and while having lots of opportunities can be a good thing, it’s also a risk, as more startups die of indigestion than starvation. One of the realizations we had from discussions at our company outings was that we were being pulled in all directions, and taking on too many projects for too few people. Our roadmap was not well defined, the individual projects were not well defined, and the team was spending lots of time on the overhead of jumping from one project to another.
We decided to solve this problem by changing our product management process, and rebuilding it around (a) listening to our customers and (b) doing fewer things better. The first step we took was to start gathering all customer feedback and requests in ProductBoard:
We also started to test out a portal where customers could submit and vote on ideas directly:
The second step was to take all of this customer data, plus our own ideas, and organize it all into a list of potential top-level product objectives: e.g., streamline our EKS experience, add support for AWS Landing Zone, make it easy to set up SSO on AWS, and so on.
Now that we had our list of objectives, step three was to prioritize them. We used an approach similar to RICE, prioritizing objectives based on their reach (i.e., how many customers would be affected), impact (how much each customer would be affected), confidence (how sure we were the product would have the reach and impact we defined), and effort (how much time it would take to accomplish the objective).
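The RICE-style scoring above boils down to a single number per objective: reach times impact times confidence, divided by effort. Here’s a minimal sketch—the objective names come from the examples above, but every score is an invented placeholder, not our actual data:

```python
# Hypothetical objectives scored RICE-style. Tuple fields:
# (reach: customers affected, impact: 0-3 scale,
#  confidence: 0-1, effort: person-months). All values invented.
objectives = {
    "Streamline EKS experience": (120, 2.0, 0.8, 6),
    "AWS Landing Zone support":  (200, 3.0, 0.7, 9),
    "SSO on AWS":                (80,  1.5, 0.9, 3),
}

def rice_score(reach, impact, confidence, effort):
    """Higher is better: big, certain wins for many customers, cheaply."""
    return reach * impact * confidence / effort

ranked = sorted(objectives.items(),
                key=lambda item: rice_score(*item[1]),
                reverse=True)
for name, params in ranked:
    print(f"{name}: {rice_score(*params):.1f}")
```

The value of reducing everything to one score isn’t precision—the inputs are rough estimates—but that it forces an explicit, comparable trade-off discussion instead of prioritizing by whoever argues loudest.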
Step four was to go into ProductBoard and enter all the possible features we could work on for any given objective (again, based on customer feedback, plus our own ideas), and to score each feature against that objective:
For example, in the image above, you can see on the right side that one of the objectives we were considering was to add first-class support for AWS Landing Zone. On the left side, you can see the list of all possible features we could build, and in particular, how we are scoring the features under “AWS Account Baseline” against the “landing zone” objective. For each feature, we specify its value, whether it’s a nice-to-have or must-have, and an estimate of how much that feature costs: e.g., Amazon GuardDuty is a high-value, low-cost must-have feature for landing zone, whereas “streamline creation of new AWS accounts” is a lower-value, high-cost, nice-to-have feature.
With all objectives prioritized, and all features scored against objectives, we can finally go to step five, which is to produce a roadmap. Our basic strategy is to swarm on the highest priority objectives, optimizing for (a) shipping as quickly as possible and (b) team efficiency. For example, let’s say some objective foo would take the following amount of time to complete:
It looks like the work in foo is parallelizable up to about 3 people, after which you get diminishing returns. We’d most likely assign 3 people to this objective. On the other hand, consider objective bar:
For project bar, it would make sense to assign as many people as we had available to it, as that would mean we could ship it in a matter of days, rather than months, and start gathering all the value from it right away, while we moved on to other work. Of course, in the real world, few projects are that parallelizable, and you also typically need to take into account other overheads, such as the time it would take for someone to ramp up on a totally new project.
Putting this all together, to create our roadmap, we start with the current quarter, add the highest priority objective, assign as many people as is reasonable to ship that objective quickly (swarm), then add the next highest priority objective, have people swarm on that, and repeat the process until there are no people left to assign. We’d then move to the next quarter and repeat the process.
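The swarming procedure above is essentially a greedy allocation: walk the objectives in priority order, give each one as many people as it can usefully absorb, and stop when the team runs out. A minimal sketch, using the foo and bar objectives from the earlier example (the parallelization caps and headcount are invented for illustration):

```python
# Hypothetical greedy roadmap builder: assign people to objectives in
# priority order, capping each team at the objective's useful team size.
def build_roadmap(objectives, headcount):
    """objectives: list of (name, max_useful_team_size), highest priority
    first. Returns a list of (name, assigned_team_size) for the quarter."""
    roadmap, remaining = [], headcount
    for name, max_team in objectives:
        if remaining == 0:
            break
        team = min(max_team, remaining)  # swarm, up to diminishing returns
        roadmap.append((name, team))
        remaining -= team
    return roadmap

quarter = build_roadmap(
    [("foo", 3), ("bar", 10), ("baz", 2)],  # priority order, caps invented
    headcount=8,
)
print(quarter)  # → [('foo', 3), ('bar', 5)]
```

Anything left unassigned (baz, here) rolls over to the next quarter, where the same process repeats.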
For each item that made it into our roadmap, step six is to put together a Product Requirements Document (PRD) that briefly outlines what we want to do: the problem, why it’s important, who it affects, our appetite for solving it, what metrics we think it’ll move, and the requirements for the solution. Each PRD is written in Markdown, submitted as a pull request in GitHub, and reviewed by the team. Here’s the template we use for it:
The members of the relevant team can then submit a Request for Comments (RFC) that outlines ideas for how to meet the requirements in the PRD. This may include mock-ups and wire frames, technical designs, a description of the user workflow, and so on. Each RFC is written in Markdown, submitted as a pull request, and reviewed by the team. Once merged, the engineering work can begin!
This new process has significantly improved how we communicate product decisions and designs within the company, and, we believe that we’re now doing a much better job of focusing on the most critical features and products for customers. Here’s just a glimpse at some of the items we’ve delivered in 2019:
We’ve made major improvements to the Infrastructure as Code (IaC) Library, including major updates to our open source tools (dependency blocks, support for all Terraform built-in functions, a move to HCL2, and support for encryption and access logging) and to Terratest (a switch to Go Modules, support for Helm Chart testing, and support for testing Kubernetes).
In partnership with Google, we added first-class support for Google Cloud Platform (GCP), including:
In partnership with the Center for Internet Security, we’ve released:
We released a series of Production Deployment Guides, which contain step-by-step instructions for how to go to production on top of AWS and GCP, including:
We published blog posts on:
We updated the DevOps Training Library:
We also:
We learned a lot in 2019, especially about listening to the data, our team, and our customers. We hope you’ve found some of these lessons valuable too. Happy holidays and see you in 2020!