How Event Tracking Drives Progress

Most people look at their analytics every day. But very few people use event tracking to answer simple questions like these:

  • "Did that new heading on my homepage really improve signup conversion?"
  • "Did our redesign actually get more users to activate?"

When your business depends on it, do you measure these changes or just move on to the next "improvement"?

Event tracking is critical to measuring key metrics like Signup Rate and Churn. If you do measure conversion when you release something new, there are really only two possible outcomes.

  • Success! Your key metric improved. Can you apply the same idea elsewhere for another win?

  • Failure! Your key metric didn't improve, or even got worse! Good thing you measured, because now you've got a second chance. Revert the change or try a completely new approach.
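This measure-every-release loop is simple enough to sketch in a few lines. This is only an illustration: the event names, counts, and in-memory event lists below are hypothetical stand-ins for whatever your tracking tool actually records.

```python
# A minimal sketch of the "measure every release" loop.
# Event names and counts are made up for illustration.
from collections import Counter

def conversion_rate(events, start, finish):
    """Percent of `start` events that led to a `finish` event."""
    counts = Counter(events)
    return 100.0 * counts[finish] / counts[start] if counts[start] else 0.0

# Hypothetical event logs from before and after shipping a change:
before = ["Signed Up"] * 200 + ["Created Project"] * 120
after  = ["Signed Up"] * 200 + ["Created Project"] * 150

old_rate = conversion_rate(before, "Signed Up", "Created Project")  # 60.0
new_rate = conversion_rate(after, "Signed Up", "Created Project")   # 75.0

# The two possible outcomes: keep the win, or revert and try again.
verdict = "success: keep it" if new_rate > old_rate else "failure: revert or retry"
```

The point isn't the code itself, it's the habit: every release gets a before number and an after number.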

If you aren't measuring your key metrics, it's easy to trick yourself into thinking you're doing well. As we saw in Week 0, this happens to the best of us and it's extremely damaging to your business.

How do we know? We learned it the hard way.

We had this exact problem ourselves last year, while working on a very different analytics product.

With every new release we'd invite a bunch of people from our interested list. Conversion was terrible, but we weren't tracking it. Looking back and calculating the numbers now, less than 4% of invites actually signed up and only a single user was active.

But we tricked ourselves without real numbers... "a bunch said they were interested!" How many, exactly? How many signed up? Installed? We should've been tracking it by the numbers! If only we'd put up a monitor with 4% signup conversion, 1 active user, we would've pivoted to Segment much sooner.

But the dark days are over. We now track our key metrics religiously.

In the first week after launching Segment, 60% of the people who Signed Up successfully Created a Project. Creating a project is really easy (it's literally the first thing you do after signing up) so losing 40% of our signups right off the bat seemed silly.

To fix it, we've run a few different experiments: moving our email confirmation, improving our call to action, and even just fixing a few bugs. After the changes, 94% of signups were successfully creating projects:

[Chart: signups with projects]

By tweaking our onboarding flow, we were able to increase our user conversion rate dramatically. But often people ask us "How do I even choose my key metrics?"

How do you choose a key metric?

For starters, there are two main types of metrics.

The first are total numbers. Good examples are the total number of 30-day active users on Facebook, or the total number of Pro users on Dropbox. As long as you avoid bullshit metrics, these metrics will give you a sense of the true scale of your business. Unfortunately, they generally won't help you improve it.

The second type of metric is a conversion rate. These might be the percentage of Facebook users who upload photos, or the percentage of Dropbox users who upgrade to Pro. Metrics like these directly correspond to the quality of your product. They are excellent to track because they show conclusively whether or not you're improving your business.

We're going to focus on the second type of metric, conversion rates, because we want to help you improve your business, not just monitor it.
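The difference between the two metric types is easy to see on a toy event log. The events and counts here are made up for illustration; a real log would come from your tracking tool.

```python
# Contrasting the two metric types on the same (made-up) event log.
events = ["Signed Up"] * 1000 + ["Uploaded Photo"] * 350

# A "total number" metric: tells you scale, not quality.
total_signups = events.count("Signed Up")

# A conversion rate: tells you product quality, and whether a
# change made it better or worse.
upload_rate = 100.0 * events.count("Uploaded Photo") / total_signups
```

Both numbers come from the same data; only the conversion rate tells you whether your next release helped or hurt.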

What conversion rate is most important to your business? Signup rate? Viral invite conversion? Churn? Thinking about your business model and engine of growth should give you a clear answer.

Once you start measuring your key metric, you'll start fixating on it. You'll find new and clever ways to improve it. You make what you measure - just by measuring your progress, you'll begin to recognize the factors which are limiting it.

Your first case study this week is a great example of careful measurement paired with a clever solution.


Case Study: Grouper

Grouper sets up drinks between two groups of friends who don’t know each other: three guys and three girls.

Early on they had a huge problem: one group would often get skittish! They'd come up with a lame excuse and cancel at the last minute, leaving the other group hanging. And the worst part was that the Grouper team then had to personally call the other group to apologize.

They'd been tracking the cancellation rate carefully, and it was rising.

So they came up with a hypothesis: canceling was way too easy.

Their experimental solution was ridiculously simple. If a group called to cancel, Grouper would ask the cancelling party to personally call the other group and explain why they couldn't make it. Simple, right? Their cancellation rate dropped by 90%.


Case Study: La Tienda

La Tienda sells artisan Spanish food and wine in North America and Europe. Their key metric is sales.

But improving sales overall is a tough, generic problem. So they started by splitting their users into two geographical groups: one group of users who lived near their warehouses, and one group who lived far enough away that they had to pay higher shipping costs.

The data showed that visitors far away from their warehouses were 48% less likely to purchase. That seemed like a great opportunity to grow sales for a significant subset of their users.

As a test, they tried switching the far-away users to a flat-rate shipping model. The simpler model made users feel better about the shipping price, and La Tienda's sales jumped 70%.


Case Study: YouEye

YouEye is a tool that lets you run remote user testing with eye tracking and emotion recognition. In the early days they were developing crazy tech for video processing and gaze analytics... and everything was working beautifully!

But then they got test participants who weren't members of their own QA team :)

It turns out there were a handful of best practices their team used when taking a user test. But the strangers they recruited were doing it all wrong! Was it a problem with messaging? Or the technology?

To get a clear answer they needed to test a hypothesis: "If we add a mandatory training webinar for our testers, then more tests will have valid results." (A result has to pass a bunch of requirements to be valid... e.g. the user can't move their head too much.)

To test their hypothesis, they started tracking just two events: "Started User Test" and "Finished User Test with Valid Results".

First they measured the "baseline" success rate with no webinar. Then they held their first mandatory training webinar, and measured the new success rate: the webinar increased the odds of a valid result by 150%!
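YouEye's before/after comparison boils down to a few lines of arithmetic. The counts below are hypothetical (the case study doesn't give the raw numbers, only the 150% lift); the pattern of measuring a baseline rate and then the post-change rate is the real takeaway.

```python
# Sketch of a YouEye-style baseline vs. after-change comparison.
# All counts are hypothetical; only the 150% lift mirrors the case study.
started_before, valid_before = 100, 20   # no webinar (baseline)
started_after,  valid_after  = 100, 50   # with mandatory webinar

rate_before = valid_before / started_before   # baseline valid-result rate
rate_after  = valid_after / started_after     # post-webinar valid-result rate

# Relative lift, as a percent increase over the baseline (~150 here).
lift = 100.0 * (rate_after - rate_before) / rate_before
```

Without the baseline measurement, the "after" number on its own tells you nothing about whether the webinar was worth it.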

Not only was YouEye happy, but their test participants were happy they could be more helpful too. If YouEye hadn't collected data on the impact of a mandatory webinar, they never would have discovered that it was so worthwhile.

Notice how each of these examples follows the same flow: the business chose a single key metric to improve, came up with a clever idea to improve it, and measured the results to make sure the metric really moved.

In all three examples the event tracking was incredibly simple. YouEye measured "Valid Results" per "Test Started", La Tienda measured "Purchase Complete" per "Visitor", and Grouper measured "Cancellation" per "Grouper".

What two events can you record that are core to your business?

To help you get started with event tracking, we've prepared a shortlist of the best event tracking solutions out there.

KISSmetrics and Mixpanel are specifically designed to track and analyze custom event data. Sacha Greif has a great post outlining the differences between the two.

Optimizely is an A/B testing platform. It's excellent for testing tweaks in homepage header copy, button colors/sizes, or even layout if you're adventurous. And you can run the tests without pushing new code.

Ready to get started? Here's your homework:

  • Choose and track your key metric.
  • Think up an experiment to improve it.
  • Run your experiment and see if you succeeded.

Want feedback on your key metric or experiment? Email us at and one of our co-founders will help you out!