Andy Jiang on August 27th 2015
Here at Segment, we love frameworks and data. Frameworks help us organize and understand the world, and data helps us stay focused and monitor progress. So, it’s no surprise we use them both to help us project future growth and figure out how to hit our lofty goals.
This is our framework for understanding what drives growth at Segment HQ.
In this post, we’ll share our approach to understanding our growth dynamics, which involves building a mathematical model for marketplace dynamics and aligning internal resources to focus on growth goals. You can’t really get more data-frameworky than this!
To start from the beginning, we’re a two-sided marketplace with customers on one side and partners on the other. This business model strongly affects how we grow.
For marketplaces like us, growth is driven by “network effects” where the value of the marketplace increases as usage increases. If you look around Silicon Valley for “unicorns,” or some of the fastest growing companies, you’ll no doubt find a few marketplaces. Uber and Airbnb have each grown to over $20B valuations in less than 7 years.
Marketplaces are awesome because, without them, buyers and sellers face complex, risky, and time-consuming transactions. For example, without Lyft and Uber, you hope a random taxi drives by at just the right time. Meanwhile, the cabbies roam around, hoping to spot someone frantically waving their arms in need of a ride. This is a crazy process for connecting riders and drivers, and it’s clearly an inefficient marketplace, especially in cities like SF where the volume of taxi drivers does not meet the demand from riders:
Uber and Lyft significantly reduce that complexity. Riders press a button to hail a driver to their GPS location, and the pair rate each other after each ride to improve trust and safety. Drivers don’t look for people waving their arms, they just wait for their phone to tell them exactly where their next ride begins. In our case, we bring together businesses that care about data and SaaS products that run on data with one API to use them all.
So, thanks to the marketplace, transacting is dramatically more efficient:
The value of the marketplace comes from the ease of transaction, marked by the availability of buyers and sellers on either side of the exchange. In other words: the value increases with the number of participants, attracting even more buyers and sellers.
Knowing we’re a marketplace, we borrowed heavily from this HBR article that outlines a rigorous way to model marketplace growth with six separate dynamics:
Buyer-to-Seller Cross-side — prospective buyers tell prospective sellers that they prefer to do business on the platform. “It was hard to find your place. How come you don’t list on Airbnb?”
Seller-to-Buyer Cross-side — prospective sellers expose prospective buyers to the platform. “Buy my adorable elephant mittens on Etsy.”
Buyer Same-side — buyers love the new transaction experience, and tell other prospective buyers to use the platform. “Why would you use a taxi? You should check out Lyft.”
Seller Same-side — sellers love the new transaction experience, and tell other prospective sellers to use the platform. “I made a good chunk of change while I was on vacation, renting my place on Airbnb. You should try it out!”
Direct to Buyers — the marketplace tells buyers about itself directly. “Wow, Uber has a lot of billboards here.”
Direct to Sellers — the marketplace tells sellers about itself directly. “I searched for jobs in Cincinnati and found Lyft.”
These internal growth dynamics let us put together a simple numerical model. The model uses the current size and growth rate of customers and partners as inputs, and then applies each of the six dynamics to project future performance.
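To make the mechanics concrete, here's a minimal sketch of that model in Python. Everything here is hypothetical: the rate names, starting sizes, and values are stand-ins for illustration, not Segment's actual numbers.

```python
# A minimal sketch of the six-dynamic marketplace projection.
# All rates and starting sizes are hypothetical.
def project(customers, partners, months, rates):
    """Apply the six growth dynamics each month; return the trajectory."""
    history = [(customers, partners)]
    for _ in range(months):
        new_customers = (partners * rates["customers_per_partner"]   # seller-to-buyer cross-side
                         + customers * rates["customer_same_side"]   # buyer same-side referrals
                         + customers * rates["customer_direct"])     # direct to buyers
        new_partners = (customers * rates["partners_per_customer"]   # buyer-to-seller cross-side
                        + partners * rates["partner_same_side"]      # seller same-side referrals
                        + partners * rates["partner_direct"])        # direct to sellers
        customers += new_customers
        partners += new_partners
        history.append((customers, partners))
    return history

rates = {"customers_per_partner": 0.4, "customer_same_side": 0.01,
         "customer_direct": 0.03, "partners_per_customer": 0.00001,
         "partner_same_side": 0.01, "partner_direct": 0.0}
trajectory = project(customers=800_000, partners=200, months=12, rates=rates)
final_customers, final_partners = trajectory[-1]
print(f"{final_customers:,.0f} customers, {final_partners:,.0f} partners")
```

Swapping any single rate and rerunning is exactly what the scenario analysis below does in the spreadsheet.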
Using the model we’re able to see a range of forward projections based on conservative and aggressive goals. The chart below details two sample scenarios, the Base Case and Growth Case (again, totally made up here for the sake of example!).
The Base case is a conservative, keep-everything-status-quo projection. Based on historic figures, there are already some cross-marketplace network effects at work, which is great. As a result, this case predicts ~1.26 million customers by the end of the period, with steady ~4% m/m growth and 58% total growth.
The Growth case, alternatively, assumes that we will figure out ways to get more customer signups from our partners (0.4 gradually growing to 10 new customers per partner) which represents the Seller-to-Buyer cross side effect.
This case predicts ~1.44 million customers by the end, with a total of 80% growth. Better! And this is only adjusting one assumption: the new customers per partner cross-side dynamic.
Let’s walk through how this model was put together with our fake data, so you can do the same if you have a platform business.
To start playing around with the model, you can copy this handy sheet we built for this example. 😃
The model contains three tabs that are split up into these groups:
Aggregate top-level numbers (total customers and partners) for the entire platform
Growth dynamics for customers
Growth dynamics for partners
First we defined the participants of each marketplace, as this helps us measure their sizes to put into the model.
For Segment, the definitions are:
Customer: somebody who registered an account with us
Partner: an integration on our platform
We added these historic numbers to sheets 2 and 3 at the top section (reminder: fake data!):
Then, we looked at historical attribution numbers in order to determine reasonable rates for each growth dynamic.
We’ve been fortunate enough to have been attributing our sources for customers and partners (e.g. recording where they came from) for a meaningful time period. This data is necessary to backfill historic cross-side and same-side growth dynamics.
new customers per partner: total partner referral signups
new customers per existing customer: signups that mentioned “friends” as how they heard about us
customers direct growth: signups attributed to content, paid marketing
We filled them here in the green highlighted cells (note that the blue font indicates the cell is calculated and the black font is hard coded):
Once we had historical data, we filled in the future assumptions.
We determined the assumptions by looking at the historic range for that metric, thinking about the factors in our business or situation that impacted that metric, and estimating what that metric will be in the next few months.
Then we continuously adjusted them so the overall top-level growth numbers (aka sheet 1) look reasonably achievable for us. Sanity-checking the assumptions is critical, so take time to revisit them and make sure they make sense.
Let's use the Partner Direct growth rate as an example. Historically in this example, it has been 0%. Let's assume that in these past few months, we did nothing to attract new partners to the platform. However, we plan to put way more focus on this channel: developer evangelism, outbound business development, etc. From that information, a growth-case assumption for this rate would be 30% or more.
If there is no historical data to help set an assumption, then ballpark guesses are acceptable, so long as you continue to tweak them so that the overall outputs of the model pass a sanity check later on. Ballpark guesses based on anecdotal stories can be helpful: "the majority of buyers who I talk to say they heard about us from their friends" would make the case that same-side buyers are a strong driving force.
Once we settled on reasonable assumptions, we projected future growth by applying the dynamic rates and used these to set goals. The dynamic nature of the model helps us to easily visualize how growth is impacted by different scenarios. We chose our goals after seeing where our efforts would yield the biggest change; for example, if spending more time on direct customer marketing or figuring out how to drive signups from our partners would have a greater effect.
To identify these key opportunities (the dynamics that would make the biggest impact with the smallest improvements), we tested different projections in the model.
Approaching this systematically, we came up with six different scenarios. Each scenario uses conservative estimates for all six assumptions except for one, which is aggressive. Going through this process, we could see which dynamic drove the biggest change in the number of customers.
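That one-aggressive-assumption-at-a-time process can be sketched in a few lines. All rates below are made up for illustration; only the procedure mirrors the post.

```python
# Sketch of the scenario test: rerun the projection six times, each time
# making exactly one assumption aggressive. All rates are hypothetical.
def final_customers(rates, customers=800_000.0, partners=200.0, months=12):
    for _ in range(months):
        dc = (partners * rates["cust_per_partner"]
              + customers * (rates["cust_same_side"] + rates["cust_direct"]))
        dp = (customers * rates["part_per_cust"]
              + partners * (rates["part_same_side"] + rates["part_direct"]))
        customers, partners = customers + dc, partners + dp
    return customers

conservative = {"cust_per_partner": 0.4, "cust_same_side": 0.01, "cust_direct": 0.03,
                "part_per_cust": 0.00001, "part_same_side": 0.01, "part_direct": 0.0}
aggressive = {"cust_per_partner": 10.0, "cust_same_side": 0.03, "cust_direct": 0.06,
              "part_per_cust": 0.00005, "part_same_side": 0.03, "part_direct": 0.3}

baseline = final_customers(conservative)
impact = {}
for dynamic in conservative:
    scenario = dict(conservative, **{dynamic: aggressive[dynamic]})
    impact[dynamic] = final_customers(scenario) - baseline

# Rank the dynamics by how many extra customers each aggressive assumption adds.
for dynamic, extra in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{dynamic}: +{extra:,.0f} customers")
```

The top of that ranking is the dynamic worth pointing a team at.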
As a thought exercise, here are some intuitive examples of growth opportunities based on your company’s stage.
For earlier companies, it is likely that the cross-side value hasn't reached a critical tipping point yet; i.e., the size of one marketplace is too small for it to be valuable to the other side. You likely have a classic chicken-or-egg problem. In these situations, it is wise to devote resources to propping up or subsidizing one side of the marketplace via its direct channel, over which the company has complete control ("more marketing spend will yield more users").
In Segment’s early days, the team subsidized the partners side by building and maintaining the first 100+ integrations. The selection of integrations made Segment appealing to customers. However, now we’re fielding inbound requests from newer partners to be added to our platform—these companies see Segment’s value as a distribution mechanism to acquire users cheaply.
For marketplaces that are already experiencing positive cross-side effects, strengthening these effects can often lead to much higher growth (vs. focusing on a direct channel), due to their compounding impact.
Ultimately, once we identified which channels needed the most love, we could set more granular growth goals, such as “get X customer sign ups from partners”. With clear, quantifiable objectives that we can measure over time, it’s easier for us to stay on track!
As a result, we evaluate every effort in our Product, Marketing, and Partners teams within the platform model. We use the model as a map, helping sequence and compare alternative tactics.
Because our customers are our source of revenue, we're focused on the "customer direct", "customer same-side", and "partner-to-customer cross-side" channels. At our current stage, partner-side growth dynamics aren't as important until we have a fully functioning "customer-partner cross-side" effect.
We’ve used these growth channels to organize our resources: our marketing team owns the “customer direct” channel, product team owns “customer same-side”, and our partners team owns the “partner-to-customer cross-side” channel:
Goal: enthuse our customer base to refer more
Metric: # Monthly signups attributed to “friends” or “colleagues”
Goal: find and convert more fresh customers
Metric: # Monthly signups attributed to our content, paid acquisition
Goal: experiment to figure out how to make this dynamic work
Metric: # Monthly signups attributed to our partners
We’re able to attribute the signups by asking users where they heard about Segment when they’re signing up.
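As an illustration of how those survey answers could map to the channels each team owns, here is a hypothetical sketch; the keywords, channel names, and matching logic are all assumptions, not Segment's actual attribution code.

```python
# Hypothetical: bucket free-text "Where did you hear about us?" answers
# into the growth channels each team owns.
CHANNEL_KEYWORDS = {
    "customer same-side": ("friend", "colleague", "coworker"),
    "customer direct": ("blog", "content", "ad", "search"),
    "partner cross-side": ("integration", "partner"),
}

def attribute_signup(answer):
    # First channel whose keyword appears in the (lowercased) answer wins.
    answer = answer.lower()
    for channel, keywords in CHANNEL_KEYWORDS.items():
        if any(keyword in answer for keyword in keywords):
            return channel
    return "unknown"

print(attribute_signup("A colleague recommended it"))  # customer same-side
print(attribute_signup("Found you through an integration page"))  # partner cross-side
```

Counting signups per bucket each month then yields exactly the three metrics listed above.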
Now, evaluating or comparing new projects is much clearer when your team’s goal is tied to one metric. We can just ask ourselves if the project in question will positively impact the metric, by how much, by how soon, etc.
Moreover, assigning ownership of metrics to different teams is an excellent way to modularize their efforts and keep them concentrated on a specific target.
Though this was a simplified walk through of how we currently think about our business and its growth challenges, we believe this step-by-step approach helps separate signal from noise. Instead of being overwhelmed by tracking and tackling everything under the sun, we can focus on the growth metrics and initiatives that matter.
But it’s worth noting that metrics and data aren’t everything. We can manically chase after our targets, but we risk doing so at the expense of overall quality. If we’re only concerned with page views, then our content may erode to listicles of corgi gifs. Similarly, if we focus only on sign ups, those leads may be of lower quality and never end up activating.
As such, there are mechanisms outside of this growth model that help inform our decision making. We consider quantifiable parameters, such as activation and churn rates (deliberately omitted in this example and blog post for simplicity). We also set qualitative goals, such as “Build a brand people love” (a top-level marketing team objective to complement its other goal to get more signups) and “Karma” (one of Segment’s values, against which all decisions are made, even on an individual level). These safeguards help us make sure that we don’t just amp up our numbers, but that we build a sustainable, fundamentally useful business.
David Shackelford on July 22nd 2015
At PagerDuty, reliability is our bread and butter, and that starts with the first time users interact with our platform: our trial onboarding.
(For a little background on us, PagerDuty is an operations performance platform that helps companies get visibility, reliable alerts, and faster incident resolution times.)
We’d heard from customers that our service was simple to set up and get started, but we also knew there was room for improvement. Users would sometimes finish their trials without seeing all of our features or wouldn’t finish setting up their accounts, which caused them to miss a few alerts. We wanted to help all of our customers reach success, and we knew if we did this well, we’d also boost our trial-to-paid conversion rate.
We talked to our customers, support team, and UX team before diving into the onboarding redesign, but we also wanted to complement their qualitative feedback with quantitative data. We wanted to use telemetry (automatic measurement and transmission of data) to understand exactly what users were doing in our product, and where they were getting blocked, so we could deliver an even better first experience.
Before we started making changes, we needed to establish a baseline: how were users moving through the app? Did their paths match our expectations? Did the paths match the onboarding flow we were trying to push them through?
After researching the user analytics space, we found Mixpanel and Kissmetrics had the best on-paper fit for answering these types of questions. However, the investment (in both money and implementation time) to adopt these kinds of tools was significant, so much so that we wanted to test both to make sure we picked the right tool. But the only way to comprehensively test tools like this is to run them against live data.
That’s where Segment came in. We were excited to find a tool that let us make a single tracking call and view the data, simultaneously, in multiple downstream tools. Segment made it easy to test both platforms at the same time, and with more departments looking at additional analytics tools, the prospect of fast integrations in the future also excited us.
We used our questions about user flow and conversion funnels to test if Kissmetrics and Mixpanel could help us understand the current state of affairs. Using Segment, we tracked every page in our web app and as many events as we could think of (excluding a few super-high-volume events such as incident triggering). Then our UX, Product, and Product Marketing teams dove into the tools to evaluate how well they could answer our usage questions.
After spending a few weeks with both tools, we went with Kissmetrics. To be honest, they’re both great, but we liked the funnel calculations in Kissmetrics a bit more. They also offered multiple methods for side-loading and backloading out-of-band data like sales touches, so Kiss was the winner.
Throughout this process, we also learned that we should have tracked far fewer events. If we were to do it again, we’d only collect events that signaled a very important user behavior and that contributed to our metrics for this project. It gets noisy when you track too many events.
After combing through the data, our user experience team had a lot of ideas to develop a new onboarding flow. We mocked up several approaches and vetted them against both customers and people inside the company. Then, we tried out the design we thought would best communicate our value proposition and help customers get set up quickly. Our approach included both a full-page takeover right after signup, as well as a “nurture bar” afterwards that showed customers the next steps to complete their setup.
After implementation, we tracked customer fall-off at each stage of the takeover wizard to see how we were doing. We also measured “Trial Engagement,” an internal calculation for when an account is likely to convert from their trial (what others call “the aha moment”). Using Kissmetrics, it was very easy to measure how the new design was working against this metric.
After shipping the new experience, we saw a 25% boost in the “engagement” metric mentioned earlier, measured using Kissmetrics’ weekly cohort reports. Kissmetrics showed us that with the new wizard, new users were actively using the product earlier in their trial and using more of the features in the product. In addition, far fewer new users were ending up in “fail states,” such as being on-call without notification rules set up.
Since then we've run various experiments on the trial funnel, and the weekly cohort report has been really helpful for looking at the effects of those experiments and determining whether our changes are actually helping users.
Qualitative feedback has also been important for getting a full picture of how the changes to the product affect the user experience. We're pretty low-tech in this regard: I dumped a list of users pulled from Kissmetrics into a CSV, then sent them a quick micro-survey to see if anything was unclear with the new onboarding. We also gathered internal feedback from our sales and support teams to confirm that customers were finding our new experience easy to use and understand.
To get more context on how people are using PagerDuty, we also route Segment data to Google Analytics, which is a great way to look at meta behavior trends. Kiss can tell you who is doing something, but Google Analytics is a little better for asking questions like “Which features get used the most?” or “What Android versions are most of our users on?” Google also manages top-of-funnel analytics (the marketing website) a little better, while Kissmetrics is more powerful once the user is actually in trial.
Since our initial work on the wizard, we’ve expanded our use of analytics to look at behavior of both trial accounts and active customers. Whenever I’m about to start work in a particular area of the product, I’ll use Kissmetrics to pull a list of highly active users in that area, and then reach out to them to understand how they’re using our features and what their pain points and goals are. We also implemented mobile tracking with Segment because some of our customers mainly use our service through our mobile app, and installed the ruby gem for code-level tracking.
There are plenty of improvements we’d like to make to our onboarding, but since it’s doing pretty well right now, our next project is going to be investigating simultaneous A/B testing. We move fast, and if we’re running a sales or marketing initiative alongside product changes, sometimes it’s tough to sort out what impacted what. Split-testing trial experiences should let us get cleaner data about how our onboarding redesign is improving our trial users’ experience, and ultimately help us make better decisions about the ideal experience.
Like any new initiative, we learned a lot when implementing analytics for the first time. Here are some of our takeaways — hopefully they’re helpful to you as well.
Choosing what to track is an art: too few events and you may miss a key action; too many and it gets really noisy. Segment hadn't yet shipped their "Tracking Plan" feature, so we had to manage our list of tracked events in Google Docs. It wasn't pretty; in fact, if we did it again, I would start a fresh account after finishing our evaluation and track far fewer events.
Using separate production and development environments is absolutely key, in both Segment and the downstream analytics tools. We have it set up so events on all our local, staging, and load test environments go to “pagerduty-dev”, and only events in the production web stack go to the main account. In addition, we add filters to make sure that we’re not rolling activity on internal employees’ accounts into our metrics.
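A sketch of what that separation might look like in application code; the environment variable, write keys, and domain filter here are hypothetical stand-ins, not PagerDuty's actual setup.

```python
import os

# Hypothetical write keys; the real ones live in your Segment workspace.
WRITE_KEYS = {"production": "PROD_KEY", "development": "DEV_KEY"}

def select_write_key(env):
    # Only the production web stack reports to the main account; local,
    # staging, and load-test environments all roll up to the dev source.
    return WRITE_KEYS["production" if env == "production" else "development"]

INTERNAL_DOMAINS = ("@pagerduty.com",)

def should_track(user_email):
    # Filter out internal employees so their activity doesn't skew metrics.
    return not user_email.endswith(INTERNAL_DOMAINS)

print(select_write_key(os.environ.get("APP_ENV", "development")))
```

The key point is that the routing decision is made once, at configuration time, so no individual tracking call can accidentally pollute production data.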
Nobody’s solved the problem of simultaneously looking at user-level and account-level funnel data. We’re currently looking at a Kissmetrics workaround using two different Kiss instances, but it was surprising to us that nobody natively handles pivoting on both levels.
I hope you found our story helpful! If you have any questions or feedback, hit me up @dshack on Twitter.
A big thanks goes out to Dave from PagerDuty for this piece! If you’re interested in sharing your Segment story, we’d love to have you. Email email@example.com to get started!
Luke Gotszling on July 17th 2015
Like many startups in the early stages, at Peruse.io we’ve asked ourselves the question – how do you find product market fit? In this post, we’ll share our process for finding it and the tools we use for user testing, customer development, and data analysis.
For a little background on us, Peruse.io lets you easily search through information in your documents on Box and Dropbox. You can ask questions like, “Where is that sales projection spreadsheet I edited in March?” and we’ll show you.
Our inspiration for starting this company was that finding information in our files is still essentially the same tedious process that existed twenty years ago. We launched at 2015’s Techcrunch Disrupt in New York and now have a few hundred users in our public beta.
There are a lot of different definitions for product market fit. These two resonate with us:
“Product/market fit means being in a good market with a product that can satisfy that market.” - Marc Andreessen
“When, in a survey, at least 40% of users say they would be “very disappointed” without your product or service.” - Sean Ellis
More concisely, to us it means 1) our product works as intended, and 2) our users love our product.
We started with enabling users to search for information inside of spreadsheets, which the beta group likes. However, the story doesn’t end here as we’ve received plenty of feature and integration requests. We’re currently on a mission to see which features will bring us closer to product market fit and drive adoption.
To see how people are using the product and what’s bringing them back, we track the following events in Segment. We decided to use Segment to reduce development overhead and bloat in our site code, as well as ensure consistent customer data across all of the end tools that we use. For example, a spike in event A in Google Analytics can be easily investigated by searching for the users by event A in our communication tool, Intercom.
We want to track a few things about the new search feature. First is the accuracy of the results, to see if the product works as intended.
To measure accuracy, we’re tracking the following events with Segment:
The Document Searched event is fired when a user submits a new query. When the query is submitted, results appear on the interface. The user, assessing the quality of these results, can then rate them by clicking a button:

By including a button for people to rate the accuracy of the search and adding meaningful event tracking, we can look at the ratio of inaccurate Search Rated events to either accurate or all Document Searched events to determine if we've hit our first goal: to make the product work as expected. In case people only rate bad answers, we can also evaluate the Document Searched property result_clicked, since people will likely only click through answers that look good.
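To make those ratios concrete, here's a hypothetical sketch; the event log, property names, and counts are made up, shaped only loosely like the events described above.

```python
# A hypothetical event log, shaped like the tracking calls described above.
events = [
    {"event": "Document Searched", "properties": {"result_clicked": True}},
    {"event": "Document Searched", "properties": {"result_clicked": False}},
    {"event": "Search Rated", "properties": {"accurate": False}},
    {"event": "Document Searched", "properties": {"result_clicked": True}},
]

searches = sum(e["event"] == "Document Searched" for e in events)
inaccurate = sum(e["event"] == "Search Rated" and not e["properties"]["accurate"]
                 for e in events)
clicks = sum(e["event"] == "Document Searched" and e["properties"]["result_clicked"]
             for e in events)

print(f"inaccurate ratings per search: {inaccurate / searches:.2f}")
print(f"click-through rate:           {clicks / searches:.2f}")
```

A falling inaccurate-rating ratio and a rising click-through rate would both signal the product working as intended.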
To measure if our users love us, a more qualitative indicator, we segment our customers to find power users and potential churners, and reach out to them for direct product feedback.
The events necessary to create segments of the power and churn-risk users are Signed Up and Account Authenticated. In addition, we can look at whether they have searched any documents using the events we're already tracking above.
We measure users at risk of churn as people who show few signs of engagement. For Peruse, we believe that there are two types of churn risk users, short-term and long-term. We define these groups respectively as people who haven’t authenticated a Box or Dropbox account within 24 hours and people who haven’t used the service (searched a document) in 14 days. Note that the time intervals were not scientifically determined, as we plan to adjust these in the future, but represent a subset of users large enough to yield an adequate number of meaningful conversations.
Conversely, we identify power users by looking at strong engagement. Based on our existing user base from two months of being in public beta, we’re defining a power user as someone who has authenticated their Dropbox or Box account and searches documents at least once every few days. Our definition of a power user is partially a convenience measure for us: we adjust the “frequency” parameter until the power user sample is large enough to generate sufficient meaningful conversations.
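These rules are easy to express in code. Here's a sketch with hypothetical field names and a fixed "now" for reproducibility; the 24-hour, 14-day, and "every few days" thresholds are the ones from the definitions above.

```python
# Sketch of the churn-risk and power-user rules (field names are hypothetical).
from datetime import datetime, timedelta

NOW = datetime(2015, 7, 17)

def short_term_churn_risk(user):
    # Signed up, but no Box/Dropbox account authenticated within 24 hours.
    return (user["authenticated_at"] is None
            and NOW - user["signed_up_at"] > timedelta(hours=24))

def long_term_churn_risk(user):
    # No document searched in the last 14 days.
    return (user["last_search_at"] is None
            or NOW - user["last_search_at"] > timedelta(days=14))

def power_user(user, frequency_days=3):
    # Authenticated and searching at least once every few days; the frequency
    # knob gets tuned until the segment yields enough useful conversations.
    return (user["authenticated_at"] is not None
            and user["last_search_at"] is not None
            and NOW - user["last_search_at"] <= timedelta(days=frequency_days))

alice = {"signed_up_at": NOW - timedelta(days=2), "authenticated_at": None,
         "last_search_at": None}
print(short_term_churn_risk(alice))  # True
```

Adjusting `frequency_days` is the "convenience measure" mentioned above: widen it until the sample is big enough.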
Further segmentation is not being done at this stage of the product (other than ad-hoc segments like users who request a specific feature). However, this is an avenue we may explore in the future to tweak engagement or kick off nurture campaigns.
We’re using the following tools to help get to product market fit:
Google Analytics: usage and analytics
Intercom: customer communication and feedback
FullStory: session recording
Inspectlet: user behavior visualization
Segment: customer data hub
In order to assess product accuracy and repeat usage (a sign that users love the product), we decided to keep an eye on event data in Google Analytics. For product feedback, we use Intercom; for broad site usage trends, FullStory and Inspectlet, alongside communicating with users and event tracking. We don't need to use each service's event API, since we can just call analytics.track once. The Segment tracking plan page can be used to select which services receive events.
We're using Google Analytics (free!) to track aggregate usage data, visitor origins, trends, and search result feedback counts.

Specifically, in our search for product market fit, we are tracking aggregate trends of upvotes compared to downvotes in search result feedback counts. The benchmark for query accuracy is (number of upvotes - number of downvotes) / number of searches performed.

For the next few weeks, we'll be keeping a close eye on this report, which measures the Document Searched event, a feature introduced on June 23rd.
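As a tiny illustration of the benchmark with made-up counts:

```python
# The query-accuracy benchmark above, with hypothetical counts.
def accuracy_benchmark(upvotes, downvotes, searches):
    # Net positive ratings per search performed.
    return (upvotes - downvotes) / searches

print(accuracy_benchmark(upvotes=40, downvotes=10, searches=100))  # 0.3
```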
Intercom is a customer communication tool: you can segment your users into smaller groups (based on their actions, location, and traits) and message them via email, in-app notifications, and live chat. It's affordable at $50/month (the plan we're paying for).
As stated, our definition of a “power user” is someone who has authenticated with Box or Dropbox and actively uses the service. Since “Account Authenticated” is an event that is sent when Box or Dropbox is added, it is available as a filter in Intercom. Additionally, “Active” is a user who is “last seen” within a week. With both filters, we create the segment in Intercom:
All names and sensitive information are removed.
Then we use email and in-app messaging for reaching out, with the objective to understand the user’s use case and get feature feedback, to guide future development.
For users with churn risk, we look at the opposite triggers: did not authenticate, plus low usage. We set up a similar filter in Intercom. Then we email this segment of users, asking them what types of searches they would like to perform or whether there are other features that are missing. This helps us prioritize product development.
Another benefit of using Intercom is seeing threaded conversations along with user events on a per-user basis. This is very valuable for seeing a user's level of usage at a glance (and their experience in terms of answer quality, by looking at how often they upvote and downvote results). Here's how it looks for two unique users:
FullStory is a session recording tool that allows you to easily record, replay, search, and analyze each user’s actual experience with your website. These sessions give us a good sense of how users navigate our site, especially on mobile devices:
Inspectlet, a similar service, also provides "eye tracking heatmaps": a map of where your visitors are looking and which parts of the site they're reading, approximated from their mouse movements (here is a study they reference that examines this correlation).
Eye tracking with Inspectlet shows that people tend to fixate more on the right side than the left. We’ll investigate whether this is something that needs to be addressed or a symptom of reading left-to-right.
Scroll tracking shows that most of the relevant information is above the fold (this page becomes longer when search results are displayed).
From the click tracking heatmap we can see that the results we show at the top tend to be more relevant for our users (for user privacy the search results are not shown on the heatmap).
Both of these tools are great for learning how users interact with our site, if our navigation is useful, and if we’re surfacing helpful results.
Finding product market fit is certainly not easy. However, given the proliferation of tools out there to assist in every part of the product lifecycle (13 Marketing Automation Tools, $9 Marketing Stack, and How to Launch a Startup with $99), there are now many ways to become smarter about building features and minimizing the iteration time.
Segment has helped us experiment with new tools without incurring development overhead, and we’ve been very happy with Google Analytics, Intercom, FullStory, and Inspectlet so far. We know we’re not there yet, but we look forward to continuing to monitor our metrics and talk to customers as we make our way toward product market fit.
Morgan Brown on July 7th 2015
As growth marketers building a community for other growth marketers, data is a key component to our success. We’re meticulous about defining our metrics and tracking the necessary events to measure improvement.
But because the product we offer — a community of posts and comments — differs from a traditional SaaS or ecommerce business, the metrics we care about also differ. In this post we’ll explain what these metrics are, the dynamics specific to communities that affect our growth, and how we use this data to improve the user experience.
Communities like GrowthHackers, Hacker News, and Product Hunt have specific dynamics that attract, engage, and repel members. Before building a growth model, tracking data, or analyzing our progress, it’s important for us to understand these community dynamics.
The large majority of visitors get value from community sites without ever logging in, creating an account, or even participating. They read and watch but never vote, comment, or share.
In 2006, Jakob Nielsen outlined this effect as the 90-9-1 rule for communities. He explained that on average 1 percent of community members actively contribute, 9 percent contribute a little, and 90 percent “lurk”. Each segment gets value from the product differently and has different behaviors and needs.
We’ve found this dynamic to be relatively true at GrowthHackers.com. Other communities have confirmed it as well — 80-90 percent of Reddit visitors never create an account, and 500 million people, 76 percent of visitors, read Twitter without ever logging in or tweeting. It’s likely 90 percent of the people reading this right now are lurking. (Hi!)
In addition to the “lurker” dynamic, communities like GrowthHackers are also affected by marketplace dynamics, where supply and demand are freely exchanged and strongly influence each other.
For background, our goal at GrowthHackers is to help people get better at growth. This means our supply is the content that educates and inspires our users, who comprise the demand-side of the market. The content comes from self-directed contributors, commenters, and curators who vote on submitted articles.
On the demand side, we know there’s a big audience of marketers, product managers, startup founders, and engineers interested in growth. Our goal is to attract and retain more of them, which means that traditional user acquisition and activation funnels are important to us. How do we turn a “lurker” into a participating member? We spend a lot of time on this.
While our user base is pretty easy to measure quantitatively, it’s much more important to measure the quality of the supply-side content. Low quality supply doesn’t help our community members be successful, which in turn lowers demand and hurts the marketplace. We don’t want listicles that lead to face palms here.
Now that we understand how both sides of the market work, we can easily measure the underlying dynamics. This helps us break down our growth challenge into achievable, improvable metrics.
When we look at how we’re helping people achieve their growth goals, we care about a few key data points. These metrics help us understand how our members go from being a first-time visitor, to a consistently engaged community member. To classify our metrics, we use Charlene Li’s Engagement Pyramid.
New visitors — When measuring traffic, we want to know how we’re growing the top of the funnel. Are new people discovering GrowthHackers (and how)? Because we’re still in the hustle stage, reaching new people who haven’t heard of us before is important.
Engaged time — While GrowthHackers acts as an entry point to resources around the web, engaged time per visitor is an important metric for us to understand whether people are getting the true value out of the site itself.
Someone who reads a few comment streams, watches a video, and reads a related post or growth study really gets a chance to discover the core product value, compared to a visitor who reads one article and never comes back. More time spent likely predicts higher engagement in the long term.*
*Caveat: Sometimes people who submit questionable content also have high engaged times. In particular, some users attempt to game and manipulate the community by sharing low quality content and participating in voting rings to get that content to rise. These users have high engaged times, but they are actually a net negative on the community. Every community deals with this, so we have to make sure we account for this behavior in our metrics.
Subscribed to newsletter or Created an account — This is a big step for us! It’s the point when a user goes from a lurker to an identified member of the community. At this point, we can tie previously anonymous user actions into a single holistic view of a user’s behavior. Creating an account also enables users to vote, submit, and participate — they are one step closer to emerging from lurker-dom and have demonstrated significant interest in the community.
Voted or Commented — The next step after creating an account is to take some action besides consuming information. On GrowthHackers.com, the key actions are to vote and comment on posts. By voting, users help improve the quality of the content on the homepage. By commenting, they help create unique content that doesn’t exist elsewhere.
Specifically, we measure the percentage of total visitors who voted or commented on at least one post over a given time frame, usually by week or month.
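As a sketch of how that participation rate could be computed from raw events (the event shapes and field names here are our own illustration, not GrowthHackers’ actual pipeline):

```javascript
// Sketch: share of visitors who voted or commented at least once in a
// time window. Event records are illustrative: { userId, event, timestamp }.
function participationRate(events, start, end) {
  const visitors = new Set();
  const participants = new Set();
  for (const e of events) {
    if (e.timestamp < start || e.timestamp >= end) continue;
    visitors.add(e.userId);
    if (e.event === 'Post Upvoted' || e.event === 'Comment Added') {
      participants.add(e.userId);
    }
  }
  return visitors.size === 0 ? 0 : participants.size / visitors.size;
}
```

Running this weekly or monthly gives the trend line; slicing the input events by cohort or traffic source gives the breakdowns.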
Retention — Ultimately, our community grows from people who decide that GrowthHackers is a home for them on the web: a go-to spot for growth content, community, and inspiration. Retention is the best proxy we have for actually helping people get better at growth, so we follow it very carefully. Plainly, if people learn things, they will come back.
We measure retention by the percentage of visitors who check out the site and then return within one week. We then slice retention by user behavior, traffic source, and more to understand what levers lead to retention, and how we can improve the product experience.
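A minimal sketch of that one-week return calculation, assuming a flat list of timestamped visits (our illustration, not GrowthHackers’ actual implementation):

```javascript
// Sketch: fraction of visitors who return within 7 days of their first
// visit. visits: [{ userId, timestamp }], timestamps in milliseconds.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function weekOneRetention(visits) {
  const firstSeen = new Map();
  const retained = new Set();
  const sorted = [...visits].sort((a, b) => a.timestamp - b.timestamp);
  for (const v of sorted) {
    if (!firstSeen.has(v.userId)) {
      firstSeen.set(v.userId, v.timestamp);
    } else if (v.timestamp - firstSeen.get(v.userId) <= WEEK_MS) {
      retained.add(v.userId);
    }
  }
  return firstSeen.size === 0 ? 0 : retained.size / firstSeen.size;
}
```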
Because “helping people get better” isn’t a numeric goal, we supplement retention metrics with qualitative surveys and Net Promoter Score with Qualaroo to better understand how much value we’re delivering to people.
You’ll note that we didn’t address submitting content or “producing” as a key metric here. To us, voting and commenting signal engagement better than submitting an article, which for many users is little more than trying to get some eyeballs on their own content.
A marketplace is only as good as its supply. If Airbnb didn’t have any nice homes in a city, or Uber was short on cars when you needed one, you’d be less likely to consider them as options in the future. Similarly, if people who visit GrowthHackers.com don’t find something of value, they’ll be less likely to return.
What’s more, if self-interested people see lame, promotional articles or negative comments, they will pile on more low quality content. Paul Graham calls this the broken window theory as applied to communities.
As a result, our supply-side metrics focus on measuring content quality. We don’t worry as much about content quantity because spam and self-promotional submissions skew those numbers, but we do worry greatly about providing signal in all the noise.
To that end we look at a variety of metrics that together give us a sense of the content quality.
Votes per post — Often, more votes signal a higher quality resource, so this is an important metric for us. However, sometimes voting rings pop up to promote bad content against the natural behavior of the community, so we take this with a grain of salt. While we track votes as one metric, it’s most useful when combined with others.
Time a post spends on the homepage — This is actually a more interesting number to us. Our algorithm favors sustained engagement on a post, boosting high quality content over low quality content temporarily boosted by voting rings. So the longer a post stays on the homepage, the higher quality it tends to be.
Number of comments on a post — Comments are usually a better proxy to quality than votes, because people only interested in getting a post up a page rarely take the time to leave a comment. Of course this isn’t perfect, and some comments are about how terrible an article is. But in aggregate, across a large sample, this number helps identify quality content.
In addition to on-site supply metrics, interactions with our weekly top posts email and our Twitter account act as reasonable proxies for content quality. If more people click through, retweet, and share, that’s good news. We analyze user behavior in those channels for insight into noisier supply-side metrics.
The GrowthHackers growth team runs via a framework called High Tempo Testing (HTT). HTT is focused on setting and maintaining a testing tempo goal (at least three tests a week!) across our product to accelerate learning. It’s been a big part of our breakout growth in the past few months.
Without tracking the above metrics and below events, we’d have no guideline for generating test ideas or ability to see if our tests ultimately improve our core metrics. And, because we’re looking at effects in behavior over time rather than immediate conversion goals, ensuring we have the right tracking down is critical to finding what changes really worked.
Here’s a look at the events we track to calculate our metrics.
Account Created — Recorded when a user creates an account
Account Verified — Recorded when a user’s account is successfully verified
Collection Viewed — Recorded when a user views a grouping of posts on a single page, like trending or new
Post Viewed — Recorded when a user visits the discussion page of a post
Post Upvoted — Recorded when a user upvotes a post
Comment Added — Recorded when a user adds a comment to a post
Comment Upvoted — Recorded when a user upvotes a comment
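With analytics.js, each of these events is recorded with a `track` call that takes an event name and a properties object. Here’s a sketch using a tiny in-memory stub in place of the real library so it runs standalone; the property names are illustrative, not GrowthHackers’ actual schema:

```javascript
// Minimal stand-in for analytics.js so this sketch runs standalone;
// in the browser you'd call the real `analytics.track(event, properties)`.
const calls = [];
const analytics = {
  track: (event, properties) => calls.push({ event, properties }),
};

// Instrumenting the community events above (property names are illustrative):
analytics.track('Account Created', { username: 'growthfan' });
analytics.track('Collection Viewed', { collection: 'trending' });
analytics.track('Post Viewed', { postId: '123', title: 'High Tempo Testing' });
analytics.track('Post Upvoted', { postId: '123' });
analytics.track('Comment Added', { postId: '123', commentLength: 42 });
```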
If you’d like to learn more about what to track for community sites, check out Segment’s best practice guide. The guide outlines the properties we’re collecting with these events and a few additional events that make sense for communities but don’t relate to our specific KPIs.
Before you can identify your key metrics, you first have to understand the dynamics of your business model. Following this plan, tracking the right data, and executing on our High Tempo Testing process, we’ve hit 59 percent user growth in the past few months!
Hopefully, this look into the community model and our data can help you on your way to growth, too. It’s not magic, just data and discipline.
Michael Brondello on May 7th 2015
Minicabster, a London-based cab company, realized that the campaign interaction data provided by their advertising, email, and social media platforms wasn’t enough to personalize their messages to each user. Instead, they wanted to incorporate data about how riders interacted with their app and website into these campaigns.
To execute on this strategy, they implemented Segment to collect and route their customer behavioral data to third-party tools including Lytics, an audience segmentation application. By creating behavior-based user segments in Lytics for targeted marketing, Minicabster was able to increase bookings by 83 percent.
Minicabster was founded in 2011 as a way for tech-savvy Londoners to request and book taxi cabs. In a fiercely competitive taxi market, Minicabster stays ahead of the competition with multi-channel marketing: they use a mobile app and e-commerce site for bookings, email and push notifications for communication, and social media for advertising and retargeting.
However, each tool has limitations in the types of user activity they capture and can use for targeted campaigns. Minicabster needed a way to aggregate all of their customer’s activity data, so they could make marketing decisions based on a comprehensive profile of their customers.
Today, Minicabster uses Segment to collect all of its customer behavioral data and send it off to tools such as Google Analytics, Mixpanel, and Lytics. With their data streaming in from Segment, Minicabster uses Lytics to discover niches of their audience that display similar behavior and target those segments through other marketing channels.
Before, they targeted email campaigns purely based on email behavior. For example, Minicabster would send “loyalty” campaigns to customers who had opened more than four emails rather than focusing on actual riding activity. Now, they use a combination of cross-channel behaviors to define their targeted segment and to reach the appropriate audience. Instead of targeting customers who open emails, Minicabster uses data about customer purchases, web site visits, and mobile app usage to create segments for marketing.
The first group Minicabster wanted to target was inactive users. Using Lytics, Minicabster found a segment of loyal customers who had taken more than 10 rides, but hadn’t engaged with the brand in over a month.
With information about who churned, Minicabster focused their campaigns on getting those customers back in cabs. They used Lytics to export this segment out to marketing channels where they could reach these customers.
They pushed the segment of users into Facebook and targeted their inactive customers in a re-engagement ad campaign.
To expand the impact of their campaign, they also pushed this segment into MailChimp, where they coordinated a win-back email campaign for the same group of inactive customers.
Instead of investing time and effort on net-new customer acquisition, Minicabster focused on getting the most out of the customer data they already had. Using Segment and Lytics enabled them to target specific groups of customers with relevant messages across multiple channels.
This new approach to data-driven, personalized marketing dropped Minicabster’s average cost of acquisition from $17.84 to $2.59 (converted from British Pounds).
“Within a few days, we integrated customer data using Segment and launched a targeted re-engagement campaign through Lytics which resulted in an 83% increase in the number of cab bookings on Facebook alone.” — Luiz Albuquerque, Minicabster Digital Marketing
To learn more about Segment + Lytics, check out our docs!
Mike Sharkey on April 17th 2015
Today, a surprising 61% of marketers in the US still rely on basic batch and blast emails to communicate with their audience. Only 4% have “graduated” to marketing automation, or creating email campaigns based on the personal experience each user has in your product.
Marketing automation is appealing because you can collect more granular lifecycle data on how customers move throughout your website, app, landing pages, support portals, etc., which in turn lets you send more targeted messages.
In this post, I’ll explain how Autopilot and some of our customers use this approach to create relevant, high-converting email campaigns.
Behavioral marketing, also called behavior-based marketing, is a marketing strategy that utilizes data on how customers interact with an app, website, or service to create a personalized marketing experience. Behavioral marketing strategies incorporate data such as browsing history, on-page actions & events, demographics, cookies, and IP data to create segmented user profiles that can be targeted with greater specificity and effectiveness than traditional one-size-fits-all campaigns.
Behavioral marketing automation refers to tools and processes that businesses use to automatically target users based on behavioral data. This typically means identifying behavior patterns in your target audience and triggering messages based on customers completing (or not completing) specific events within your app, website, or service.
Behavioral marketing is an umbrella term that includes many different digital marketing strategies across numerous channels. Here are some of the most popular and effective.
One of the most common types of behavioral marketing, typically used in ecommerce, is suggested selling. This involves tracking data on a customer’s purchasing behavior and the products they view, and suggesting new products that they’re more likely to be interested in.
Retargeting is a behavioral marketing strategy that involves targeting customers with advertisements that show pages or products that they’ve viewed in the past. Similar to suggested selling, this strategy uses data on past behavior to better understand what customers are interested in and provide ads that are more likely to convert.
In email marketing, marketers often use audience segmentation based on previous engagement to send customized messages that will provide the most relevancy (and ultimately improve click-through rates).
Demographic targeting is a type of marketing segmentation that uses customer data such as age and gender to show advertisements more relevant to that demographic. This type of targeting is often used on social media (where demographic information on users is most readily available).
Now that we’ve walked through what we mean by behavioral marketing, let’s overview some successful examples we’ve seen of businesses harnessing automation to create more effective marketing campaigns.
It’s common for marketers to set up a drip of activation emails based on a single customer action. For example, a user signs up for a free trial, and we send a series of educational emails. Or, a user completes a key milestone in the product, and we send a congratulations email.
A typical activation email string based on a single event.
These drip emails can effectively guide first time users, but with a holistic view of the customer journey we can make the emails more helpful. For example, let’s look at how PandaDoc incorporates user behavior to send more personalized emails.
PandaDoc is a startup that offers smart document automation for sales. They offer four distinct features:
Configure, Price, Quote (CPQ);
Deal Room; and
Before signing up for PandaDoc, users typically browse features, read their blog and compare pricing. These actions give us clues into what features they’ll be most interested in.
To collect this data, PandaDoc creates a Segment track event each time a user signs up, which sends the unified identity of the new user along with previous historical information to Autopilot. This data enables us to create a more tailored welcome email.
Instead of using a single welcome email path, PandaDoc sorts users into tracks with different content based on their behavior history. For example, people who browse blogs and pages about the price quote feature are added to the “Configure Price Quote” Segment. After signing up, they receive a personalized onboarding message about the feature they browsed, resulting in more engagement and conversions.
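The sorting logic might look something like this sketch (the track names and URL patterns are our own illustration, not PandaDoc’s actual rules):

```javascript
// Sketch: sort a new signup into an onboarding track based on the pages
// they browsed before signing up. Track names and URL patterns are
// illustrative, not PandaDoc's actual implementation.
function onboardingTrack(pagesViewed) {
  if (pagesViewed.some((p) => p.includes('configure-price-quote'))) {
    return 'Configure Price Quote';
  }
  if (pagesViewed.some((p) => p.includes('blog'))) {
    return 'Content Reader';
  }
  return 'Default Welcome';
}
```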
While it’s easy to use Segment for tracking positive events like signups or goal completions, you can also use Segment to identify negative events, such as a bug in your software or a common behavioral roadblock, so that you can automatically send tips or engagement offers.
For example, at Autopilot we use Segment events to increase retention by identifying negative behaviors and taking immediate action to overcome them. Behaviors we listen for include:
Failure to complete mail domain setup;
Failed connection to Salesforce CRM due to wrong user permissions;
Incomplete trial sign up due to technical or other issues; or
User encounters a known product issue or bug.
Below is an example of what happens if you encounter an error when signing up for a free trial:
The Segment Sign Up Error track event allows us to immediately recognize the problem and then send a notification email to our support software (Zendesk), which creates a ticket on behalf of our customer. The customer is also notified immediately (via Autopilot), reassured that help is on the way.
In the example above we were able to retain 88% of all signups that resulted in an error.
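A rough sketch of that error-handling flow, turning a Sign Up Error event into a support ticket payload and a reassurance email (the field names and template name are illustrative, not Autopilot’s actual integration):

```javascript
// Sketch: convert a "Sign Up Error" track event into a Zendesk-style
// ticket payload plus a reassurance email to the customer.
// All field names here are illustrative.
function handleSignUpError(event) {
  return {
    ticket: {
      subject: `Sign up error for ${event.properties.email}`,
      body: `Error encountered: ${event.properties.error}`,
      priority: 'high',
    },
    email: {
      to: event.properties.email,
      template: 'help-is-on-the-way',
    },
  };
}
```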
Analytics applications like Mixpanel or KISSmetrics allow you to create product activation funnels for Segment track events. These activation funnels visualize the popular features and the obstacles that users encounter as they explore your product.
Using this event data, you can identify stages or moments when you need to send automated communications to help your users advance to the next behavior in the activation funnel.
In our own product, when people add the Autopilot tracking code (which allows them to track online activities and capture contacts from form submissions), we know that their next likely behavior is to track a form on their site. In order to increase the conversion rate from adding the tracking code to tracking their first form, we built a user behavior journey.
The journey listens for the Segment track event to notify us when the user has added the tracking script.
If true, we update the user’s contact field to note that they have added the tracking script, send an internal notification to our team, and then send a personalized engagement email to the user by first checking to see what other relevant events they have completed.
Have they: a) tracked a form, or b) imported their contact from a spreadsheet? Depending on those, we automatically send an email with tips and links that are specific to their actual usage and stage of the activation funnel.
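That branching logic can be sketched roughly like this (the event names and email template names are our own illustration, not Autopilot’s actual journey):

```javascript
// Sketch: after the tracking script is added, pick the next tip email
// based on which activation events the user has completed.
// Event and template names are illustrative.
function nextTipEmail(completedEvents) {
  const done = new Set(completedEvents);
  if (!done.has('Form Tracked')) return 'how-to-track-your-first-form';
  if (!done.has('Contacts Imported')) return 'import-contacts-from-a-spreadsheet';
  return 'advanced-journey-tips';
}
```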
Using Segment to track the moments when customers engage with your product, then automatically sending resources to help them succeed or reward them for positive behaviors, is a win-win. Users feel more satisfied, and your conversions go up big time.
Being able to personalize your marketing based on how customers use your product or engage with your content is nothing new. Neither is automating your marketing or sending multi-channel messages. But doing so quickly and intuitively, and integrating your various data, product, and engagement systems is often hard to do, slow-going, and costly. That’s why we love using Segment to connect all of our systems like Autopilot, Zendesk, and Mixpanel together seamlessly.
If you’d like to learn how to automate personalized communication based on user behavior, catch the recording of our webinar featuring myself, Diana Smith, Director of Marketing at Segment, plus special guests Serge Barysiu, CTO at PandaDoc, and Lauren Alexander, VP of Marketing at PandaDoc.
Chad Halvorson on April 13th 2015
Marketing automation tools make your life easier by, ya know, automating things. You don’t have to send a welcome email to every customer by hand or individually follow up with folks who’ve gone inactive. You can automate it. The tricky part is finding the right tool for you among the ridiculous number of options on the market.
At When I Work, we use marketing automation tools for everything from lead generation and conversion to content marketing and analytics. Along the way, we’ve evaluated and used a lot of these platforms. Based on our experience, here’s an overview of popular tools and when they’re most helpful.
Customer.io allows you to send targeted messages to users based on what they’ve done in your product, making your emails more relevant and personalized. They also offer the segmentation, comprehensive reporting, and A/B testing features you need to optimize your email marketing.
Category: Customer Activation and Retention
Use if you: Have a growing SaaS or ecommerce business
Extole is a referral marketing tool that turns your customers into brand evangelists. Extole offers a wide range of advocacy products that help you encourage your current fans to bring in new customers by rewarding them. The software also provides analytics, so you can track the ROI on your campaigns.
Category: Referral and Loyalty Programs
Use if You: Want to develop an ambassador or loyalty program
An all-in-one marketing automation tool, Hubspot helps you attract new leads and turn them into customers. Hubspot features include email automation, landing page creation, analytics, lead scoring, and a built-in customer relationship management (CRM) system.
Category: CRM and Lead Generation
Use if You: Want to start investing more in content marketing
Another great all-in-one solution, Marketo is a strong B2B marketing automation platform. Not only does the software help with your lead management, email marketing, and customer tracking campaigns, Marketo also offers a variety of plans that make it an affordable, viable option for nearly any B2B company.
Category: Lead Generation and Content Marketing
Use if You: Want an all-in-one marketing automation system that plays nice with Salesforce
Whether you run an ecommerce site or a brick-and-mortar store with an online presence, Bronto is for you. Bronto offers marketing automation solutions for following up on shopping cart abandonment, as well as helping you run post-purchase campaigns and connect with ecommerce integrations.
Category: Lead Nurturing and Conversion
Use if You: Have an ecommerce or retail-based business
Making communication with your customers personal and relevant is always a challenge, but it’s even more difficult when you’re trying to automate it. Autosend can help! This program allows you to automatically send email, text, and in-app messages to your customers based on their prior actions on your site. From welcoming new customers to sending reminders to upgrade, Autosend has you covered.
Category: Customer Service and Activation
Use if You: Want to communicate more consistently with prospects and customers
If you’re interested in automating responses to customer behaviors, you might also want to check out Blueshift. Blueshift automates behavior-based messaging across many channels including email, push notifications, Facebook, and display ads. You can use Blueshift’s behavioral segmentation to find groups of users that are 3-10X more likely than the average to perform actions like repeat purchase, activation, or churn. With multi-channel touch points, Blueshift is a great option for B2C companies.
Category: Segmenting and Multi-channel Messaging
Use if You: Want to target users across email, web, and mobile.
Autopilot is a new marketing automation tool focused on customer journeys. You can easily build a lifecycle marketing campaign with a drag-and-drop interface. They also support email and SMS, and offer a number of guides to help you get started.
Category: Lifecycle Marketing
Use if You: Want a visual interface for designing your campaign
Iterable also offers a clean visual interface for creating campaigns and can handle transactional, promotional, and lifecycle emails. They offer A/B testing for up to 50 variations and auto-implement the winner. Iterable also helps you easily test your messages on different email clients, which can be tricky to get right.
Category: Lead Nurturing and Customer Retention
Use if You: Want a single email platform with A/B testing front and center
If you want your customers to take action, you’ve got to tell them what to do! Outbound helps you do this automatically through email, mobile push, or SMS. You can use the program to set goals and test messages for users in each step of your funnel, and since Outbound doesn’t require coding, your A/B tests and resulting changes are easy to implement.
Category: Customer Service and Acquisition
Use if You: Want to automate communication with potential customers
As its name implies, Drip is a marketing automation platform that allows you to send information to customers or leads at specific intervals. Drip makes it easy for you to slowly feed your communication to customers, whether you’re teaching a free class or want to follow-up on a marketing campaign.
Category: Email Marketing and Lead Nurturing
Use if You: Want to set up a lead nurture campaign or email course
If you build apps, you’ve got to know whether people like them and how frequently they’re being used. Localytics is an analytics program that’s focused exclusively on mobile, offering demographics, usage, and session information in addition to retargeting and automated messaging systems. With Localytics, you can find users having trouble and automatically message them.
Category: Mobile Analytics and Messaging
Use if You: Want an all-in-one platform for app analytics and messaging
Do you ever wish you could watch your customers interact on your site and talk to them if problems arise? Similar to Localytics for apps, Intercom allows you to analyze customer behavior on your site and communicate with customers 1-to-1 or via automated campaigns. You can guide new users through the onboarding process or streamline your customer support with “conversations”. Considering how important customer service is, finding the right tool for customer success is a worthwhile investment.
Category: Customer Service and Acquisition
Use if You: Want to run support conversations and email automation from a single platform
Choosing a marketing automation tool can be a difficult task since there are so many options out there. Hopefully these descriptions have helped you narrow it down! Once you have a few in mind, I’d suggest using Segment to test them against each other without wasting time or resources on installing each individual tool.
If you’re like us, you might find that you actually need two to get the job done. Right now, we’re using Vero and Drip through Segment to cover all of our bases.
Kelsey Ricard on April 10th 2015
The mobile commerce (m-commerce) space has grown 42 percent annually for the past four years—more people are becoming comfortable spending money on their phones, and more retailers are investing in mobile to woo these small screen shoppers. But as the market grows, so does the competition for thumb space and user retention.
To compete, you need to win over users quickly, and A/B testing can help. We suggest focusing on these three areas of the user experience when you’re getting started with m-commerce experiments:
A great first-time user experience is critical for users to adopt your app. It’s so important that at any given time more than 50 percent of Taplytics’ customer experiments are focused on improving onboarding. These experiments might test the content and imagery in the first few screens of an app, or the process of getting a user to submit information. Whatever you choose to experiment on, the goal is the same—help your users experience the magical moment of making a purchase as quickly as possible.
Let’s take Frank & Oak for example. The men’s fashion company sets themselves apart by creating a custom shopping experience tailored to each individual’s interests and behaviors. But to get the personal experience, users need to sign up on Frank & Oak’s app first.
As a result, the Frank & Oak team started their A/B testing initiatives by tweaking the signup flow. They first tried changing the form fields and adding the ability to sign up via Facebook. After the first experiment, they tested whether an additional option to log in with Google would increase signups. You can see the final variation below.
It turns out that adding a “connect with Google” button increased mobile signups by 150 percent.
While Frank & Oak saw impressive results from a few simple tests, you may need to go through a few more iterations before you find a winning strategy. Here are some onboarding tests you can run to help get your users through your sign up process and start browsing items.
Text and Imagery
Sign up Options
Once your users get through the onboarding process, it’s time to activate them. In commerce, this means getting them to purchase something as quickly as possible. Our research shows you need to get them to buy within the first two sessions, or it’s unlikely they will ever come back.
For example, when Karmaloop analyzed their customer activation data they noticed that if a first time user placed an item in their Wish List they were much less likely to complete the purchase compared to if they added an item to their cart. Once Karmaloop identified this trend, they set up an experiment to deemphasize the Wish List button in the UI.
Discouraging users from interacting with their Wish List allowed Karmaloop to better capture purchase intent. This simple test drove more activations and increased sales by 35 percent.
What can you learn from their experiment? When you’re looking to improve your customer activation, challenge the status quo. Don’t assume that a current feature is achieving your goals. Instead, test your assumptions and use funnel analytics to help brainstorm ideas.
You pushed customers through onboarding and activation—they bought something! Woot! But you’re not done. The last piece of the puzzle in mobile commerce is retention. How do you turn customers into repeat buyers?
There are many strategies m-commerce companies use to increase retention. One effective strategy is hosting targeted sales on a regular basis. For example, Rue La La keeps a “What’s Hot” section in their app specifically to develop scarcity and drive repeat purchases. Other apps achieve it through targeted notification campaigns, whether via email or push.
In the Rue La La example, they could easily test the effectiveness of the “What’s Hot” section on user retention by creating an experiment where “What’s Hot” is replaced with another section of the app, or taken out altogether. The team would then analyze whether cohorts who saw the section came back and purchased more often. Other ideas for testing retention include:
Last chance section
Adding discounts or promos for in-app purchases
Changing photography to make items more appealing
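Whichever variation you test, the analysis is the same: compare repeat-purchase rates between the cohort that saw the feature and the cohort that didn’t. A toy sketch with made-up numbers:

```python
def repeat_purchase_rate(cohort):
    """Fraction of users in a cohort who purchased more than once."""
    repeaters = sum(1 for purchases in cohort.values() if purchases > 1)
    return repeaters / len(cohort)

# Purchases per user -- hypothetical data, not Rue La La's numbers.
saw_whats_hot = {"a": 3, "b": 1, "c": 2, "d": 2}
control       = {"e": 1, "f": 1, "g": 2, "h": 1}

print(repeat_purchase_rate(saw_whats_hot))  # 0.75
print(repeat_purchase_rate(control))        # 0.25
```

If the gap between cohorts is large and holds up over enough users, the feature is earning its screen real estate.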
The tests discussed here offer ideas to get you started, but to be successful with A/B testing, you’ll have to continually challenge the norm and make sure your team is invested in listening to the results of A/B tests. Some of the biggest improvements can come from the smallest tweaks or even testing a feature you thought was performing well. Once you find a way to optimize your app, you’ll need help from your team to rally around making the change permanent.
If you are new to A/B testing and want to learn more about A/B testing best practices, and how to create a culture that drives experimentation, check out the Mobile Growth Academy at Taplytics for some helpful tips and tricks.
Kevin Niparko on February 24th 2015
For the first couple years at Segment we solved most of our biggest business problems using regular ol’ gut instinct. Everyone was in constant conversation with our initial customers, so it was okay to use intuition to choose our pricing model or prioritize new features. But as we’ve grown, we needed a more analytical approach. About six months ago we started using SQL for analysis, and that’s completely changed how we answer questions about our business.
Direct access to SQL has made us much faster at answering questions, and more teammates are digging in themselves instead of relying on intuition or being blind to siloed information. There are a number of tools on our platform that have made querying and sharing our data around the company much easier.
This article will show you how we use Segment SQL + Mode to speed up our decision-making. We use Mode to query and build reports that can be run by anyone on the team (even non-technical people!). There are a ton of great SQL tools out there, so make sure you find the right SQL stack for your business case.
Pulling data sources like Zendesk and Stripe together was so cumbersome that cross-source analysis was rarely worth the cost. Each tool’s out-of-the-box reporting was good, but we really needed to combine data across sources. So although the raw data was available, our teams were essentially still in the data dark ages.
For example, our conversion funnel took weeks to get right, and we were constantly bothering our engineers for changes. For our sales team to function, we wrote a plugin for our chat-bot hermes to generate a .csv file of all of our enterprise clients. Another plugin would tell us how many users were using different integrations on our platform. Here’s the code:
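The original plugin source isn’t reproduced here, but the shape of such a one-off is simple: filter accounts, write a CSV. A hypothetical sketch (function and field names are ours, not the real hermes plugin’s):

```python
import csv
import io

def enterprise_clients_csv(accounts):
    """Dump enterprise accounts to CSV text, the way a chat-bot
    plugin might, so a sales rep can grab it on demand."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["name", "plan", "mrr"])
    writer.writeheader()
    for account in accounts:
        if account["plan"] == "enterprise":  # keep only enterprise clients
            writer.writerow(account)
    return out.getvalue()

# Hypothetical account records
accounts = [
    {"name": "Acme", "plan": "enterprise", "mrr": 5000},
    {"name": "Tiny Co", "plan": "starter", "mrr": 49},
]
print(enterprise_clients_csv(accounts))
```

A few dozen lines each, but every one of them was an engineering interruption.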
Each one of these one-off programs distracted our engineering teams from building a great product, and blocked product, success, sales, and marketing teams from making decisions as they waited for data.
When we started building Segment SQL, it was meant to be a data warehouse for our largest customers, who were already loading raw Segment data into Redshift. But during alpha we gave it a spin ourselves and found that even for a company of our size (25 people at the time), getting our data into SQL was really powerful.
We were already tracking events like this:
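The original snippet isn’t shown here, but the shape of a Segment track call is just an event name plus a bag of free-form properties. In Python-dict form (the event and property names are illustrative):

```python
# What a `track` call records: who did what, with what properties.
event = {
    "userId": "user-123",
    "event": "Upgraded Subscription",
    "properties": {
        "plan": "business",  # illustrative property names
        "mrr": 349,
    },
}
```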
And all of a sudden, these events were also available to us in SQL:
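Once the events land in the warehouse, a question like “how many users upgraded?” becomes a one-liner. A runnable sketch using sqlite3 as a stand-in for Redshift (the table and column names are illustrative, not Segment’s actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (user_id TEXT, event TEXT, sent_at TEXT)")
conn.executemany(
    "INSERT INTO tracks VALUES (?, ?, ?)",
    [
        ("u1", "Signed Up", "2015-01-02"),
        ("u1", "Upgraded Subscription", "2015-01-20"),
        ("u2", "Signed Up", "2015-01-05"),
    ],
)

# How many distinct users upgraded their subscription?
upgraded = conn.execute("""
    SELECT COUNT(DISTINCT user_id)
    FROM tracks
    WHERE event = 'Upgraded Subscription'
""").fetchone()[0]
print(upgraded)  # 1
```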
For the first time we could see how users went from signing up, to sending in support tickets, to upgrading subscriptions. We were able to visualize sign-up rates for users who read a blog post and submitted a help-desk ticket, or track average time spent on the pricing page. Life was pretty good.
But we still found that running queries and building reports was restricted to a small group of SQL power users.
Those who didn’t know SQL stood in increasingly long bread lines. While some teams were able to query the data they needed, reports remained siloed. We were also wasting time re-creating queries that had been written by other teams, or resolving differences in queries across teams.
Enter Mode. Mode makes it easy to create on-the-fly analysis and share it across the organization. As part of our New Year’s resolutions, many around the office decided to get better at SQL. Mode’s SQL School was an amazing place to start (and our marketing team also recommends Periscope’s SQL for marketers!). Mode even came onsite to give us some hands-on training.
With SQL, questions that previously took an engineer a ton of time to answer could now be answered with a simple query. And with Mode, these queries are stored as reports that can be shared and re-run by anyone in the organization with a Mode account.
In a matter of weeks, we went from a handful of reports to over a hundred. And now it wasn’t just a few of us creating them, it was every team: from marketing, to partners, to success, to product. Teams were no longer waiting on engineers to access the data, they were querying and making decisions completely by themselves. And better yet, teams were seeing each other’s queries and building on them to make more powerful ones.
This is our actual Mode feed from the time of writing. I think it gives a good snapshot of just how integral SQL (and sharing queries with Mode) has become to our team!
As an analyst at Segment, I’m always interested to see what reports other teams are creating. Every once in a while I’ll troll our team’s new reports page to see what people are learning. Here’s a bunch of my favorites:
Remember the conversion funnel that took us a ton of engineering hours to get? Well, here it is using regular-old SQL! Not only does it save us time, but SQL allows us to do custom funnel analysis that’s tricky to do with out-of-the-box tools. For example, we can easily add caveats to exclude users that received an invite or submitted a support ticket!
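A stripped-down version of that kind of funnel, runnable against sqlite3 (the schema and event names are illustrative, not our production queries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (user_id TEXT, event TEXT)")
conn.executemany("INSERT INTO tracks VALUES (?, ?)", [
    ("u1", "Viewed Pricing"), ("u1", "Signed Up"),
    ("u2", "Viewed Pricing"), ("u2", "Signed Up"), ("u2", "Received Invite"),
    ("u3", "Viewed Pricing"),
])

# Funnel: pricing page -> sign-up, excluding invited users
# (they'd likely convert regardless, so they skew the organic numbers).
row = conn.execute("""
    SELECT
      COUNT(DISTINCT user_id) AS viewed,
      COUNT(DISTINCT CASE WHEN event = 'Signed Up' THEN user_id END) AS signed_up
    FROM tracks
    WHERE user_id NOT IN (
      SELECT user_id FROM tracks WHERE event = 'Received Invite'
    )
""").fetchone()
print(row)  # (2, 1)
```

The `NOT IN` subquery is the kind of caveat that’s painful in a point-and-click funnel tool but trivial in SQL.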
We love our product team’s interactive user flow analysis, which lets you get a feel for how users are spending time in our app. We borrowed this one from the Mode playbook!
This is a snapshot of the number of accounts using each of our integrations, which helps our Partners team prioritize partner outreach and co-marketing. While it looks like a simple-enough bar chart, this report used to take a bunch of engineering hours to produce!
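Under the hood, a chart like that is one GROUP BY away. A toy version against sqlite3 (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE integrations (account_id TEXT, integration TEXT)")
conn.executemany("INSERT INTO integrations VALUES (?, ?)", [
    ("a1", "Google Analytics"), ("a2", "Google Analytics"),
    ("a1", "Mixpanel"),
])

# Accounts per integration, biggest first -- the bar chart's data.
rows = conn.execute("""
    SELECT integration, COUNT(DISTINCT account_id) AS accounts
    FROM integrations
    GROUP BY integration
    ORDER BY accounts DESC
""").fetchall()
print(rows)  # [('Google Analytics', 2), ('Mixpanel', 1)]
```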
Our success team has done some analysis combining ticket data from Zendesk with things like subscription plans and usage to help understand our gross margin. Above is a peek at our daily ticket volumes by business customers. Our amazing success team will never pass up an opportunity to help a customer, even on the weekends!
SQL has helped our teams build, learn and share faster, which is helping us move quicker and make more informed decisions. We’ve found that Mode’s Github-like approach to SQL encourages a ‘build-upon-the-past’ mentality that saves us all – not just the analysts – a ton of time.
If you’re just getting started with SQL or interested in learning more, our partners have a ton of great content to get going. Mode’s SQL playbook is a good jumping off point for some common patterns, or if you’re just getting started with Segment + SQL, our SQL partners JackDB, Looker, Chartio, and Xplenty have some good guides to get you set up!