
Growth & Marketing


All Growth & Marketing articles

Erin Franz on January 29th 2015

This post is the second in a three-part series from Looker, a Segment Warehouses partner, sharing strategies to solve common analytics conundrums. On to user sessions!

As we mentioned in the first post, universal user IDs are the foundation for complex web and behavioral analysis. The next step is tying those single identities to a string of actions, which is what we call a user session.

Some out-of-the-box tools will do this for you automatically. However, when the reporting happens under the hood, it’s hard to know exactly how a user session is calculated, and whether that definition is right for your business. Instead, we recommend using SQL and LookML to build out user sessions yourself.

By building metrics from event-level data from the ground up, you can design them for your use case, gain greater flexibility, and dig deeper into the details. Starting from a universal user ID, this post will show you how to sessionize user activity.

What Is a Session?

Simply put, a session is a string of user interactions that ends when specific criteria are met. For example, a user session might be defined as every action a user takes from the moment they land on your app until they log off or go idle for 30 minutes.

Sessions are the building blocks of user behavior analysis. Once you’ve associated users and events to a particular session identifier, you can build out many types of analyses. These can range from simple metrics, such as count of visits, to more complex metrics, such as customer conversion funnels and event flows (which we’ll approach in the third post). Assuming we already have a Universal Alias ID methodology in place, we can also examine session metrics over time, such as user lifetime values and retention cohorts.

Many prepackaged web analytics solutions include sessionization, but they typically implement a very “black box” version, giving the end user little insight into or control over how sessions are defined.

Different businesses may want to vary session definitions by the amount of user inactivity required to terminate a session, how much buffer time is added to the last event in a session, and any other custom requirements pertinent to a specific application’s use case. They might also want to vary session logic across devices, since behavior differs greatly by screen size. For instance, in e-commerce, mobile users are more likely to be researching on their phones before purchasing in store, compared to desktop web users.

With Looker’s custom LookML modeling layer, you have complete control over how sessions are defined.

Universal User ID Refresher

In the previous post, we covered how to create a map from disparate user identifiers to a single Universal Alias ID. This process is integral to creating the most accurate event analytics, and therefore session definitions. Thinking about it simply for one user as they log in:

The diagram shows the transition from Anonymous ID to User ID. If we didn’t map A1 and U1 to a common identifier, we’d either create two distinct sessions by using both IDs, drop all the pre-login events if we chose only the logged-in user ID, or lose the link between those events and subsequent visits if we chose only the anonymous ID. Setting up a Universal Alias Mapping resolves this issue, so we get one session encompassing both pre-login and post-login events, with user traceability to future sessions.

How to Build Sessions

Session creation can be completed in Looker in a few steps using persistent derived tables (PDTs). PDTs let you create views from SQL queries whose results are written to a scratch schema that Looker has write access to. These views can then be referenced throughout the model.

First, let’s persist the critical elements of the tracks table mapped to the Universal Alias ID. You’ll see that we’ve left joined in the existing universal_alias_mapping table that we previously created to accomplish this. The sql_trigger_value determines how often the table is updated and can be customized to the implementation. In this case, I’ve set it to when CURRENT_DATE changes, or each day at midnight.
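Stripped of its LookML wrapper, the SQL behind that derived table might look roughly like this sketch (the universal_alias_mapping table is the one built in the first post; the other names are illustrative):

SELECT
  COALESCE(m.universal_user_id, t.user_id, t.anonymous_id) AS mapped_user_id,
  t.event,
  t.sent_at
FROM tracks AS t
LEFT JOIN universal_alias_mapping AS m
  ON COALESCE(t.user_id, t.anonymous_id) = m.alias

In LookML, a query like this would sit inside a derived_table definition with something like sql_trigger_value: SELECT CURRENT_DATE, so the PDT rebuilds once a day.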

From these mapped tracks, we’ll create our sessions in another persistent derived table. Let’s assume our session definition relies primarily on elapsed inactivity time. First, we’ll need to determine the amount of idle time between the events themselves. This step can be accomplished using the lag() function in Redshift.
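For example, against the mapped tracks above (names illustrative), a sketch of the idle-time calculation might be:

SELECT
  mapped_user_id,
  sent_at,
  -- minutes since this user's previous event; NULL for their very first event
  DATEDIFF(minute,
           LAG(sent_at) OVER (PARTITION BY mapped_user_id ORDER BY sent_at),
           sent_at) AS idle_time
FROM mapped_tracks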

Idle_time is the time in minutes between the current event and the previous event for the mapped user. Once we have that value we can set how much inactivity is necessary to terminate a session, and then begin a new one on return. In this case, we’ll assume this is 30 minutes, which is a commonly used timeout window, but this can be assigned to any time value based on the duration you’d expect between your events when a user is active.

Our original query is now aliased as the subquery lag. From that base, we can select only events where lag.idle_time is greater than 30 minutes or where lag.idle_time is NULL (meaning it’s the user’s first event, and therefore the start of their first session).

The timestamps of these events are the session start times. The session identifier can simply be taken from the combination of the User ID and the session sequence number, which will be unique for each session. A session sequence number is assigned by choosing the row number after ordering the sessions by start time and partitioning by user. Boom! Now we have the ability to calculate metrics such as sessions per day and sessions per user.
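Putting those pieces together, a sketch of the session-generation query (with illustrative names and the 30-minute timeout) might look like:

SELECT
  lag.mapped_user_id,
  lag.sent_at AS session_start_at,
  -- number this user's sessions in chronological order
  ROW_NUMBER() OVER (PARTITION BY lag.mapped_user_id ORDER BY lag.sent_at) AS session_sequence_number,
  -- a simple unique session identifier: user ID plus sequence number
  CAST(lag.mapped_user_id AS VARCHAR) || '-' ||
    CAST(ROW_NUMBER() OVER (PARTITION BY lag.mapped_user_id ORDER BY lag.sent_at) AS VARCHAR) AS session_id,
  -- the start of this user's next session, used below to bound the Sessions Map join
  LEAD(lag.sent_at) OVER (PARTITION BY lag.mapped_user_id ORDER BY lag.sent_at) AS next_session_start_at
FROM (
  SELECT
    mapped_user_id,
    sent_at,
    DATEDIFF(minute,
             LAG(sent_at) OVER (PARTITION BY mapped_user_id ORDER BY sent_at),
             sent_at) AS idle_time
  FROM mapped_tracks
) AS lag
WHERE lag.idle_time > 30 OR lag.idle_time IS NULL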

Tying Up Sessions

Now, we’ve marked the events corresponding to each session start, but we still need to identify the events contained within the session and the events ending the session. This will require another step in Looker.

Note that previously, in the session query, in addition to determining the start time of the current session, we also determined the start time of the next session, which will be important here. Once we’ve persisted the sessions we’ve just generated in a PDT, we can create what we refer to as a Sessions Map.
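A rough sketch of that join (the production version lives in the LookML model; names are illustrative):

SELECT
  t.mapped_user_id,
  t.event,
  t.sent_at,
  s.session_id
FROM mapped_tracks AS t
JOIN sessions AS s
  ON t.mapped_user_id = s.mapped_user_id
 AND t.sent_at >= s.session_start_at
 -- a user's most recent session has no "next" session, so treat a NULL upper bound as open-ended
 AND (t.sent_at < s.next_session_start_at OR s.next_session_start_at IS NULL)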

Referring back to our Mapped Tracks table, we’ll join the newly created Sessions table on user_id, where the tracks event timestamp (sent_at) is between the session start (inclusive) and the next session start (exclusive). This enables us to assign a session ID to each event in Mapped Tracks. Then we can determine the end of each session using a query involving the persisted Sessions, Tracks, and newly created Sessions Map.
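In sketch form, that session-end query might look something like this (again with illustrative names; the 30-minute buffer is discussed below):

SELECT
  s.session_id,
  s.mapped_user_id,
  s.session_start_at,
  -- the session ends at the next session's start, or 30 minutes after its last event, whichever comes first
  -- (for a user's most recent session next_session_start_at is NULL; COALESCE it if your LEAST() treats NULLs differently)
  LEAST(s.next_session_start_at,
        DATEADD(minute, 30, MAX(m.sent_at))) AS session_end_at,
  COUNT(*) AS tracks_count
FROM sessions AS s
JOIN sessions_map AS m
  ON s.session_id = m.session_id
GROUP BY s.session_id, s.mapped_user_id, s.session_start_at, s.next_session_start_at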

Using the LEAST() function in Redshift enables us to determine the end time of the session – the next session start time or the time of the last event of the current session plus 30 minutes, whichever comes first. This 30 minute value is the “buffer” time added to the timestamp of the last event, and can be customized easily in the SQL as is relevant to the use case. In addition, we can compute the number of events in the session in this step. Persisting the resulting table (we’ll refer to this as Session Facts) and joining it back to Sessions gives us the complete basic session metrics: User ID, Session ID, Session Start, Session End, and Events per Session.

Using Sessions to Perform Bounce Rate Analysis

Now that we’ve built our customized sessions, we can go ahead and analyze them. Custom session generation enables you to know exactly what your metrics mean and get the most value from them, since you define how they are built – from start to finish. With this method, you can analyze session counts, session duration, events per session, new/repeat usage, user cohort analysis and much more. To give an example, we’ll focus on a commonly used metric in web analytics based on sessions: bounce rate.

Bounce rate can be a great measure of how “sticky” your content is, since it measures how many people drop off before they become engaged with your application. A bounced session is usually defined as a session with only one event. Since the number of events per session is defined in our Session Facts view, you can easily add an is_bounced dimension to identify bounced sessions in your model.
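In SQL terms, the dimension is just a boolean flag on the per-session event count, along the lines of this sketch (tracks_count being the illustrative events-per-session column from Session Facts):

SELECT
  session_id,
  (tracks_count = 1) AS is_bounced
FROM session_facts

In LookML this would typically be a yesno dimension whose sql compares the event count to 1.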

Assuming we’ve joined the Session Facts table to Sessions in the LookML model, we can now use the is_bounced dimension to segment sessions. A simple example is daily trending bounce rate and overall percentages. These elements can additionally be dimensionalized by any other attributes from the tables we’ve joined in, such as geography, time, and campaign to determine their respective “stickiness” or user engagement.

You Can Do It!

Sessionization provides the most value when it can be customized specifically for your use case and desired definitions. To accomplish this, you need a solution that gives you the flexibility to define your own rules for structuring user event behavior for your business. We build on these custom user sessions in the third blog post of this series, where we extend sessionization to custom funnel and event flow analysis. Check it out!

Want to learn more about Looker and Segment? Let us know.

David Cook on January 26th 2015

Our awesome customer, David Cook from Jut, graces the blog to tell us how he created an actionable product usage display with event tracking + Segment + Keen + LED lights. Read on!

What is an Information Radiator?

Now that companies are collecting tons of data on what users are doing in their products (you are collecting user data, aren’t you?), the challenge shifts from figuring out what users are doing to disseminating that information to the company.

Obviously, you can discuss the data you’ve analyzed with team members and summarize data at company meetings, but in those cases you’re usually just sharing a snapshot of historical data. If you really want to be a data-driven organization, you have to constantly beat the data drum and make data accessible to everyone at all times.

Enter the information radiator. Alistair Cockburn coined the term and defines it like this:

“An Information radiator is a display posted in a place where people can see it as they work or walk by. It shows readers information they care about without having to ask anyone a question. This means more communication with fewer interruptions.”

The most common information radiator is a wallboard – a TV that constantly displays information. However, humans have a natural instinct to tune out things that aren’t relevant.

If you’re not careful, your team might unconsciously start ignoring your wallboard. This is why, while wallboards are great for most kinds of data, it’s valuable to take advantage of alternate means of constantly communicating information to your team.

My data communication tool of choice is LEDs.

What’s So Great About LEDs?

LEDs add depth to your data visualization arsenal. They’re more engaging than a wallboard because they’re unusual. Everyone sees TVs on a daily basis, whereas few people see LEDs that actually display information rather than simply act as a source of light. You may be more constrained with the characteristics you can alter (color, brightness, and speed) but LEDs still provide plenty of flexibility to display meaningful information.

You can configure LEDs in a variety of shapes: as individual pixels, arranged into a matrix, or strung together in a strip.

We use LED strips because they’re easier to set up and allow you to construct a larger installation that more people are likely to see. Specifically, we use Heroic Robotics’ LED strips and control them with the PixelPusher. This combination simplifies the act of controlling individual LEDs on the strip.

Displaying User Actions with LEDs

So now that you have some LEDs and data, you need to figure out exactly how you want to unite the two.

It’s best to measure and display the moments where users derive value from your product, so your team can strive to make more of those moments occur.

At Jut, the main way our users derive value from our product is by writing and running programs in our new language, Juttle, that retrieve and display data. Consequently, we fire an analytics event whenever one of our users takes this action. To visualize this, we decided to send a pulse of light down an LED strip for each of these events.

To give us the flexibility to use any tool to track product usage, we decided to implement Segment to control our event reporting.

Segment makes it easy for us to send these events to services like Intercom and Google Analytics with a single integration. We take advantage of Segment’s webhooks feature to send events to our own database, and use it to push data to Keen, where we get the LED signals from.

In our closet, we have a server that runs a Java program. This program communicates with the PixelPusher over our network to tell it what to do with our two LED strips. Every minute, the program pings Keen to request the timestamps and status of the Juttle programs run in the last minute.

The Java program then plays back the last minute of Juttle program runs. In other words, the PixelPusher sends pulses down the strip at the same pace as the Juttle programs were run instead of sending the pulses all at once. That means if you go to our website and run a Juttle program, one minute later a green pulse will shoot down the LED strip in our office like this:

Here is the repo if you’d like to check it out.

How We Use the Data

Unfortunately, like any program or SQL query, not every Juttle program runs perfectly. Sometimes they encounter a runtime error and fail. We want to avoid those, so when that happens, a red pulse goes down the strip instead of a green pulse.

You might want to send pulses for other actions as well. For example, we send an orange pulse down the strip whenever someone signs up for our beta. To differentiate it from the Juttle run pulses, we don’t rely on just changing the color; we also vary the speed at which it moves down the strip. It’s half as fast as a typical pulse.

All in all, this creates a more lively atmosphere in the office. We can easily see how active our users are in near real time. If it seems like we’re experiencing an unusual amount of traffic, we can turn to our wallboard to see how the current level compares to historical levels.

What else do you think you could use LEDs for?

Let us know on Twitter @jut_inc!

Erin Franz on January 20th 2015

We welcome Erin Franz, data analyst at Looker, to the Segment blog! This is the first post in her three part series sharing practical advice for common analytics conundrums: accurately identifying users, creating sessions for user activity, and event path analysis.

Let’s dive into the first topic — how to work with user IDs.

The Problem of Disparate User IDs in Web Analytics

The user is at the center of every event in web analytics. An ID for that user is assigned to each event, but this identifier is only as accurate as the context and timing of the event.

For example, what happens if a user visits our website on their laptop, and then visits again on their mobile phone? A login or authentication process brings us closer to a single identifier for this user — but what about events pre-login? What if a user changes their email or username?

Raw events are often isolated or separated by their assigned user ID, preventing true user identification when these events are analyzed on their own.

Why Do We Need a Universal User ID?

Raw event counts are unaffected by user ID discrepancies. The number of logins, pageviews, cart adds, etc. will be the same no matter how the user is identified. But, these event counts mean a lot more if coupled with an accurate count of users who are active on your application, and the ability to trace these users over time to measure conversion funnels.

The majority of advanced analytics, from user behavior to retention, relies on user identification that accurately tracks users through all of their visits.

If we aren’t careful about stitching these user identifiers together, active user counts can get inflated by double-counting the pre- and post-login IDs — and only partial visit behavior can be examined because a single user’s activity gets split into two separate “users.” For example, without joining pre- and post-login behavior through a universal ID, it’s impossible to identify which campaigns or site features are most likely to lead a customer to pay after signing up.

How to Define a Universal ID with Looker and Segment

Luckily, we can make our analyses accurate by using Looker and the data provided by the Tracks and Aliases tables in Segment Warehouses. What’s recorded in Tracks and Aliases will vary with the Segment implementation, but we’ll make some general assumptions for this example.

Track events, which record customer interactions, such as “Signed Up” and “Completed Purchase,” automatically collect the current anonymous ID (created pre-login) and user ID (created upon login) when available for each event. Alias records when a user ID changes by marking a previous ID (before the change) and a current user ID (after the change).

We can simplify a typical user event-tracking scenario using the below diagram, where A represents pre-login anonymous IDs and U represents authenticated user IDs. Current state is on the left, and our goal is to transform this data to look like the diagram on the right.

The left of the diagram represents one user with three visits to the application, resulting in events reflected by five IDs. Consider this scenario: On the first visit in this sequence, the user with Anonymous ID A1 logs in as username U1. Events associated with that user have either or both A1 and U1 assigned to them. Simple.

But on the second visit, the user’s Anonymous ID is assigned as A2 and the user logs in as U1, but then changes their username to U2, resulting in events with three distinct identifiers: A2, U1, and U2. Lastly, the user returns with Anonymous ID A3 and logs in simply with the new username U2.

The ultimate goal here is to provide an accurate mapping from any one of the five identifiers to one single user ID, as demonstrated in the diagram to the right and below it in Table 2, the Universal Alias Mapping table.

Query Recipes for Consolidating User IDs

You can accomplish this in a few steps in Looker. First, we need to create the Alias to Next Alias mapping table, as shown in Table 1. This will consist of all the possible combinations of Anonymous ID and User ID from your Tracks tables, to map pre-login to login IDs. Additionally, we’ll need to include all the possible combinations of Previous ID and User ID from the Alias table, to map User IDs to changed User IDs. The union of both result sets will yield Table 1.
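In Redshift, that union might look roughly like this sketch (using Segment’s tracks and aliases tables; alias and next_alias are just the Table 1 column labels):

SELECT anonymous_id AS alias, user_id AS next_alias
FROM tracks
WHERE anonymous_id IS NOT NULL
  AND user_id IS NOT NULL

UNION  -- UNION (rather than UNION ALL) de-duplicates the combinations

SELECT previous_id AS alias, user_id AS next_alias
FROM aliases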

Once we have this result set, we need to map all values in the Alias column to a Universal Alias, which will be the most current user ID. In this case it is U2. This can be achieved by joining the table onto itself many times where the prior table’s next_alias equals the joined table’s alias attribute.

In some SQL dialects the number of joins can be made dynamic via a Recursive CTE, but in Redshift this function is not available — so we’ll just accomplish the same thing by joining a finite number of times (more times than a user ID could ever be re-aliased).
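As a sketch, with alias_mapping standing in for the Table 1 result from the previous step:

SELECT
  a1.alias,
  -- walk forward through the re-alias chain and keep the last (most recent) ID found
  COALESCE(a6.next_alias, a5.next_alias, a4.next_alias,
           a3.next_alias, a2.next_alias, a1.next_alias) AS universal_user_id
FROM alias_mapping AS a1
LEFT JOIN alias_mapping AS a2 ON a1.next_alias = a2.alias
LEFT JOIN alias_mapping AS a3 ON a2.next_alias = a3.alias
LEFT JOIN alias_mapping AS a4 ON a3.next_alias = a4.alias
LEFT JOIN alias_mapping AS a5 ON a4.next_alias = a5.alias
LEFT JOIN alias_mapping AS a6 ON a5.next_alias = a6.alias
-- in practice you may need a DISTINCT or GROUP BY to collapse an alias that maps to several IDs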

The above logic left joins the original table to itself up to five times on next_alias = alias. This creates a row for each original alias mapped through, at most, five user ID changes. (More or fewer joins can be used, depending on the context of the Segment implementation.)

The most recent value for the alias is chosen by coalescing backwards through the last available value to the first, resulting in the universal ID for that alias. This value is then optionally anonymized to a numeric value using a hashing algorithm. The end result is a mapping table as modeled in Table 2.

Measuring Accurate Active User Counts

In Looker, this mapping table can be updated automatically at any frequency and stored in a materialized table. We can then reference it throughout our Segment Warehouses LookML model. The view file abstracts the result set from the underlying query, creating the mapping table. As a result, the end user is only exposed to the result set and doesn’t have to worry about any underlying complexity, while the data analyst can easily modify the logic if necessary.

In this case, we’ll join the Universal Alias Mapping view to the Tracks view in our model file. The Tracks view creates a similar abstraction for the Tracks table in Segment.

This LookML syntax ensures that whatever ID column is populated in the Tracks table (user ID or anonymous ID) is properly aliased to the latest alias. Since we have joined in the Universal Alias Mapping table, we can use it to compute more accurate user metrics as defined in the Tracks view file below (mapped_user_id and count_distinct_users supplementing the original fields tracks.anonymous_id and tracks.user_id).

Since our mapping table will not contain a mapping for a track where the user identifier is already the universal ID, the mapping for those values will be null. So ultimately, the user identifier will be a coalesce of universal_alias_mapping.universal_user_id, tracks.user_id, and tracks.anonymous_id, as defined in the mapped_user_id dimension. We can then define our count of active users as a count distinct of the derived mapped_user_id value.
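Underneath the LookML, the measure boils down to something like this sketch (names illustrative):

SELECT
  COUNT(DISTINCT COALESCE(m.universal_user_id, t.user_id, t.anonymous_id)) AS count_distinct_users
FROM tracks AS t
LEFT JOIN universal_alias_mapping AS m
  ON COALESCE(t.user_id, t.anonymous_id) = m.alias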

How Looker Can Help

Every Segment implementation is unique, and Looker’s LookML layer allows you to customize any aspect of your model. With this flexibility, you can catch edge cases and gain tremendous freedom in defining metrics specific to your application. Your data is transformed into a simple set of definitions that can power the analytics of your entire department or organization.

I hope you’ve enjoyed learning how to track events to a single user. My next two posts are going to get even more exciting, as we build off this foundational element of a universal ID to create custom session definitions and measure visit behavior. Subscribe to the Segment blog to be sure to catch them!

Want to get started with Looker & Segment? Let us know.

Adam Marchick on January 13th 2015

Today we welcome CEO and co-founder of Kahuna, Adam Marchick, to the Segment blog! Kahuna is a kick-A push notification and analytics tool that’s now available on Segment! Check out the docs to get started. Now, onto tips for building user-centric notifications!

Engaging your mobile users is a high-stakes game.

Competition among apps is fierce – the question for mobile companies is no longer simply how to get installed, but instead how to become part of a user’s regular routine. To rise above the noise and win the allegiance of your users, your mobile marketing must be strategic and personalized.

User-centric push notifications (push notifications that users actually want to receive) are the building blocks of any great mobile marketing strategy, but crafting the perfect push isn’t as simple as it may seem. Not sure if your push notifications are hitting the mark? Here are five ways to master the basics of user-centric push notifications.

1. Talk to the right people

User-centric push notifications must be tailored to the people receiving them, and segmenting your users before you message them ensures they receive information that is relevant and valuable. The old-fashioned “spray and pray” approach to marketing is especially dangerous when applied to push notifications and will drive users to disengage from your app.

There are many ways to segment your users, ranging from the most basic forms of segmentation to more advanced user groupings. One best practice is to group users based on who they are, what they’ve done, and what they like. Start with these key groups to create simple, yet powerful, segments of users.

2. Make it personal

Not all messages are created equal, and when it comes to push notifications, only the best ones resonate. So how do you craft a winning notification? Personalized content that inspires and delights is a critical component.

Netflix does a great job of personalizing their push notifications. Here’s how they do it:

As you can see, Netflix uses push notifications to let users know when their favorite shows are available. Rather than sending every user a notification every time any new show or season is released, Netflix understands the specific shows that each user has been watching, and only sends a push notification to a user when one of their favorite shows has a new season available.

The result: each user receives a perfectly personalized message about the specific series they have been watching.

Message personalization is critical to generating great content, but as the Netflix example shows, your push notification system must understand your users’ behavior across platforms and devices, and in real time. You can learn more about how to do this with Kahuna here.

3. Test rigorously

How do you make a great push notification even better? Test it! A-to-E message testing is critical for a sophisticated strategy because even the slightest word change can make a significant difference in a message’s effectiveness. And the winning version may surprise you. Here are some real results of A/B tests and which push notifications won out.

Approaching Valentine’s Day last year, 1-800-Flowers readied themselves for the onslaught of mobile purchases by preparing to A/B test two very different messages. They tested up to five versions of one message to a small sample of users who had added an item to their shopping carts but had not completed their purchases. As you can see, one message variant included a 15% off promotion code.

After testing each message variation on a small number of users, 1-800-Flowers used Kahuna to automatically identify the winning version and send it to the rest of their users. Contrary to what was expected, the message that performed best was variant A – the variant that did not include a promotion code. In fact, the message without the promotion code generated 50 percent more revenue and resulted in fewer app uninstalls than the variant with the promotion code. That’s why you need to test everything.

4. Get the timing right

Tailoring your notifications to your users isn’t just about what you say, it’s about when you say it. When would the user appreciate receiving the information conveyed in the push notification? This graph reveals the optimal message send time for an app with tens of millions of monthly active users.

As you can see, 10pm is the time at which the largest number of users are engaged with the app. But perhaps even more interesting, there is no clear winning time. Even at 10pm, nearly 90% of users are not using the app or interested in receiving a push notification.

The important takeaway is that every user is different, and every app is different. Users keep different schedules and have varying patterns of usage. The solution is to ensure that every push notification arrives at the unique time of day when each user prefers to engage with your app. This is something we offer at Kahuna, and we have seen this capability generate more than 3x the number of conversions.

5. Track the right metrics

How do you know if your user-centric push notifications are successful? Here are the three metrics you should be tracking.

Goal achievement: “Did the push notification drive users to take the desired action?” You should define the specific goal for each notification well before sending it, as different push notifications will be driving toward different goals. Examples of “goal achievement” metrics include: social shares, purchases, revenue, sign-ins, cart additions, and more.

User engagement: “Did the push notification enhance and enrich the user’s app experience?” An important metric for answering this question is the number of users who re-engaged with your app after receiving the push notification. Every push you send should prompt users to re-engage at the next level, turning monthly users into weekly users, weekly users into daily users, and daily users into rabid users and brand advocates. Tracking this metric is a good way to validate that the push notification was user-centric, not company-centric.

App uninstalls & push opt-outs: Another critical metric for evaluating “Did the push notification enhance and enrich the user’s app experience?” is the number of app uninstalls or push opt-outs that have been generated as a result of the notification. There can be a tendency to track only positive metrics, and this is a big mistake. Tracking the potentially negative ramifications of every push notification you send is the best way to know how users really feel about the message. When you are measuring this number in real time, it’s easy to adjust or cancel any detrimental push notification campaigns before it’s too late.

Key Takeaways

Push notifications that add real value to your users’ lives are critical to improving your brand, and in turn your revenue. Make sure you understand these key takeaways as you embark on the journey to send great push notifications:

  • Mobile apps use push notifications to enhance the product experience and drive user engagement and revenue.

  • Segmenting users before you message them and personalizing the message content ensures that they receive information that is relevant and valuable to them.

  • A successful push notification strategy approaches message timing from the perspective of the end user.

  • Before you send any notifications, you should choose a goal and track the necessary metrics to determine if the communication worked.

I hope you enjoyed these tips! If you’re looking into a solution for push notifications, you should check us out! Lucky for you, Segment makes installing Kahuna as easy as pie – you track important customer lifecycle events once, then Segment transforms and pushes out this data to all the tools you want to use, like Kahuna.

To learn more about user-centric push notifications and how to get people hooked on your app, join our upcoming webinar on Thursday, January 29 at 10am PT / 1pm ET. Alli Brian, growth marketing manager at Kahuna, and Jake Peterson, head of customer success at Segment, will be presenting.

Diana Smith on January 9th 2015

Like many startups, we’re experimenting with traction channels to discover what will drive our next stage of growth. But before we could test user acquisition, we had to answer an important question.

How much should we spend on this stuff?

We didn’t want to waste cash on frivolous ads or channels that weren’t working. We needed a benchmark to help us understand what was a good buy, and what wasn’t.

Today, we’re sharing our process with you! Here is the step-by-step guide we’ve used to confidently calculate what we should spend on customer acquisition and the tools that have helped us along the way.

Ask Three Questions.

For a SaaS company, there are three main questions you need to answer before you create a budget for acquiring new customers.

1. How much do your paying customers spend throughout their lifecycle?

We’re talking lifetime value or LTV. RJMetrics has a nifty calculator for this, but here’s a quick and dirty formula.
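A common back-of-the-envelope version, assuming monthly figures, is:

LTV ≈ (Average Revenue per Customer × Gross Margin %) ÷ Churn Rate

For example, $100 per month per customer at an 80% gross margin and 4% monthly churn works out to roughly $2,000 of lifetime value.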

Gross Margin is what’s left after you’ve paid for COGS, or the cost of goods sold, such as support and hosting expenses. Churn is the percent of people that leave, unsubscribe, or in other words stop paying for your service on a monthly basis.

2. What percentage of people that sign up actually pay you?

Many SaaS companies offer a free trial or freemium plan, so not every signup becomes a paying customer. You’ll need to calculate your Signup-to-Paying Conversion Rate to know what you can pay for a signup.

3. What percentage of folks that visit your website sign up?

You can calculate this Signup Conversion Rate with a simple funnel. If you’re using landing pages in your ads, use the landing page conversion rate.

For an e-commerce company, you can merge the second and third steps because you’re not giving anything away for free! You’d just calculate the rate at which site visitors complete a purchase (Completed Purchases ÷ Site Visits).

If some of your campaigns target in-between conversion events (like capturing lead information for a newsletter), then your two calculations might look like:

Of course, for mobile apps you’ll be looking at app installs rather than website views.

Now that you’ve calculated your three inputs: lifetime value, signup to paying conversion rate, and signup rate, you can easily calculate how much to pay to acquire new customers and to get a click through to your website.

Cost Per Acquisition Formula.

There are two main metrics advertisers use to report their costs: Cost Per Acquisition (CPA), or the amount of media dollars you need to spend to get one sign up, and Cost Per Click (CPC), or how much you pay when someone clicks through an ad to your site. Let’s start with CPA. It’s a two part formula for those of us that offer freemium products.

1. How much can I spend to acquire a paid customer?
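A rough rule of thumb, consistent with the guidance below:

Cost Per Paying Customer (CPPC) ≈ LTV ÷ 3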

A third of LTV is on the higher end of what you’d want to spend for one paying customer, but it’s fine for when you’re experimenting with new channels. After all, you’ll still be making money if your customer acquisition costs are lower than your LTV. However, when you find a channel that works, you’ll want to optimize for a lower CPPC that’s closer to LTV / 5.

2. How much can I spend to acquire one signup?

If some of your signups are on a free plan or trial, this additional formula factors that into the cost to acquire a single signup.
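In sketch form:

Cost Per Acquisition (per signup) ≈ CPPC × Signup-to-Paying Conversion Rate

For example, if you can spend $300 per paying customer and 10% of signups convert to paid, you can spend about $30 per signup.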

One caveat – some companies have enterprise plans that are much more expensive than self-service plans. In this case, you might want to factor in conversions to enterprise when paying for a regular old signup.

Calculating Cost Per Click.

Oftentimes, your advertising platforms will make you choose a maximum bid for a click, rather than letting you pay for actual signups. This is where your Signup Conversion Rate from question 3 comes into play, since it captures the likelihood that someone who clicks through your ad will convert.

How much can I spend for one click to my website?
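Roughly:

Max Cost Per Click ≈ Cost Per Acquisition (per signup) × Signup Conversion Rate

So if a signup is worth $30 to you and 5% of the people who click through end up signing up, you can bid up to about $1.50 per click.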

There you have it folks – a very good baseline for what you should be spending on customer acquisition. Check out this Google Spreadsheet to input your own numbers, and calculate your own costs!

Optimizing Per Channel.

While these formulas give you a guide for what to spend, analytics, event-tracking and attribution tools will help you optimize your acquisition budget.

For example, people who find you via Google Adwords might pay you more money over time than people who come from Twitter. As an advanced marketer, you’re going to want to evaluate the LTV of signups from each channel, which will change what you can spend to acquire customers via that channel.

Just plug the channel LTV number into the equations above, and you’ll be good to go. Also, make sure you’re getting enough traffic to make these calculations significant. Here is a tool to calculate how much you need.

If the costs are too high and the LTV too low for a specific channel compared to your baseline, you can cut it from the budget. Or, if you find a bargain channel, you can pour on the cash.

Tools That Can Help.

A/B Testing.

To get the most bang from your marketing bucks, you’ll want to consider optimizing the conversion rates we discussed. Optimizely, Leanplum (for mobile), and Taplytics (also mobile) are some good A/B testing tools to improve your website and landing page copy, design, and conversions.

Marketing Automation.

Event-based email and push notification services like Customer.io, Outbound, Vero, and Kahuna can help you move users through the activation funnel with personalized, timely communication and increase your Signup-to-Paying conversion rate.

Attribution.

If you’re running a ton of campaigns, keeping track of conversions and spends can get tricky. Mobile App Tracking and Convertro help marketers make sense of their entire media budget and see which channels are performing best. Google Analytics also has solid attribution reporting.

Integration Platform.

Segment is the easiest way to capture the data you need to calculate customer acquisition costs, and then send your conversion events to all of these tools. If you integrate Segment into your app or website once, then you can flip a switch to integrate any of these services. Your time is money, and we’ll save it.

I hope you found these tips helpful! If you have any other suggestions, share them with @segment on Twitter!

Janessa Lantz on December 2nd 2014

Janessa Lantz from RJMetrics – a Segment SQL partner – shares marketing and product tips for engaging, retaining and learning from your best customers.

The best companies are already using customer lifetime value (LTV) to measure marketing ROI. LTV is a powerful metric for optimizing marketing spend, allowing marketers to look beyond the channels or campaigns delivering first-transactions to identify those that are bringing in high-value, repeat purchases.

But that’s not all you can do with LTV.

To get you started, here are a few product and marketing hacks that leverage your high value customers:

1. Identify and encourage high value behaviors

What are your top customers doing on your site and what could their behavior tell you about other potentially valuable customers? Put your event data to work to answer this question and you can uncover the “golden motions” driving your business. In RJMetrics, you can build lists of your top customers by spend, then analyze what your best customers are doing on your site with Segment SQL data to find this out.

Once you know high value behaviors, you can use engagement tools, like email marketing and push notifications, to encourage users who haven’t taken high value actions to do so. For example, a SaaS company might find that their best customers invite at least two other users from their organization. An e-commerce company might find that repeat purchasers share their new goods on social media within a day of ordering. You could set up automatic campaigns to reach out to people who haven’t taken these important actions and help them to make the most out of your product.

2. Treat your top customers like VIPs

In the average ecommerce store, the top one percent of customers spends 30 times more than the average customer. You should be treating these customers like VIPs. For example, you could identify top customers who were on your site recently. Once your list is ready, export it, upload it to your email service provider, and reach out to your best customers with a highly personalized, tailored offer.

If you’re looking to engage with your very best customers without filling up their inboxes, you could use Facebook retargeting in a similar way. Once you’ve uploaded your list to Facebook Custom Audiences, you can send custom ads and offers to your most engaged, highest value customers.

3. Evaluate how new features are performing

Cohort analysis – or evaluating how a similar group of users behaves over time – is a great way to measure how feature changes are impacting customer engagement. With LTV, you can take this analysis to the next level by segmenting high-value vs. low-value customers.

Did your best customers quickly adopt the new feature? Is it driving low-value customers into the high-value range? Answering these types of questions will help you define and tweak your product roadmap and evaluate if new features were worth it.


Hope you enjoyed these tips! To learn more about what else you can do with RJMetrics and Segment, click here or hit us up on Twitter @RJMetrics and @Segment.

Jake Peterson on November 23rd 2014

Your analytics are only as good as the data you’re tracking. And deciding what to track is the hardest part about making your data useful. It’s overwhelming to create a tracking plan from scratch, so this article will give you a head start.

After talking with hundreds of customers, reviewing best practices across all our partners, and some good ol’ trial and error, we’ve revamped Segment’s own tracking plan. We’re sharing it with you today, so you can use it as a reference.

First things first.

You should keep a few things in mind as you start building your tracking plan.

  • It’s all about the funnel. How do people discover, start using, and pay for your product? What are the most important steps along the way? These are the crucial events you want to capture.

  • Less is more. Track only the events you’ll use to make decisions, and only the most essential pieces of information. Start with three. Seriously, three. Only add more later.

  • Get organized. Pick a convention for naming your events and properties. Your eyes, brain, and new team members will thank you later.

Core Lifecycle Events

Our customers hit three important lifecycle events as they find, implement, and pay for Segment. These lifecycle events are very similar across most SaaS companies. Here’s how we record them as discrete events:

  • Signed Up

  • Sent Project Data

  • Started Subscription

Why?

In our own tracking plan template (you can see it in a Google Sheet here), we have a column called “Why?”. In the “why” column we explain the purpose for tracking each event, which forces us to focus on what’s most important and leave out extraneous data that’s not critical to our business. We’d suggest you do the same! We actually borrowed this idea from you, our customers!

Protip. We’re sticklers for formatting and consistency over here, so any new person on the team can easily search for and add new events. We suggest naming all your events with past tense verbs and capitalized words, ex: Signed Up. Name properties in camelCase (lowercaseCapital form) for easy reading, ex: userLogin.


Tracking Plan

Alright, let’s dig into why and how we track each of these events.

Signed Up

The Signed Up event is the key metric for Design to see how the site is converting and for Marketing to measure campaigns. It’s a user’s first baby step of commitment.

analytics.track('Signed Up', {
  userLogin: 'peter-gibbons',
  type: 'invite',
  organizationId: 'aef6d5f6e'
});

We differentiate organic signups and invited signups, so we can understand organic website conversion separately from internal team sharing. To distinguish between organic and invited signups we use an event property called type. We also use automatically recorded UTM parameters to differentiate users that come through paid campaigns like Adwords.

Sent Project Data

This is it folks. This is the crux of whether or not people are using our product. Have they integrated? Are they sending data? How much? To which partner integrations? Via which methods? From which libraries and platforms? It all goes here.

analytics.track(userId, 'Sent Project Data', {
  // project
  projectId: 'bce5fad577', projectSlug: 'initech.com', projectCollaborators: 1,
  // owner
  ownerId: 'aef6d5f6e', ownerType: 'organization', ownerOwners: 12,
  // usage
  callsMonthly: 134811, callsWeekly: 22,
  // methods
  methodIdentify: 14811, methodAlias: 1320, methodTrack: 2861,
  methodPage: 115819, methodScreen: 0, methodGroup: 0,
  // libraries
  libraryIos: 13289, libraryAnalyticsjs: 121582,
  // integrations
  integrations: 3, integrationKISSmetrics: true,
  integrationGoogleAnalytics: true, integrationCustomerio: true
});

With this paramount event, we can measure which integrations and libraries are the most popular, bill folks according to their usage, understand how much data we’re processing, and measure our active users.

The tricky thing with this event is that it’s unique per project, but everything else is tracked per user. So we record a Sent Project Data event once a day, for each user, for every project they’re connected with that had data sent. For example, a user might be an owner or collaborator on 3 projects that sent data today, in which case we’ll send 3 Sent Project Data events, one for each of those projects.

This is a server-side event. Learn more about server vs. client side events here.

Started Subscription

We offer a 14-day free trial, so it’s important for us to capture when people actually start a plan and enter a credit card. This is the final step of the activation process.

This event should only be triggered when we immediately start billing them: credit card, plan, everything is set up. This is a server-side event as well.

analytics.track(userId, 'Started Subscription', {
  ownerId: 'aef6d5f6e',
  ownerType: 'organization',
  ownerLogin: 'initech',
  ownerEmail: 'peter@initech.com',
  planName: 'Startup',
  planId: 'startup-$79-1-month'
});

Phew! Those are our three core funnel events. These are the most essential events to help us determine the health of our business.

If you’re just getting started, we recommend you check out our template for a basic tracking plan here. Like I said before, start with three core events.

If you’re an analytics wizard, read on. We’re a data company, so we couldn’t help but identify a few more events that help us make better decisions.

Getting Fancy

In addition to our core funnel, we’ve started tracking a few additional events to help us inspect user behavior around the edges of the core funnel.

  • Created Organization – Most of our revenue comes from organizations that use Segment, so it’s important for us to know when someone levels up from a personal account to an organization account.

  • Invited User – When users invite more people to their organization, it’s a good indicator that they’re engaged and serious about using the product. This helps us measure growth within organizations.

  • Enabled Integration – Turning on an integration is a key engagement metric.

  • Disabled Integration – We also want to know which integrations people are turning off; experimentation with different tools is a sign of positive engagement.

  • Debugger Call Expanded – When we see that a certain customer has used the live event stream feature a number of times, we reach out to see if we can help them debug.

  • Created Ticket – How is our support doing? How many users contact support during or after the setup process? This event helps us measure our setup guide and documentation quality as well.

Identify and Page Calls

Using Segment track calls, you can easily capture the interactions people take on your site. But you really want to tie this information to two other pieces of the puzzle: who is taking the action and where they are when they do it. That’s where our identify and page methods come into play.

Take a look here to learn about all of our methods.

analytics.page('Docs', {
  section: 'API Reference',
  topic: 'Identify Method'
});

Note: The Segment page method also collects the page title, url, path, referrer, search, and campaign (UTM parameters) automatically.

analytics.identify(userId, {
  created: '2014-06-30T16:40:52.238Z',
  name: 'Kanye West',
  email: 'kanye@iamawesome.com',
  login: 'kanyew',
  type: 'user'
});


If you want all of this in a handy Google Doc, we’ve got you covered.

And remember, if you have any tracking questions or thoughts on our plan, don’t hesitate to contact us.

Thanks!

Derek Steer on November 19th 2014

We welcome Derek Steer, CEO of Mode, to the Segment blog! Mode is a SQL integration partner and has created these open source queries so you can start learning ASAP. The best part? Their playbooks are tailor-made for the Segment SQL schema.

The Fast Track to Data Driven.

You’re planning ahead to 2015. “Be more data-driven” is on your company to-do list.

You want to give your team access to better information–and help them make more informed decisions. Questions like “how many invites did we see yesterday?” and “how are customers using the invite feature?” are popping up right and left. Maybe you’re hearing things like “what marketing channels lead to the highest lifetime value customers?” when six months ago this type of question sounded simpler, like “how many daily sign ups are we getting?”

It turns out a lot of these insights are hiding in the raw event data that your product is already generating, quietly in the background.

Thanks to Segment, you can track these analytics events and flip on Amazon Redshift to start digging into these questions.

As we started working with our own Segment data (and helping some customers beta test the product), we found that many of us had similar questions about how people use our products.

To put all these questions in one place, we started writing SQL queries that could be tailored to anyone’s Segment data schema. These open source queries—we call them the Mode Playbook—can help you find Retention and User Behavior insights in your data from the moment you connect your database to Mode.

Find Insights Fast with Open Source SQL.

Let’s look at an imaginary messaging app as an example.

Your Product Managers are talking about how to increase sent messages but no one seems to have a specific understanding of what users do before they send a message. With the User Path report, you can help them visualize the paths the users take before sending a message.

This report, like every other Playbook report, uses a common table expression. Think of a common table expression as a temporary table that a subsequent query can reference. Modify the common table expression to reference your schema and run it. The next step is to share it with coworkers and start talking about what the data means for upcoming decisions.

The magic of this common table expression and the standardized Segment event stream is that this report will work whether you’re analyzing messages sent, photos shared, orders placed—anything. It can also work with mobile apps: Replace the pages table with the screens table, and the properties.title column with the name column.

If you’d like to use more explicit events other than page or screen views, you can combine Segment events together into a single table using the UNION function in Redshift. As long as the events table defined here has a user_id column, an event_name column, and an occurred_at column, the rest of the report just works.
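A minimal sketch of that pattern (table and event names here are illustrative, not the exact Playbook SQL):

WITH events AS (
  SELECT user_id, event AS event_name, sent_at AS occurred_at
  FROM tracks
  UNION ALL
  SELECT user_id, 'Viewed Page' AS event_name, sent_at AS occurred_at
  FROM pages
)
SELECT user_id, event_name, occurred_at
FROM events
ORDER BY user_id, occurred_at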

You can also explore other aspects of retention and user behavior in the seven other Playbooks.

Sometimes Simple Queries Are Best.

You don’t always need a fancy interactive chart to answer someone’s question, and dashboards often lead to requests for underlying data. So, we advocate starting with simple queries that get to the heart of your questions. Let’s say the Product Manager in the example above is curious how message volume has changed since a feature launch. It just takes a quick query to find out:
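A minimal sketch, assuming Segment has created a sent_message table for a “Sent Message” track event and using a made-up launch date:

SELECT
  DATE_TRUNC('day', sent_at) AS day,
  COUNT(*) AS messages_sent
FROM sent_message
WHERE sent_at >= '2014-10-01'  -- hypothetical feature launch date
GROUP BY 1
ORDER BY 1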

Create a simple line chart and share the report out with stakeholders. They’ll be able to refresh the results any time.

Polish Up Your SQL Skills. Your 2015 Self Will Thank You.

We developed the Playbook to get you and your company on the fast track to being more data-driven, inspiring deeper exploration of your data with SQL. It’s easy to get started with SQL, especially if you’re familiar with Excel. We created a free, interactive tutorial called SQL School, so you can dive right into learning SQL or brush up your skills.

We’re excited to help you find insights in your Segment SQL data. You can sign up for Segment SQL here and for Mode here.

To learn more about how we seamlessly integrate with Segment SQL and have helped customers like Munchery get up-and-running quickly, mosey on over to Mode.

Harry Glaser on November 17th 2014

We welcome Harry Glaser, CEO of our SQL partner Periscope, to the Segment blog! Harry’s talking cross-database joins on the heels of our Amazon Redshift launch. If you want to analyze behavioral data across platforms, and Excel won’t cut it, here are some tips to level up your analytics game.

All The King’s Databases

It began with the best of intentions: You launched your first web app for your customers, backed by a database full of transactional data to analyze. In time you added a read replica, and replaced Excel with a more advanced visualization tool to go with it.

Now you’re launching your first mobile app. You want SQL access to the underlying data store, but building a server to receive pings is much too difficult. So you make great use of a fabulous event-tracking-to-SQL solution.

But now your data is in two places. What if you want to know whether your iOS users are big spenders? You’d need to slice monthly iOS users in your mobile app database by payment plan information in your web app database. Luckily, there is a solution: cross-database joins.

Cross-Database Joins

You need to connect your transactional, web, and mobile data in one place. To start, let’s count our iPhone MAUs (monthly active users), using Segment’s SQL schema as an example.
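A sketch of that count (the iphone_production schema and the 'Started Session' event name are illustrative; use whatever marks a session start in your schema):

SELECT
  DATE_TRUNC('month', sent_at) AS month,
  COUNT(DISTINCT user_id) AS monthly_active_users
FROM iphone_production.tracks
WHERE event = 'Started Session'
GROUP BY 1
ORDER BY 1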

We’re counting a user as active in a given month if they’ve started a session in that month. This query gives us a graph like this:

Now, we just need to bring our payment plans into the chart. This is where the magic happens. We’ll join in the users table on our web database, and slice the query by users.payment_plan:
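A sketch of the cross-database version, assuming the web database’s users table has an id matching the Segment user_id and a payment_plan column (the exact qualification syntax follows Periscope’s conventions):

SELECT
  DATE_TRUNC('month', t.sent_at) AS month,
  u.payment_plan,
  COUNT(DISTINCT t.user_id) AS monthly_active_users
FROM segment.iphone_production.tracks AS t
JOIN web_prod.users AS u
  ON t.user_id = u.id
WHERE t.event = 'Started Session'
GROUP BY 1, 2
ORDER BY 1, 2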

Note that we now need to fully qualify the tables in the FROM and JOIN clauses with their database names: segment and web_prod.

And our hard work pays off! Here we can see our iPhone MAUs sliced by payment plan:

How It Works

Cross-database joins – and, in Periscope’s case, our query speeds – are enabled by our Postgres-based data cache. Each customer’s data is stored in the same database, with one schema per (database, schema) pair. This architecture allows us to run exactly the query you wrote, with some simple rewrites to make it valid.

Here’s the rewritten query:
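Continuing the same sketch, the rewrite might look like this (the db_1234 / db_1235 prefixes stand in for Periscope’s internal schema names):

SELECT
  DATE_TRUNC('month', t.sent_at) AS month,
  u.payment_plan,
  COUNT(DISTINCT t.user_id) AS monthly_active_users
FROM db_1234_iphone_production.tracks AS t
JOIN db_1235_public.users AS u
  ON t.user_id = u.id
WHERE t.event = 'Started Session'
GROUP BY 1, 2
ORDER BY 1, 2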

In this example, your Segment database’s iphone_production schema is translated to the db_1234_iphone_production schema in Periscope’s data cache. And web_prod’s (unspecified) public schema is translated to the db_1235_public schema. The rest of the query remains the same!

Start Exploring

We hope you enjoyed this lesson in cross-database joins. If you have any thoughts or questions, reach out to us on Twitter @PeriscopeData or hit up friends@segment.com. We’d love to discuss.

We’re also offering a 30-day trial for Segment Redshift customers. To get started, swing on over to the Segment Redshift access page and then sign up for Periscope.
