Josephine Liu, Sherry Huang on June 9th 2021

Our latest feature, Journeys, empowers teams to unify touchpoints across the end-to-end customer journey.


Amir Abu Shareb, Calvin French-Owen on June 29th 2016

We’ve been longtime admirers of Google’s efforts to speed up the internet: everything from SPDY to Chrome to Google Fiber. Google has invested heavily in making the internet a better, faster place for billions of people across the world.

So when we first heard about the Accelerated Mobile Pages project (AMP), we got incredibly excited about what it could mean for the mobile ecosystem, and how we could help speed its adoption.

As of today, we’re excited to announce that Segment now supports AMP right out of the box. The preview is currently live on the dev channel, and will be generally available on the stable channel starting tomorrow.

Because Segment supports over 100 different customer data tools that proxy directly from our servers, we’re able to instantly support those tools without additional calls from the client.

That means every server-side tool on our platform effectively supports AMP as well, from Amplitude to Zendesk.

With just a few lines of code, you’ll instantly have analytics tracking in all the tools you know and love.

You no longer have to choose between AMP’s SEO benefits, and having your data in the top analytics tools. With Segment + AMP, you can have it all. ⚡️

What is AMP?

AMP is Google’s “Accelerated Mobile Pages” project. Its entire purpose is to make loading content from mobile browsers lightning fast.

According to the research in AMP’s FAQ, the conversion rate for pages that take more than 10 seconds to load drops by a whopping 68%. The project’s mission is to speed up the loading times for mobile users everywhere.

AMP’s core premise is to provide a subset of what normal HTML and JavaScript webpages give you. And that subset is made up of only the most performant features. AMP intentionally limits the amount of dynamic configuration a page can do, but gives it the benefit of additional performance.

Here are a few tricks AMP uses to optimize performance:

  • static sizing – all assets must be statically sized to minimize page repainting.

  • don’t block loading with third-party scripts – AMP ensures that analytics scripts and iframes don’t block page rendering, and that pages can load in a single pass.

  • CSS is inlined and size-bound – instead of fetching a separate stylesheet, the browser can render the page instantly.

In most cases, these restrictions don’t fundamentally limit the functionality of the page–but they do provide a significant speed boost.

The second aspect of AMP is to make all of the content extremely easy to cache.

Ilya Grigorik has talked extensively about how the danger with mobile actually comes from the round-trip time in the last hops of the network. Even if you speed up server rendering by 20 milliseconds, transferring the ‘last mile’ is often the major bottleneck that causes users to bounce.

So Google is taking a more proactive stance when it comes to caching content. They guarantee each of their edge servers will automatically cache AMP-enabled content, and pre-load it when entered via Google search.

To provide extra incentive, Google is even prioritizing AMP results so that users searching for articles and results can get there more quickly.

For years, we’ve had native mobile apps that are instantly responsive and load only lightweight pieces of data. Today, AMP is bringing that same responsiveness to the web.

Analytics for AMP

To achieve superior load times, AMP ships with a curated collection of bundled analytics and advertising providers. Google integrates only a small number of service providers that are committed to speeding up page loads via lightweight, declarative JavaScript calls.

When it comes to analytics data, AMP’s philosophy is simple: “measure once, report to many.” And as it so happens, that’s extremely well-aligned with our philosophy at Segment.

As a quick reminder, Segment collects data, and then proxies it to whatever customer data tools or warehouses you need. We provide a single API to take in that data and then fan it out into whatever end tools you might need. Measure once. Report to many.

Using Segment through AMP is a win for customers because there’s no need to wait for extra pull requests to the core AMP repo to support your favorite tools. It’s a win for Segment’s integration partners because they don’t need to do anything to add support for AMP. And, it’s a win for mobile visitors because they get faster experiences for less bandwidth.

As a result, anyone running an AMP-enabled page can get up and running with all of their first-class analytics tools, today.

Identifying Users

Despite all of AMP’s performance benefits, there is a definite shortcoming of AMP that we hadn’t initially foreseen.

Because AMP is static, it doesn’t provide many options for persistently identifying users. There’s a basic cookie mechanism, but nothing that allows you to write to localStorage or otherwise save state. Depending on whether the page is served from a cache or not, it may even create multiple users in tools like Google Analytics.

The recommended approach is to create a separate GA property for your AMP pages. That way you can better isolate AMP pageviews from the rest of your application.

But we thought there might be a better way–provided you’re willing to do your own custom analysis. For that, we can turn to our old friend: the data warehouse.

Since Segment also gives you the raw data in a data warehouse, you can join AMP cookie IDs (automatically detected via analytics.js) with the cookies you already track. That means you can get a first-class view into what your users are doing, no matter how the page is served!
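Such a join might look like the following sketch, assuming a typical Segment warehouse layout with a `pages` table of pageviews and an `identifies` table mapping anonymous IDs to user IDs. The table names, the `context_library_name` filter, and the join key are assumptions for illustration, not Segment’s documented schema:

```sql
-- Sketch: stitch AMP pageviews to known users in a Segment warehouse.
-- Table and column names are illustrative; your schema may differ.
SELECT
  amp.anonymous_id   AS amp_cookie_id,
  web.user_id,
  COUNT(*)           AS amp_pageviews
FROM pages AS amp
JOIN identifies AS web
  ON amp.anonymous_id = web.anonymous_id   -- same browser seen on both
WHERE amp.context_library_name = 'amp'     -- hypothetical flag for AMP-served pages
GROUP BY 1, 2;
```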

Get AMPed!

What’s more, installing Segment with AMP couldn’t be easier. Our integration automatically comes bundled in the AMP javascript, so you won’t need any additional code.

By default, the Segment amp-analytics integration will automatically track pageviews. But, it’s easy to add additional triggers for click and scroll events:
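For example, a trigger configuration might look like the sketch below, based on the amp-analytics component’s JSON format. The write key, selector, scroll boundary, and event names are placeholders you’d replace with your own:

```html
<amp-analytics type="segment">
  <script type="application/json">
  {
    "vars": {
      "writeKey": "YOUR_WRITE_KEY",
      "name": "My AMP Page"
    },
    "triggers": {
      "headerClicked": {
        "on": "click",
        "selector": ".header",
        "request": "track",
        "vars": { "event": "Header Clicked" }
      },
      "scrolledToBottom": {
        "on": "scroll",
        "scrollSpec": { "verticalBoundaries": [90] },
        "request": "track",
        "vars": { "event": "Scrolled to Bottom" }
      }
    }
  }
  </script>
</amp-analytics>
```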

And with just a few lines of code, you’re off to the races! You’re ready to start serving lightning-fast pages for all of your static content–with first-class analytics support.

For more info on getting rAMPed up, check out our docs page.

Until next time, stay ⚡️.


P.S. Want to learn more? Come hear about it at CircleCI office hours this Thursday.

Using AMP in production? We’d love to hear about your experience. Drop us a line at friends@segment.com or on Twitter.

Kevin Niparko on June 23rd 2016

Today we’re excited to announce a new wave of email Sources. You can now use Segment to collect and analyze email events like Email Opened and Email Link Clicked from eight popular email providers: ActiveCampaign, Mailjet, Customer.io, Vero, Klaviyo, Iterable, Drip, and Nudgespot.

These Sources will help you ask and answer tough questions about how well your campaigns are actually performing.

  • Would the conversion rate increase if you didn’t send an email?

  • How does one email affect user behavior three steps down in your funnel?

  • Are you emailing customers too many times during an average week?

Performing these types of analyses has typically required spreadsheets or custom ETL processes. With Segment Sources, you can capture email data with just a few clicks and no new code!

When you activate any of the new email Sources, Segment will automatically capture Email Delivered, Email Opened, Email Bounced, Email Clicked, Unsubscribed, and Email Marked as Spam events from your favorite email platforms. Then, we’ll send these events out to your data warehouse. Unlike most of our other Sources, these eight Sources will also send events out to the integrations you’ve connected through Segment. All you have to do is add your email platform credentials to Segment to get started!

Measuring Your Campaigns

To get started with email data, here are three analyses you can try today!

  1. Analyze how email affects your conversion funnel

  2. Test email performance with a “no email” control group

  3. Find out if you’re sending a few too many emails

You can run these in any analytics integration or BI tool on the Segment platform!

The full-funnel effects of email

Since most behavioral email tools focus on measuring a single conversion event that occurs after an email is sent, analyzing email events within the context of your funnel isn’t easy.

Plus, most email tools show you the conversion rate for everyone who was sent the email… whether they opened it or not. Analyzing email opens more accurately measures the effect of the email itself.

Let’s say you have a basic question: how many people sign up, open a welcome email, and start a subscription? With your email data, you can finally figure this out.

Email opened funnel. (Amplitude Dashboard)

Nice! Looks like ~30% of signups open the email, and then 20% of those people go on to start a subscription.

If you’re curious what happens after the subscription starts, you can add in even more events! This is only possible with your email data in analytics tools and data warehouses — very few email platforms allow for multi-conversion analysis.

Testing the null hypothesis

To further dig into the performance of your emails, you should probably be testing the null hypothesis with a control group. How did the cohort of users who received a particular email behave compared to those who didn’t get the email? Using the Email Delivered event you can answer this question easily.
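In a warehouse, this comparison can be sketched roughly as follows, assuming per-event tables like `signed_up`, `activated`, and `email_delivered` keyed by `user_id`. The table names, the `campaign_name` column, and the one-week window are illustrative assumptions, not a documented schema:

```sql
-- Sketch: compare one-week activation for users who did vs. didn't
-- receive the "Welcome" email. Names are illustrative.
WITH delivered AS (
  SELECT DISTINCT user_id
  FROM email_delivered
  WHERE campaign_name = 'Welcome'
)
SELECT
  s.user_id IN (SELECT user_id FROM delivered) AS got_welcome_email,
  COUNT(DISTINCT a.user_id)::float / COUNT(DISTINCT s.user_id) AS activation_rate
FROM signed_up AS s
LEFT JOIN activated AS a
  ON a.user_id = s.user_id
 AND a.timestamp <= s.timestamp + INTERVAL '7 days'  -- one-week window
GROUP BY 1;
```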

Activation rate when “Welcome” email is not delivered. (Indicative Dashboard)

Activation rate when “Welcome” email is delivered. (Indicative Dashboard)

In the first screenshot, you can see that the one-week activation rate for customers who do not receive a personalized onboarding email is 1.64%. In the second screenshot, the activation rate for customers who did receive the email jumps to 7.97%. And they activate twice as fast (3.09 days down to 1.48)!

Not bad! Having confirmed the value of this email, now you can experiment with different email variations and watch how that impacts your overall funnel.

Email overload

Lastly, it’s a good idea to make sure you’re not inundating your customers with emails.

You can start this exploration by asking a simple question: how many emails do you send users in an average week? This is nearly impossible to discern in most email tools, but it’s easy with Segment and our Sources partners!

First, let’s look at the average number of emails sent to users, over time.

Average number of emails delivered per week. (Amplitude dashboard)

In this example, we’re averaging between 1–2 emails per week. That’s acceptable. But what does the distribution look like?

Email delivered analysis. (Amplitude dashboard)

Whoa! Looks like 300 customers got 5–6 marketing emails in a week, which is probably too many. You can split these by campaign and subject to see where these emails are coming from and explore how to avoid email overload in the future.
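If you’d rather compute the distribution yourself, a query along these lines would do it, again assuming an `email_delivered` table with `user_id` and `timestamp` columns (names are illustrative):

```sql
-- Sketch: distribution of emails delivered per user per week.
WITH weekly AS (
  SELECT user_id,
         DATE_TRUNC('week', timestamp) AS week,
         COUNT(*) AS emails
  FROM email_delivered
  GROUP BY 1, 2
)
SELECT emails,
       COUNT(DISTINCT user_id) AS users
FROM weekly
GROUP BY emails
ORDER BY emails;
```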

Getting Started

Ready to start your own analysis? To collect your email data from ActiveCampaign, Mailjet, Customer.io, Vero, Klaviyo, Iterable, Drip, or Nudgespot, follow these steps:

  • Go to the Sources catalog in the Segment dashboard and click on your preferred email provider

  • Get your Segment write key

  • Jump into your email provider’s settings page and enter your Segment write key

  • Connect tools where you’d like to analyze this email source data

Boom! You’re ready to take your email marketing to the next level.

Without further ado, log in to add an Email Source today.

Not a Segment customer? Get started here!

Chris Sperandio on June 8th 2016

We’re incredibly excited to share that Segment’s customer data platform is now powering the analytics stack for over 3,000 mobile apps, including our friends at HomeAway, HotelTonight, Instacart, VSCO and DraftKings. Collectively, these apps have over 500 million downloads, and we’ve been poring over our data and conversations with these customers to build you something new: the Native Mobile Spec.

Over the last three years, our customers have taught us that it’s particularly tricky to understand the customer journey on mobile. Users might find your app through an ad, download it, not purchase anything, come back through a push notification, discover a different product, and buy that one instead. How could you analyze that complex of a customer journey and attribute it to business efforts? Our customers told us the answer was “a long story.”

We want to make it easy for you to collect and analyze each of these touch points. That’s why we’ve released the Native Mobile Spec, a standard blueprint for automatically tracking key events across the mobile user lifecycle.

Automatic Event Tracking

Our core product has always stripped away redundant analytics tracking with a single library for capturing user events. The Native Mobile Spec will save you even more engineering time, since you no longer have to code in essential user events one by one.

We analyzed the analytics implementations of more than 3,000 different apps to define the Native Mobile Spec. Now Segment’s iOS and Android SDKs automatically track these essential user interactions:

  • Application Installed — User installs your application for the first time

  • Application Opened — User opens your application

  • Screen Viewed — User is shown a new screen

  • Order Completed — User completes a transaction on your app

  • Application Updated — User updates to a newer version of your app

To view the full details, see our documentation. If you’re an existing Segment mobile customer, you can opt in to receive these events.

A Lighter App

The Native Mobile Spec will also slim down the size of your app because you can move key integrations, like Facebook App Events, to the server-side instead of bundling them in the Segment SDK. This functionality is available for customers that opt in to automatic event collection and for those who instrument their own events that respect the Native Mobile Spec.

Our new server-side Facebook App Events integration powers popular features like Facebook App Analytics, dynamic product ads, custom audience creation, and conversion tracking with events captured by the Native Mobile Spec. That’s saving you 1,160 KB on iOS and 4,000 methods on Android by moving the integration to the server-side!

Soon, we’ll be rolling out similar functionality with server-side integrations updates for Google Adwords, covering Conversion Tracking, as well as Salesforce Marketing Cloud’s Predictive Intelligence feature to better tailor your messages.

Essential Mobile Metrics

With the new SDK update, you’ll be able to get your marketing and product teams up and running with our integration partners even faster. Now that Segment automatically collects key lifecycle events, you can start measuring top mobile metrics without any tracking code.

You can report on these essential metrics in your favorite analytics tools like Mixpanel, Localytics, and Amplitude, or easily create dashboards in your BI tools like Tableau, Looker, and Mode with the data collected from the Segment SDK.

  • Installs per Day

  • Daily Active Users

  • Monthly Active Users

  • Session Length

  • Session Interval

  • Average Revenue Per User

  • Average Revenue Per Paying User

Coming Soon: Capturing Campaign Events

Nearly 22% of all apps downloaded are used only once, but the cost of acquisition is close to $5 a pop! That’s why it’s imperative to measure which of your acquisition campaigns drive the most valuable users, and to bring users back with the right mix of push, email, and SMS.

To help you measure these campaigns, the Native Mobile Spec also details deep linking, push notification, and attribution events. (Our Twilio source covers SMS!) In the next few weeks we’ll be releasing updates to our mobile SDK and mobile marketing integrations to automatically collect these events and fan them out to your analytics tools and warehouses for the first time.

  • Install Attributed

  • Deep Link Clicked

  • Deep Link Opened

  • Push Notification Received

  • Push Notification Tapped

  • Push Notification Bounced

You’ll be able to understand the complete lifecycle of your mobile users and accurately measure the impact of your campaigns by combining your marketing events, in-app behavior, and revenue into one dataset powered by Segment.


We’re excited to continually improve the Segment customer data platform, helping you collect, combine and access data across all of your customer touch points. The Native Mobile Spec is just the beginning! Stay tuned for a wave of updates designed specifically to address the needs of mobile app companies.

Not a Segment customer yet? Request a demo or sign up today!

Sarah Spangenberg on April 8th 2016

This week we launched Segment Sources — a new way to bring together all of your customer touch points into a single database. More than 11,000 developers and analysts already rely on Segment to help load data from their websites and mobile apps into their data warehouse for advanced analysis. Now you can add in brand new dimensions of the customer experience that happen outside of your product in other cloud services like sales calls, support tickets, texts, payments and more.

But bringing all of your data into one place is only the first part of the story. The next step is exploring your data to answer important questions about your business. Connecting these apps — Salesforce, Zendesk, Mandrill, Stripe and more — to your database with Segment Sources opens up new possibilities for analysis and understanding the complete customer experience.

Query Quicker

To help you get your insights faster, we’ve built some getting started queries and detailed documentation around Sources schemas. However, we’re most excited to announce new offerings from our business intelligence partners to help you explore and visualize this data.

The fine folks at BIME, Mode, Looker, Periscope, and Chartio have built useful, new resources to help you analyze your Sources data. Take a look at their offerings below, so you can start answering new questions about your customers as quickly as possible.

BIME

BIME is a visualization platform that’s great for business users. You don’t need to know SQL to build impressive reports and charts. Now a part of the Zendesk family, BIME has built handy dashboards for the Zendesk Source.

Resources:

  • Walk in your customer’s shoes with Segment and BIME — This article lays out how you can get started with Zendesk, Segment, and BIME and analyses you can run right away.

  • Customer Support Control Center — Layer this dashboard onto your Zendesk data in BIME and immediately see all of the important metrics for your customer care team. The dashboard covers everything from volume by ticket type, to average resolution time, requester distribution and more!

Mode

Mode is a powerful platform for analysis, great for ad hoc queries and sharing data stories with everyone at your company. Mode has a number of awesome resources for Sources data, particularly around analyzing Salesforce data.

Resources:

Looker

Looker makes it easy for analytics teams to create a data platform that everyone in their organization can explore. They support building data-driven cultures by taking the time upfront to define a common data model for your company. Looker has a number of analytics resources for Segment-Looker customers, from their pre-built “Blocks” for Segment and Salesforce data, to education on joining user IDs across platforms.

Resources:

Periscope

Periscope is a popular tool for dashboarding and SQL analysis. For Segment customers, the Periscope team is offering customized support and building sample dashboards to showcase how you can combine data across multiple sources. Their solutions team is available via live chat, as well.

Resources:

Chartio

Chartio is an analytics platform designed to empower anyone in your company to explore and visualize data in a meaningful way. With an easy-to-use interface, anyone can build custom reports and drill down into anomalies without using SQL. Segment Source customers using Chartio will be able to discover hidden business insights even more easily with pre-built dashboards and queries that will get them started in minutes.

Resources:


Many thanks to our BI partners for helping to make querying Segment data a delightful experience! Each team has been working hard to close the loop between getting your data where you want it and finding insights in that data. If you’re using Segment Warehouses already, you can discover all of these tools directly in the Segment app under Warehouses Connect!

To learn more about the new types of data you can pull into your warehouse with Segment Sources, sign up for our webinar on April 19!

Ilya Volodarsky on April 6th 2016

Most companies analyze what’s happening on their mobile apps and websites, but that’s only a sliver of the customer experience. Your customers aren’t just using your app—they’re also sending in support tickets, opening emails, talking with your sales team, tapping through your text messages, and more.

Each of these touchpoints influences your customers’ likelihood to sign up, activate, purchase, and re-purchase, but it’s been nearly impossible to know precisely how, or by how much, because each tool stores your data in isolation, away from all of the other tools you use.

Digging into Salesforce data meant using the dreaded data loader. Analyzing SendGrid email opens over time required building your own webhook ingestor. Most frustrating, you couldn’t join together Salesforce, SendGrid and Zendesk datasets with other cloud services or your product data because each stream lived in different databases.

Until now.

Introducing Segment Sources

Today, I’m excited to announce Sources, a new offering from Segment that provides unprecedented access to new types of customer data.

With just a few clicks, you can load data from Salesforce, Zendesk, Stripe, SendGrid, Mandrill, Intercom, Hubspot, and Twilio straight into Redshift or Postgres without writing a line of code. (Coming soon: Google Adwords, Facebook Ads, Postgres, MySQL, and more!)

With Sources, you can understand the complete customer experience. Sure, you can drill down beyond the limitations of a particular tool’s dashboard with that data in a flexible SQL format. But what’s more exciting are the possibilities of combining this data with your traditional analytics data from your websites, mobile apps and servers.

You can retrieve a complete list of interactions across marketing, sales, and support for a particular customer in a single query. You can finally learn how interactions with your support team affect customer activation and conversion. The possibilities are nearly endless.
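Conceptually, that single query might look something like the sketch below. Each Source syncs into its own schema, but the specific schema, table, and column names here (and the customer email) are purely illustrative assumptions:

```sql
-- Sketch: one timeline of touchpoints for a single customer, across sources.
-- Schema/table/column names are illustrative, not a documented layout.
SELECT 'support ticket' AS touchpoint, created_at AS occurred_at, subject AS detail
FROM zendesk.tickets
WHERE requester_email = 'jane@example.com'
UNION ALL
SELECT 'sales activity', activity_date, subject
FROM salesforce.tasks
WHERE who_email = 'jane@example.com'
UNION ALL
SELECT 'email opened', timestamp, email_subject
FROM sendgrid.email_opened
WHERE email = 'jane@example.com'
ORDER BY occurred_at;
```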

Getting started is easy. Just enter the credentials for your cloud services and data warehouse, and we’ll start syncing your data. You’ll be notified when the data is ready for you. 😎

No writing code, no maintaining data pipelines, no silent fails or late-night pages. We manage the entire infrastructure so that you can focus on getting value from your data, not supporting the code to move that data around.

Understand your customers better

Here are a few ways companies like Instacart, Trunk Club, and Mesosphere are already using Sources.

  • Quantify the value of customer support. Mesosphere ties together Zendesk and Salesforce data to understand how interactions with a support representative affect upgrades and churn. They also use this data to prioritize bug fixes and product requests by connecting each suggestion to existing and potential revenue.

  • Understand the effects of email over time. Trunk Club analyzes Mandrill data to dig beyond campaign aggregates. They’re looking into how users that regularly open (or don’t open) emails behave differently from less engaged customers over time.

  • Learn which product features lead to bigger deals. Trustpilot uses Segment and Salesforce together to understand what actions correlate with a higher contract value. Trustpilot leverages this data to better score leads and push users in the product toward these “aha” moments.

Start querying today

At Segment, our goal is to make customer data easy to use and accessible across your entire organization.

  • Collect — Gather user data from your website, mobile apps, servers, and cloud services.

  • Structure — Abstract your data into user identities, actions, and business objects.

  • Integrate — Send the data to more than a hundred tools for analytics, email, and more with the flip of a switch.

  • Access — Load your raw data into a relational database without building a data pipeline.

With each new product and feature, we are building a more complete customer data platform. Sources is the latest step on our mission to make your data work for you.

Unite your data today or learn more at our upcoming webinar.

Diana Smith on March 10th 2016

Success Engineers play a critical role at Segment. They are our front line, answering customer questions as quickly and thoroughly as possible. The team is so important that Peter, our CEO and the original Success Engineer, does a monthly rotation in Success to stay connected to customer questions and problems.

On a typical day, our success engineers field questions from, “How do I install Segment on a single-page app?” to, “What is the best email solution for my business?” They need to know a TON about how Segment works, our integration partners, and how to track down code across our thousands of repos.

You could say we hire some “nontraditional” folks for the Success team. Some used to be teachers. Others started in finance or ad tech. Many went through developer boot camps to sharpen their technical skills. One thing they all have in common? A thirst for learning and a desire to put customers first.

Because our Success Engineers learn so much about our product and customers, they are well poised to take on other challenges at Segment, and bring the voice of the customer along with them. This post will give you an inside look at some of our awesome teammates who started in Success and now apply their experience in roles that vary from systems engineering to growth marketing.

Meet Steven, Chris, Andy and Will!

From left to right: Steven Miller, Will Johnson, Andy Jiang and Chris Sperandio. Unnamed gnome also integral to our success.

Meet Steven Miller, Frontend Engineer

Multi-talented, Steven studied both Chinese and Finance in college. When he moved from Miami to San Francisco, he signed up for General Assembly to learn to code.

He joined our team in September 2014 and immediately began working harder than anyone I know to improve his engineering skills. He volunteered to work on extra projects for the growth team, wrote his own open source libraries like Daydream, and built Sherlock — a tool that scrapes your website for existing integrations, so customers didn’t have to do a bunch of annoying copy-pasting.

After several months working with customers as a Success Engineer and taking every opportunity to level up his coding skills, Steven joined our Product Engineering team. If you’ve logged into Segment recently, you may have noticed a bunch of changes to our UI that make it easier to find your projects and understand how integrations work. Steven built many of those systems and works closely with our design team to improve the experience inside the Segment app.

“Starting in success engineering gave me a unique perspective that the average engineer doesn’t share — a sense of responsibility for the success of customers on our platform,” said Steven. “Engineering teams can sometimes get a bit distanced from the actual people using their code, but helping customers on a daily basis showed me how they currently interact with our product and what they hope to do in the future. My background in Success helps me advocate for features that will make the most impact for customers.”

Meet Chris Sperandio, Product Manager

Chris originally stumbled across Segment while aimlessly browsing Github repos. Later, when he saw the brand new “Success Engineer” position open up, he knew it was his chance to get in.

Chris sent in an impressive 3-page cover (love?) letter, so we brought him in from Bawston to meet the team. He started working for Segment the day after his interview (our fastest turnaround on record) and helped to start the Success Engineering “guild.” He took over technical support from our CEO Peter who was still running point back in August 2014.

After spending many months helping customers instrument Segment correctly, debugging integration issues, and sharing customer feedback with the team, Chris moved into Product Management. Chris is now responsible for making sure our integrations are up to date and helping new partners get started on the platform. He spearheaded our mobile platform launch and is currently working hard building a brand new product. (Stay tuned!)

“Hearing so many requests from our customers on a daily basis drove me into Product Management,” said Chris. “I wanted to make their requests happen, and getting directly involved in choosing what features we build and how we build them seemed like the best approach for me. Nearly all of what we’re building now came from conversations with our customers in support. We can’t wait to be able to share it with them!”

Meet Andy Jiang, Growth Marketer

A tinkerer at heart, Andy has a diverse professional background. He started in finance and banking, then went into startup sales at Twilio, and eventually landed at Segment as a Success Engineer, drawn by the newness of the technology. Andy has always had an interest in building communities, which led him to join our marketing team after his time as a Success Engineer.

Andy is the reason you might find so many emojis in our Twitter feed or notice a delightful quote when you receive a response from our success team.

Andy spends his days searching for awesome stories to tell and figuring out how to get them in front of the right people. He wrote one of our best performing posts, “How Segment Models Growth for Two Sided Marketplaces” and launched Analytics Academy. Because he brings a technical background to the team, Andy also helps our CTO Calvin power through his posts and sets up the tracking systems for the marketing team.

“Every time I write something new, I think about my conversations with our customers when I was in Success Engineering,” said Andy. “This helps me gut check that I’m picking topics our customers actually care about and gives me specific people in mind to write articles for — Is this for power users? Beginners? — Picturing these customers helps me create more focused, and therefore helpful, content.”

Meet Will Johnson, Systems Engineer

Before Segment, Will worked in finance and advertising as an analyst. He attended a coding bootcamp to expand his skillset from Excel to include web development and presenting stories visually with data. As an analyst, he experienced the frustrations of having unreliable or siloed data sets, so Segment really struck a chord with him.

Will started off as our first Implementation Engineer, working closely with our enterprise customers to get them set up. He’s tackled everything from how DraftKings should handle tracking across hundreds of pages and campaigns with the fewest possible events, to how Atlassian can track users across all of their different products.

After working with our customers on best practices for analytics, he wanted to help us reach the same level of sophistication internally. Now, Will is a systems engineer. He makes sure our tracking, tooling, and metrics are accurate and easily accessible. From writing ETL pipelines to analyzing pricing structures and teaching the rest of us SQL, Will touches a bunch of different data projects. You could say he is the ultimate dogfooder of our product. 🐶

“I love using my experience from working with customers to inform the development of our own internal systems,” Will said. “Whether I’m working on a new SQL query, ETL process or workflow automation, I’m always focused on helping us understand and serve our customers better.”

Customers First

We hope you enjoyed getting to know a few of our teammates here at Segment. We always want to be helpful, and having Success Engineers join new teams is just one way we make sure we keep our customers top of mind.

“A big part of our culture is sharing information across teams and giving teammates the opportunity to work on new projects,” said Adriana Roche, who heads up our people operations. “Success Engineers play a critical role keeping customers happy and funneling their feedback throughout the entire organization.”

If you’re curious about a career at Segment, check out our open positions! We’re looking for more driven, collaborative people to join the team.

Brent Summer on January 27th 2016

A couple of weeks ago a good friend of mine stopped by our Potrero Hill office. As soon as she walked in she said, “Wow! This place is like a breath of fresh air.” Ivy covers one side of our building, and plants are spread throughout our 20,000-square-foot warehouse office.

And we’re also now officially “green”: the City and County of San Francisco has recognized Segment as a green business! We’re thrilled to be among so many great businesses trying to preserve our beautiful natural world.

The SF Green Business Program comprises three city agencies: SF Environment, the San Francisco Department of Public Health, and the San Francisco Public Utilities Commission. These agencies advocate for environmental practices that are sustainable as well as profitable. The SF Green Business Program is also a member of the Bay Area Green Business Program and the California Green Business Network. Segment is one of 71 businesses recognized by the SF Green Business Program in 2015.

Green practices at Segment

Of course, it takes more than a few recycling bins to become a green business. Being a green business requires taking action to conserve resources, prevent pollution, minimize waste, and comply with environmental regulations. As Kermit sang, “It’s not that easy being green.”

These are the green practices we’ve introduced as part of this program:

  • Adopting a green purchasing policy

  • Installing low-flow aerators and flushers in all of the kitchen and bathroom facilities

  • Changing over all paper goods and cleaning products to green, SF-approved products

  • Using only EnergyStar and energy-saving technology

The leadership team has started doing street cleanups, and we push hard to make sure people are composting and recycling. Our leftover catering gets donated to neighborhood food programs. (Thanks Food Runners!)

There are dozens of commuters who bike in every day on bikes purchased using the $400 allowance Segment gives every new employee for two-wheeled, human-powered transport. Segmenters who live farther away can also take advantage of our Commuter Benefit Program to fund their public transit with pre-tax dollars.

We are trying to build a sustainable business. At Segment, being sustainable isn’t just about monitoring our Average Lifetime Value or controlling Customer Acquisition Costs. It also means reducing our carbon footprint, giving back through philanthropic efforts (we donated over $50k to charity in 2015), and being mindful about the products we purchase to keep our business running smoothly. Participating in the San Francisco Green Business Program fits right into that vision of sustainability.

For more information about becoming a SF Green Business, go here.

Andy Jiang on January 14th 2016

In December 2015, Slack announced it would invest $80 million into Slack bot startups. This comes as no surprise for Slack’s 2 million daily active users, and bolsters its strategy to create an ecosystem of productivity and collaboration services. There are already over 4,000 Slack integrations (in addition to 150 official apps) which have over 2.2 million installs. The community is finding Slack useful for nearly everything. And we believe this is just the beginning.

In fact, many companies are forgoing traditional user interfaces. Siri may be the most popular Natural Language Interface on the planet, but Amazon has Alexa, and now every startup team has Slack bots. Moreover, Slack’s rapidly growing platform may prove a powerful distribution channel for startups.

That said, measuring performance and achieving predictable growth for Slack bots is a new type of analytics challenge. What is the best way to measure activation and retention? What metrics are most important to track? How do we attribute new bot installs?

This post is a first look at implementing tracking and analytics for Slack bots. We’ve borrowed many philosophies from traditional web and mobile analytics and user behavior tracking, and applied them to the context of Slack bots. Keep in mind that things may change as this platform matures with more bots and products!

Tracking and Analytics Guidelines

Analytics is about learning and understanding growth. The general guidelines for deriving a set of events are:

  • Understand that everything is a funnel: How do people discover, start using, and pay for your product? What are the most important steps along the way?

  • Identify moments where the user derives value: Which key events, when interacting with your product, send a strong signal of engagement?

  • Maintain a consistent naming convention: Will all event names be “Object Action” or “Verb Subject”? What will the casing be? This will minimize headaches and save time for future you or a new teammate when conducting analysis or implementing new tracking code.
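
To make that concrete, here’s a small sketch of a consistent convention — “Object Action” event names with snake_case properties. All names and values here are illustrative:

```javascript
// Illustrative events following an "Object Action" naming convention
// with snake_case property names (all names and ids here are made up).
const events = [
  { event: 'Bot Installed',    properties: { team_id: 'T67890', source: 'homepage' } },
  { event: 'Message Received', properties: { message_type: 'new meeting' } }
];

console.log(events.map(e => e.event).join(', '));
// → Bot Installed, Message Received
```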

If you’d like to learn more about this approach to tracking your product for growth, here is a high-level guide to creating a tracking plan.

We won’t dive into #3 so much here (we have a post coming out soon about various naming conventions—stay tuned!), but we will look at #1 and #2.

Here are the common events (and their properties as sub-bullets) that are tracked to reflect the acquisition, activation, retention, revenue, and referral stages of the funnel. The properties are important, as they allow us to slice and dice the data in our analytics or marketing automation tools. Note that each event is attached to the user with an additional userId parameter to help downstream analytics tools tie events to users and teams:

  • Bot Installed: This event is fired when the bot is initially installed via the “Add to Slack” button.

    • team_id (since this team id from Slack is immutable)

    • source (for attribution, but only if visitors are installing your bot from a site that isn’t the Slack app store)

  • Bot Activated (optional): This event is fired when a necessary mid-step, such as authenticating a calendar, is completed. Many times bots aren’t able to provide full value until certain accounts are connected.

    • service_authenticated (e.g. “Google Calendar”)

  • Message Received: This event is fired when the bot receives a message. It’s important to note that the “topic” or “intent” of this event is sent as a property of this event, since the bot receiving a “hi” signals a different interaction than “what is the weather today?”

    • message_type (the “intent” of the message, e.g. “new meeting”)

  • Subscription Started (optional): This event is fired when the user begins a subscription and starts paying. This event assumes that the bot follows a SaaS model; if the bot is selling one-off items à la ecommerce, then Order Completed could be the event. The idea here is that this is the bottom of the funnel and revenue is tracked.

  • Bot Deactivated: This event is fired when Slack closes the Real Time connection’s websocket without an explanation, which is what happens when a user disables a bot (here’s a fantastic little post about handling this event).

Note that you can name your events and properties however you’d like! We used “Object Action” for event names and snake cased property names in the example above.

To help illustrate why some of these properties are included, let’s go into some growth objectives.

Measuring Attribution and Acquisition

If you want to get new users for your Slack bot, then install attribution is critical to measure performance of various campaigns and learn where your highest quality “leads” are coming from. To dig into user install attributions in Slack, let’s take a look at the two main ways to install Slack apps:

Unfortunately, installs directly from Slack’s app store don’t provide you with user attribution data—you won’t be able to know how that user got to Slack’s app store. Of course, it’s still immensely helpful to know whether Slack’s own marketplace is allowing users to discover your product (or not).

Here’s a trick to track new users who install directly via the “Add to Slack” button. The “Add to Slack” button is basically a link to Slack’s OAuth endpoint, which accepts an optional state parameter whose value is passed back to your server upon auth completion. Though the state parameter is typically used in OAuth flows to prevent cross-site request forgery, we can also append campaign or attribution information so we can attribute installs in our analytics tools.

Here’s an example of a URL that’ll kick off the add to Slack process (replace client_id with yours and state with your randomly generated id and some text that tells you where the user clicked the “Add to Slack” button, e.g. “homepage”):
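
A sketch of how such a URL might be assembled in Node — the client_id, scope, and state values below are all placeholders:

```javascript
// Build a hypothetical "Add to Slack" OAuth URL. The state value combines a
// randomly generated id with the location of the button (e.g. "homepage").
const params = new URLSearchParams({
  client_id: 'YOUR_CLIENT_ID',   // placeholder — use your app's client id
  scope: 'bot',
  state: 'b1e2c3-homepage'       // random id + where the button was clicked
});

const addToSlackUrl = 'https://slack.com/oauth/authorize?' + params.toString();
console.log(addToSlackUrl);
// → https://slack.com/oauth/authorize?client_id=YOUR_CLIENT_ID&scope=bot&state=b1e2c3-homepage
```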

After the user selects which Slack account to authenticate, Slack will redirect the user to your redirect URI with the state as a query string. Your client-side code can parse the query string to get the attribution information, and (either on the client or server) finally send an .identify() and a .track() call:
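
Here’s a runnable sketch of those two calls. In production, the payload objects below would be passed to .identify() and .track() on Segment’s analytics-node client; the ids and state value are hypothetical:

```javascript
// `state` comes back from Slack as a query string parameter on the redirect.
const state = 'b1e2c3-homepage';
const source = state.split('-')[1]; // "homepage" — where the button was clicked

// With analytics-node these would be sent as:
//   const Analytics = require('analytics-node');
//   const analytics = new Analytics('YOUR_WRITE_KEY');
//   analytics.identify(identifyPayload);
//   analytics.track(trackPayload);
const identifyPayload = {
  userId: 'U12345',              // placeholder Slack user id
  traits: { team_id: 'T67890' }  // immutable Slack team id
};

const trackPayload = {
  userId: 'U12345',
  event: 'Bot Installed',
  properties: { team_id: 'T67890', source: source }
};

console.log(trackPayload.properties.source);
// → homepage
```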

Note that the above code is written with Segment’s analytics-node library. Though the keys and values in the traits and properties objects are completely up to you, sending source is what allows you to attribute installs in your analysis.

Now you can attribute the source of your users, assuming a non-trivial portion of your signups comes from somewhere other than Slack’s “store”. Moreover, capturing key activation events (more on that in the next section) can tell you which “Add to Slack” button provides higher quality installs.

Measuring Activation and Retention

An activation event is an interaction when the user receives value. In the case of bots, this typically means when the user asks the bot to do what the bot is meant to do.

Retention, on the other hand, is a bit more tricky to track. Though users have the option of intentionally disabling the bot, it’s much more common for users to slowly forget about it over time.

In the two case studies below, we’ll explore measuring growth as # of requests a bot receives, teasing out “A-ha” moment experiments from the growth of activated users within a team, and finally calculating growth and retention as the ratio of daily active users to monthly active users.


Case Study: Birdly

Birdly, a tool for expense management that began as a mobile app, is now focused on building a completely chat-based interface in Slack. Its core value prop is to process expenses from pictures of receipts. However, the new distribution and usage opportunities from Slack’s platform have encouraged the team to expand its vision to provide a suite of back office services.

Image taken from Quang’s post on “Birdly launches a Slack bot for Expense Management”.

Birdly’s current strategy is to experiment with multiple different bots—expense management, turning business cards into Salesforce leads, translation services, managing grocery lists, etc.—and see what sticks. With this quick build-measure-learn iteration cycle, analytics around usage is deeply important.

The main event that they track for each bot is new request (triggered when the bot receives a message from a user) with properties request_type that indicates the intent of the message (i.e. uploading a receipt or something else). Keep in mind that the request_type that signals a core usage event varies amongst Birdly’s bots. For Birdly’s main bot, this is expense, whereas it is translate for the translation bot:
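
A fictitious sketch of that call — the payload below is what would be handed to .track() on Segment’s analytics-node client:

```javascript
// Fired server-side whenever one of Birdly's bots receives a message.
// The user id is a placeholder; request_type carries the message's intent.
const newRequest = {
  userId: 'U12345',
  event: 'new request',
  properties: {
    request_type: 'expense'   // 'translate' for the translation bot
  }
};

console.log(newRequest.event + ': ' + newRequest.properties.request_type);
// → new request: expense
```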

Note the above snippet is a fictitious representation of their actual server-side call, written with Segment’s analytics-node library.

To measure the success of each bot, Birdly closely watches the number of new requests with the corresponding request_type per week.

Growth is also important. The Birdly team aims to maintain its growth rate of 30% to 40% month-over-month in the number of teams who install one of its bots.

Birdly uses a variety of tools (enabled via Segment) to help measure and monitor these metrics—Google Analytics for measuring web traffic to their site, Mixpanel for measuring usage and trends, and Facebook Pixel and Twitter Ads for tying their custom bot install event to their ad campaigns.


Case Study: Meekan

Meekan, a flexible scheduling assistant, provides both a Slack and a HipChat bot for teams. Meekan can schedule a calendar event for various teammates by looking through everyone’s calendar, identifying availabilities (within parameters that you specify), and adding the event to the respective calendars.

This is me scheduling lunch with my friend, Alex:

And this is me confirming my attendance to SQL SQOOL:

Since Meekan’s value increases as more users within a team authenticate their calendars and schedule meetings via Meekan, the team cares about growth and distribution within a team.

As such, the team tracks installation (including the total number of users in the team), activation (authenticating a calendar, which creates a user profile in Mixpanel), and finally engagement (when a user initiates a request via Meekan).

With this data, Meekan uses Mixpanel to map out the adoption speed within a team:

Chart taken from a custom query with the data in Mixpanel.

Matty, Meekan’s product manager, understands there are many subtle factors in play here. “Some people don’t use calendars, don’t have meetings (or they just accept whatever meeting time is suggested), or are signed into Slack/HipChat, but don’t actually ever use it.”

However, with these conversion events and some other contextual data (size of team, observed scheduling behaviors, profile of the “champion”), the Meekan team plans to explore opportunities for the bot to demonstrate its value via public channels as a passive way to attract more team member adoption. “Chat bots have a lot of power and access to the entire team, even those who’ve never heard about them,” says Matty. “We make sure we don’t abuse this power.”

Since the Meekan team is focused on retention, one metric borrowed from mobile and Facebook apps is the ratio of DAU / MAU (a popular retention metric), where an active user is one who initiates at least one request with Meekan—what they have named meekan sentence, similar to new_request. This event is triggered after Meekan receives and parses a message from a user into four parts: intent, title, date, and time of day. Note that the below is just one example. There are other message intents, such as calendar queries, flight searches, etc.
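
For illustration, here’s one hypothetical shape for a parsed meekan sentence event, with the four parts described above (all values are made up):

```javascript
// One possible payload for a parsed "meekan sentence" — intent, title,
// date, and time of day extracted from the user's message.
const meekanSentence = {
  event: 'meekan sentence',
  properties: {
    intent: 'schedule',         // what the user is asking for
    title: 'lunch with Alex',   // proposed meeting title
    date: '2016-01-20',         // parsed date
    time_of_day: 'noon'         // parsed time of day
  }
};

console.log(Object.keys(meekanSentence.properties).length);
// → 4
```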

Image taken from Matty’s post on “Cheating on the Turing Test”.

The team’s focus is on retention—”making sure our current users are happy, and providing them with value, so that they’ll want to keep us around and use the robot daily. It’s very easy to disconnect the robot or forget you’ve ever installed him in the first place.”

The Future of Messaging Products

It’s still early for bot makers and Slack, so keep in mind that best practices may change as more bots join the platform and we learn what works and what doesn’t.

One thing about analytics that won’t change is that you should always be intentional about your tracking. Start with a goal, derive appropriate metrics (best to focus on one or two), and figure out which events are needed to uncover those metrics.

Messaging products, whose primary interface is through natural language, are starting to become more and more popular. On the enterprise side, Hubot and now Slack’s new platform of bots may become ubiquitous in how we initiate requests such as pulling customer account info or ordering a Philz coffee.

On the consumer side, Magic, WeChat (some great analysis from a16z about WeChat’s integral role as more than just a platform), Facebook’s M, and Google’s own Smarter Messaging product have powerful and grandiose visions of simplifying our lives. Soon, it may be common to interact with a product solely through natural language, whether verbal or text.

In any case, we’re excited to see what bot developers will build!

Have any ideas for tracking a Slack bot? Tweet at us!

Andy Jiang on December 18th 2015

It’s common for teams to use multiple tools to understand how users interact with their product. Two very popular analytics tools are Google Analytics and Mixpanel. These tools complement each other nicely, since both offer slightly different analysis capabilities. But the potential downside of using two different tools is data discrepancies. When data is inconsistent, it’s hard to trust.

In this post, we’ll share a systematic approach towards identifying the underlying causes of inconsistent data and debugging on both the client and server (we’ll save mobile for another time).

This post is meant for folks who have instrumented Mixpanel and GA separately. If you use Segment, you won’t have to worry about discrepancies because the same data is being sent to each tool. That said, these debugging steps are helpful in nearly any data discrepancy situation.

STEP 1: Confirm Discrepancies

Before mining through your logs and stack traces, you’ll want to confirm that there is a legitimate discrepancy. Due to the nuanced reporting and time zone differences of Mixpanel and Google Analytics, it’s easy to assume something is wrong when the numbers don’t match, when in fact things are working correctly.

Identify how big the problem is.

We usually suggest that customers investigate discrepancies if there is more than a 5% difference from their production database. If it’s lower than that, the differences are likely immaterial to your business and not worth a full tracking audit. Often, you’re using analytics to report trends rather than exact numbers (e.g. Are we growing? How fast are we growing?), so a slight difference isn’t a big deal.

If the difference is greater than 5% across all events or there is one event that doesn’t match, then that warrants further investigation.

Note if the issue is with page views or events.

First, check whether you’re seeing a difference in page views or in track events. If you’re working with page views, note that Google Analytics splits out numbers for URLs with random query strings appended. For example, page views will be distributed across the following URLs:
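
For example (these URLs are hypothetical):

```
https://example.com/pricing
https://example.com/pricing?utm_source=newsletter
https://example.com/pricing?ref=producthunt
```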

…whereas those would all be consolidated in other tools. This is a common reason for there to appear to be discrepancies, and why you might see more page views in Mixpanel. Note that Mixpanel does not track page views automatically; you have to send a mixpanel.track("Viewed X Page") call in the client-side code when the page is loaded to mimic a “page view” event.

If you notice a difference in numbers for both pageviews and events, do some more digging.

Test one event you’re sending to both tools.

To narrow down on where the inconsistency may occur, select one specific event that you are sending to both Mixpanel and Google Analytics. It’s important that this event is not something “unique” to one tool or the other.

For example, avoid comparing sessions in Google Analytics with an event in Mixpanel, since sessions are defined specifically by Google Analytics.

It may be obvious, but it’s equally important to view the report of this event across the same time period for both tools. We recommend using at least a week’s worth of data to minimize variance due to time zone differences.

Set your time zones correctly.

Mixpanel will default a new project to US/Pacific time (UTC-8), whereas Google Analytics defaults to your local timezone, so this could be another cause of “discrepancies.” Here’s how to check the timezone for your projects in each tool.

In Mixpanel, click on the tiny gear icon on the bottom left. Then you’ll see the below pop up that’ll have your project settings:

In Google Analytics, click on Admin, then View Settings:

Both Mixpanel and Google Analytics allow changing the default timezone, but note that those changes will not apply retroactively to existing event data.

Though you must send all events with UTC timestamps, Mixpanel will convert the time to the project’s timezone before storing it in its database. Once the events are stored, they cannot be altered later. Therefore, the updated timezone only affects new events coming in.

Mixpanel’s documentation on time zones further expands on how this will affect reporting:

As such, a changed time zone can result in either going ‘backwards’ or ‘forwards’ in time, creating a temporary doubling effect on your data, or an ominous looking hole in time where no data is received. This is due to the fact that the existing data remains in the previous time zone, but the new data either jumps forward in time or goes back to an earlier time zone.

In most cases, if the timezone is set to the same for both Google Analytics and Mixpanel, then the data should be pretty consistent.

Check if Google Analytics is sampling your data.

There will almost always be some discrepancy if Google Analytics “samples” data in your reports. Here’s more from their help page:

Sampling occurs automatically when more than 500,000 sessions (25M for Premium) are collected for a report, allowing Google Analytics to generate reports more quickly for those large data sets.

You can tell if the report is being sampled by the description on the top right corner:

Above screenshot taken from www.morevisibility.com.

There are certainly ways to work around sampling in your reporting. We won’t go into the details of troubleshooting this aspect of Google Analytics, but there’s plenty of literature on the web.

Now, if you still see a discrepancy, it’s time to look at the code.

STEP 2: Make sure events are firing from the same place

A common cause of discrepancies is that events are being sent differently to each tool. You need to ensure GA and Mixpanel fire events at the same time. If you send one client-side and the other server-side, you’ll probably see some differences.

If you’re using Segment to send data to both Google Analytics and Mixpanel, then feel free to skip to STEP 3.

It’s important that the tracking code for both tools actually lives close together, for maintainability and debugging.

For example, if you’re making a call to Mixpanel from the server and the same call to Google Analytics from the client, then you’ll run into data consistency and code maintainability issues. Every time you change the call to Google Analytics, you’ll have to remember to do the same for the Mixpanel call that is located in another corner of your code base.

You can check whether or not the tracking calls sit next to each other by going through your code base (trusty ol’ ctrl-f in Sublime, searching through the repo via GitHub, etc.).

Check whether the event is being fired to both Mixpanel and Google Analytics in the same areas of your code base. If the conditions are met to send a call to Mixpanel, is it 100% certain that a call will also be sent to Google Analytics?

If not, bring those calls back together.

If you’re seeing a difference in client-side fired events and your production database, you’ll also want to think through common problems with the client.

STEP 3: Consider common mishaps that happen in the browser

Sending calls from the client can be finicky. You’re at the mercy of ad blockers, intermittent network connections, and page unloads interrupting JavaScript on the page.

Predict the effect of ad blockers.

Ad blockers are browser extensions that prevent third-party tracking libraries from loading on the page. So, if events tracked from the client are coming in slightly lower than events sent from the server, it’s possible that ad blockers are the reason.

Depending on your audience, the impact of ad blockers could vary greatly. If your customers don’t use ad blockers much, you don’t have to worry about it. But if you’re targeting millennials in the Bay Area who spend time reading tech blogs, you could see a huge variance.

Additionally, there are configurable ad blockers like Ghostery that allow users to choose to block specific services. Unfortunately, there is no way to know those settings.

Your best bet to get around ad blockers is to send the most important events, like Account Created and Order Completed, from the server side.

However, if you have a tracking pixel that can only live in the browser, then one suggestion is to (nicely) ask your users to disable ad blocking for more personalized experiences.

Learn how to track form submits and handle other page unloads.

If you’re tracking a form submit or a button click that’ll take the user away from the page, it’s possible that this interrupts the track request from leaving the browser. Many browsers stop executing JavaScript when the page starts “unloading”, which means those JavaScript commands to send calls may never execute.

One prime example is if you’re tracking sign ups from the client based off a form submission. We’ve seen scenarios where sign ups were consistently ~10% fewer than the Account Created event that is triggered on the server side. Clicking the submit button will immediately start loading the next page, and often the track call for recording the submission won’t run.

One solution to this is to intercept the event and stop the page from unloading. You can then send your track call as usual, insert a short timeout, and resubmit the form programmatically. (Our friend Rob Sobers actually has a great post on this.)

Here is an example for form submit:
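
Here’s a minimal sketch of that pattern in the browser — the form id is hypothetical, and `analytics` is assumed to be Segment’s analytics.js snippet already loaded on the page:

```js
// Intercept the submit, fire the track call, then resubmit after a short
// timeout so the request has a chance to leave the browser.
var form = document.getElementById('signup-form'); // hypothetical form id

form.addEventListener('submit', function (e) {
  e.preventDefault(); // stop the page from unloading immediately

  analytics.track('Signed Up', {
    plan: 'free' // example property
  });

  setTimeout(function () {
    form.submit(); // programmatic submit doesn't re-trigger this handler
  }, 300);        // give the request ~300ms to fire
});
```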

Our open-source analytics.js library exposes .trackLink() and .trackForm() methods, which handle these use cases for you. You can see how we do the same thing in the code here.

If you’re tracking an event that may be affected by the page unloading, then it’s possible that these events are consistently fewer than server-side events. Also note that inserting the timeout is not a guaranteed workaround. Depending on the internet connection of the user, some requests still won’t make it. For business critical events, the best way is to migrate those to the server.

STEP 4: Inspect the requests

If you’ve checked each of these cases and considered all of these common problems, and still have discrepancies, it’s time to jump into code and inspect the requests.

Many analytics APIs accept everything and respond with 200 OK. This prevents their servers from crashing due to a sudden influx of bad requests, but also makes it harder for end users to debug their tracking.

For some semantic events, such as Google Analytics’ ecommerce events like Completed Order, certain properties (e.g. revenue) are required for the event to be populated successfully. Mixpanel is more lenient—there are no semantic event names that are treated differently.

Inspecting the requests can help you uncover these gotchas.

For this post, we’ll explore debugging on the client and on the server (we’ll save mobile for another time).

Debug on the client.

The developer console is your friend (tips to find the developer console). It allows you to send manual requests via JavaScript with the functions made available on the page.

For instance, you can open up the developer console in your browser while on your site and send calls:
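
For example — assuming the Google Analytics and Mixpanel snippets are already loaded on the page, and using arbitrary event names:

```js
// Send a test event to Google Analytics (analytics.js)
ga('send', 'event', 'Test Category', 'Test Action');

// Send a test event to Mixpanel
mixpanel.track('Test Event', { source: 'console' });
```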

After manually firing the call, you can inspect the Network tab. For Google Analytics, you can filter for requests going to “www.google-analytics.com”, whereas for Mixpanel, you can filter for requests going to “api.mixpanel.com” (though just typing “google” or “mixpanel” in the input field will suffice).

The network tab in Chrome’s developer console filtering for “google”:

Lastly, confirm that the requests populate in the end tool. For Google Analytics, the only report that populates in real time is the “real-time” view on the left side. Mixpanel events show up in real time by default, and are most easily accessed in the “Live view” tab on the left.

If the data is not populating in the end tool as you’d expect, check their respective documentation to ensure you are forming the call correctly.

More tips on debugging GA on the client, including their Chrome extension, can be found here.

Debug on the server.

There are many ways to unpack the request that you’re sending on the server side to see exactly what you’re doing. The way I like best is mimicking the web request with cURL, so you can experiment granularly at the request level to see if something works or not.

cURL is super powerful, but if you want tools that are easier to use, check out HTTPie or Postman.

Note that by design, these tracking APIs will return 200 OK. The key here is to use the real-time views of these tools to see whether or not the data is populated as you’d expect.

With Google Analytics, their measurement protocol defines how you can send a server-side event. They also have a special debug endpoint that you should use to validate the request. Here is a sample cURL command with hit type as “pageview” that you can try in your terminal (replace the UA ID and cid with yours):
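
A sketch of such a command — the tid (your UA property id) and cid values below are placeholders:

```shell
# Validate a pageview hit against GA's debug endpoint (nothing is recorded).
curl -i https://ssl.google-analytics.com/debug/collect \
  -d 'v=1&t=pageview&tid=UA-XXXXXX-1&cid=555&dp=%2Fhome'
```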

Keep in mind that the request data body must be urlencoded!

Note that the special debug endpoint won’t populate your Google Analytics real-time view; instead, you’ll receive a response in your terminal as such (here I provided an invalid tid):

If you’re having some trouble putting together the request, check out Google’s hit builder. It’s a nifty standalone tool that helps populate the parameters in the hit request, as well as validates the request against its validation server.

Once you’ve validated the request, you can send the same cURL again, but this time to the real endpoint (remove “/debug” so you’re sending it to “https://ssl.google-analytics.com/collect”) and you can see it populate in your real-time report.

Mixpanel is similar, but uses a GET request and passes the data as a base64-encoded query string. Here, we take the following JSON object:
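
For example, a minimal payload (the token and distinct_id are placeholders):

```json
{
  "event": "Signed Up",
  "properties": {
    "distinct_id": "13793",
    "token": "YOUR_PROJECT_TOKEN"
  }
}
```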

…encode it with base64, and append it to the Mixpanel endpoint, so the final cURL looks like this:
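
A sketch of that command, using placeholder values for the token and distinct_id (the `tr -d '\n'` strips the line wraps some base64 implementations add):

```shell
# Base64-encode the event payload and send it to Mixpanel's /track endpoint.
DATA=$(printf '%s' '{"event": "Signed Up", "properties": {"distinct_id": "13793", "token": "YOUR_PROJECT_TOKEN"}}' | base64 | tr -d '\n')
curl "https://api.mixpanel.com/track/?data=$DATA"
```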

Mixpanel will return a 0 if the event is rejected and a 1 if it is accepted. Their documentation also states which properties are required and optional.

Hopefully, you can confirm that requests are being received properly in both Google Analytics and Mixpanel real-time reports. Going through this exercise helps identify whether the request itself is populating as expected; if either one is not populating, then you know where the discrepancy is coming from!

Managing multiple tools and events

Debugging data discrepancies can be a drag, though hopefully these steps made it easier! If you’re interested in narrowing the scope of your tracking code for better maintenance, you can use Segment as a single API to route events to tools like Google Analytics, Mixpanel, and more.

Another way to keep your data discrepancies to a minimum is to document each event you’re tracking, what it’s capturing, and where it should be fired in a tracking plan. A tracking plan provides the necessary structure and discipline for learning about product usage, while keeping all team members on the same page about what events are tracked and why. You can learn more about the benefits of a tracking plan here and check out some downloadable tracking plan templates here.
