Erin Franz on December 20th 2016
We welcome Erin Franz, data analyst at Looker, to the Segment blog! Looker is a Segment integration that allows you to explore and visualize the data you collect with Segment. Last week, Looker launched Data Actions—yet another way for customers to interact with their Segment data on the Looker platform. Erin will share how you can take advantage of Segment Sources and Looker Data Actions together to operationalize your data.
Leveraging Segment Sources in your centralized data warehouse provides a complete view of each customer. Combining customer event data from mobile and web with data from cloud sources like Salesforce, Zendesk, and Sendgrid gives you actionable insights for sales, customer success, product, and other areas of your business. Using Looker and Looker Blocks on top of the data Segment collects, you can quickly build a centralized data model in Looker with visualization and exploration capabilities, powering everyone’s decision-making with data.
Looker recently launched Data Actions, which is yet another way you can interact with Segment data on the Looker platform. This feature lets you use your external tools to take action without ever leaving Looker. Segment customers can take advantage of Data Actions in the Sources they’re already using today, such as updating a record in Salesforce, triggering an email in SendGrid, or assigning a ticket in Zendesk – all directly from Looker. Let’s walk through a real-life example, where we’ll tackle modeling and joining data from Segment’s Salesforce source and Segment customer event data in Looker, and then create a Data Action to trigger an email in SendGrid directly from the Look we’ve created.
Looker’s LookML modeling layer allows you to model data and expose it to your organization without having to move the data from its source. Looker supports all Segment Warehouses endpoints: Postgres, Amazon Redshift, and, most recently, Google BigQuery. Let’s assume we’ve synced Salesforce as a Segment Source in addition to collecting customer event data with Segment’s browser library. We’ll also consider pageviews as an indicator of engagement on our application. We can join our pages table to our Salesforce accounts table by creating the following explore in Looker. We’ll also bring in some contact information to use in the SendGrid Action later.
By joining our pageview data to our Salesforce data, we can get engagement insights at the account level that could indicate propensity to churn. For pages, we’ll define measures in our LookML pages view file to help us calculate week-over-week change in pageview count:
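Those measures amount to a simple calculation. Here is a minimal Python sketch of the same week-over-week logic, assuming pageview counts keyed by date (the data shape is illustrative; the actual LookML measures operate on warehouse columns):

```python
from datetime import date, timedelta

def week_over_week_change(pageviews_by_day, today):
    """Compute last week's pageview count and its week-over-week
    change from a dict mapping date -> pageview count (illustrative
    data shape; in Looker this logic lives in the pages view's measures)."""
    week_start = today - timedelta(days=today.weekday())  # Monday of the current week
    last_week_start = week_start - timedelta(days=7)
    prior_week_start = week_start - timedelta(days=14)

    def count(start, end):
        return sum(n for d, n in pageviews_by_day.items() if start <= d < end)

    last_week = count(last_week_start, week_start)
    prior_week = count(prior_week_start, last_week_start)
    if prior_week == 0:
        return last_week, None  # change is undefined without a baseline
    return last_week, (last_week - prior_week) / prior_week
```

A 74% drop in activity, like the one we’ll see below, would show up here as a change of roughly -0.74.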
Assuming we’ve already defined a dimension for account names in the Account view file, we’ll select Account Name, Pages Count Last Week and Pages Week Over Week change in Looker’s Explore section. (We also added some conditional formatting to Week over Week change to easily identify at-risk accounts!)
We now have a list of accounts that have shown a recent decrease in activity. We’d point our account management team to this Look so they can take action on better engaging those accounts.
Our account management team could take this list of at-risk customers and take action in external applications like Salesforce or email. But wouldn’t it be easier if they could take action directly from Looker? Let’s add a Data Action to email in the view file for our contacts table. Using the Looker documentation for Data Actions and SendGrid’s Web API docs, we can construct the action in LookML to send an email to the Contact displayed in Looker.
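The action ultimately posts to SendGrid’s v3 mail-send endpoint. Here’s a rough Python sketch of the JSON body that endpoint expects; the sender address and subject are illustrative, and this stands in for the shape of the request rather than reproducing the LookML itself:

```python
import json

def checkin_email_payload(to_email, body):
    """Build the JSON body for SendGrid's v3 /mail/send endpoint.
    The sender address and subject line here are illustrative."""
    return {
        "personalizations": [{"to": [{"email": to_email}]}],
        "from": {"email": "success@example.com"},
        "subject": "Checking in",
        "content": [{"type": "text/plain", "value": body}],
    }

payload = checkin_email_payload("contact@example.com", "How is everything going?")
print(json.dumps(payload, indent=2))
```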
Suppose we want to reach out to Bubble Guru, an account that showed a 74% decrease in usage. We can filter the Account Name on Bubble Guru and add in Contact Email to get a list of emails associated with the account. Now when we click the menu next to each email, we see the option to send Check-in Email.
When we click this option we’ll see a modal window where we can directly compose email from Looker using SendGrid.
Solely using Looker and Segment, we’ve been able to attribute usage data to our accounts, model that data for actionable insights, and take action directly from Looker. We talk to companies all the time about eliminating “data breadlines” — how we can help companies break down data silos and empower business users to get insights from their data. Data Actions is the next step in building a data-driven culture. We can enable teams across an organization to simplify their workflows in Looker with data from Segment, no context switching required.
If you’re not already exploring your data with Looker, we’d love to hear from you!
Julie Jennifer Nguyen on December 19th 2016
According to Comscore’s 2016 Mobile App Report, mobile users spend 9 out of 10 minutes using only their top five favorite apps. Companies are fighting for a coveted spot on that short list, and these days, a highly engaging app isn’t a nice-to-have — it’s a necessity.
In this high-stakes climate, the companies that come out on top aren’t just the ones who have built performant apps, but the ones who constantly iterate and improve their users’ mobile experience and drive ongoing engagement and retention.
Here’s how to make sure you’re one of them.
Successful companies know that using different tools to analyze what their users are doing and optimize their product can sometimes lead to unnecessary SDK bloat. The more bloat, the slower an app is to load, the more it crashes, the more battery it drains — and the higher the risk of an uninstall.
Sometimes, though, adding multiple packaged SDKs in your app makes sense. You need one for analytics, one to send to your own systems, maybe something for email and attribution. But if you could preserve functionality in end tools AND reduce the weight of your mobile app, why wouldn’t you?
Segment provides a single, lightweight SDK that allows companies to use hundreds of mobile growth vendors without having to add each one natively to their app. By leveraging the latest in mobile technologies to optimize for app size and data deliverability, we help companies reap the benefits, not the risks, of using the right tools for analyzing and reengaging their users.
Savvy mobile teams think very carefully before adding vendor SDKs and “black box” code into their app. Best case scenario: an extra vendor SDK adds weight to your app. Worst case scenario: it causes crashes and errors that result in uninstalls. In a world where 80% of users remove an app after 3 months, it’s important to know what’s going into your app and to reduce uncertainty and risk wherever you can.
Segment’s libraries are all open-source, so teams can see exactly what our code is doing under the hood. Through our extensive catalog of server-side integrations, we minimize the need for companies to load partner SDKs into their apps, which means less code to troubleshoot, less uncertainty to worry about, and more time spent building an app that delivers and delights.
Winning mobile teams know better than to track the same event, like an app install, over and over as they add new vendors. Implementing all of that tracking isn’t just repetitive and mundane, it also means more work to maintain those codebases as vendors update their APIs or as a company expands or changes its tracking needs. And sometimes your numbers and event names don’t match up across all of your tools.
Segment’s Native Mobile Spec standardizes events like “Application Installed” and collects them automatically through our SDK. So instead of having to write and rewrite new events to track them in downstream tools, teams can write them once, and we’ll transform them in the end tools.
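That transformation idea can be sketched as a small lookup: one spec event name in, a tool-specific name out. The tool names and translations below are hypothetical, made up for illustration rather than taken from Segment’s actual mappings:

```python
# Hypothetical per-tool translations for one Native Mobile Spec event.
# (Both the tool names and the target event names are illustrative.)
TRANSLATIONS = {
    "Application Installed": {
        "tool_a": "App Install",
        "tool_b": "install",
    },
}

def translate(event, tool):
    """Return the tool-specific name for a spec event, falling back
    to the spec name when no translation is defined."""
    return TRANSLATIONS.get(event, {}).get(tool, event)
```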
Smart companies focus on the mobile experience. Smarter companies focus on the customer experience and use what they know about how users interact across every platform to drive more engagement and retention on mobile.
They know, for instance, that their most engaged users tend to download their app after seeing a paid search ad on their mobile website. They also know that those users re-engage when they receive a push notification after 6PM on Sundays and Tuesdays. They even know that users who write in to support have a 20% higher CLTV than users who don’t. They know this because they’ve analyzed it.
Segment Sources combines user behavioral data with data from Google AdWords, Salesforce, Zendesk, SendGrid, and more to help companies build better apps and more personalized customer experiences.
Segment’s customer data platform powers the analytics stack for 3,000 mobile apps that, collectively, have over 500 million downloads. Companies like HotelTonight, VSCO, and DraftKings use Segment to track the entire customer journey and level up their analytics.
Jessica Kim on December 15th 2016
At Segment, we’re constantly looking for new ways to help our customers. Our core mission is to simplify how you collect, unify, and act on customer data – by adding brand-new integrations like Google BigQuery, for instance, or providing expert insights on building your marketing stack for 2017. But we also do plenty of helping in our day to day, when our thousands of customers reach out to us looking for help and advice.
We know that there’s nothing more frustrating than questions gone unanswered, so we hosted our first Brainiac Bar with the sole focus of creating a space for customers to come ask our brilliant Success team their toughest questions. We added a full bar, onsite massages, and a DIY burrito station to help make the process as fun as possible (and also because burritos, am I right?).
Our Brainiacs whiteboarded, troubleshot, and otherwise applied their collective genius to the tasks at hand. We also took the opportunity to chat with our customers for their take on some common questions about Segment. Here are the top themes addressed at our event – some by our Success team, others straight from our customers themselves:
One of the top questions we get at Segment is about how our platform helps solve different problems for different teams. While chatting with Blake Barrett, CTO of fundraising video startup Pitch.ly and software engineer at a popular music streaming service – both clients of Segment – he hit the nail on the head:
“We have our own analytics-type pipeline where we report events, log them locally, and save them to a data warehouse [where] we run queries on the data. Because we have actual data scientists work on that, the turnaround between asking a question and getting data out of it is a long tail. So the PM who suggested Segment wants to be able to see stuff right away and discover as much information as possible.”
As a CTO and engineer, however, his priorities are different: the value lies in being able to skip the building of integrations for the different tools that PMs and other departments are looking for.
“I don’t really know or care what the PM is doing – he’s going to make dashboards and [explore tools] that are really not my concern. What I really care about is an easy way to feed data in [so the PM can do what he needs to]. That was why Segment seemed appealing.”
Others wondered how to move data from other tools (in this case, Apache Spark) into our platform. If you have historical data you want to import into your Segment warehouse, you can always use one of our server-side libraries:
Export your data from whatever repository it’s stored in
Format it so that it’s ingestible by one of our server-side libraries (analytics-node, analytics-ruby, analytics-python, etc. – see a full list of Segment’s sources here)
Pass the data to Segment in the form of track, identify, page, screen, or group calls
As long as you’ve connected a data warehouse to your Segment account, your historical data will start to populate in the form of nicely schematized Segment events.
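As a sketch, here’s how one exported row might be formatted into a track-call payload, with the original timestamp preserved so events are backdated correctly. The field names (`user_id`, `occurred_at`, etc.) are illustrative; adapt them to however your repository stores events:

```python
import json
from datetime import datetime, timezone

def to_track_call(row):
    """Format one exported row as a Segment track-call payload.
    Field names ('user_id', 'occurred_at', ...) are illustrative;
    adapt them to your repository's schema."""
    return {
        "userId": row["user_id"],
        "event": row["event"],
        "properties": row.get("properties", {}),
        # Pass the original timestamp so the event is backdated
        # instead of being recorded at import time.
        "timestamp": row["occurred_at"].astimezone(timezone.utc).isoformat(),
    }

rows = [{
    "user_id": "u123",
    "event": "Order Completed",
    "properties": {"revenue": 42.5},
    "occurred_at": datetime(2016, 6, 1, 12, 0, tzinfo=timezone.utc),
}]
payloads = [to_track_call(r) for r in rows]
print(json.dumps(payloads[0], indent=2))
```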
One of the best parts of the Segment community is discovering the shared values between our clients and our own company. MonkeyLearn, a Machine Learning API for developers building text analysis applications, is both a vendor and client of ours: they help us optimize our email content to provide better experiences, and we help keep their engineering team lean and nimble as they scale.
MonkeyLearn Co-Founder and COO Federico Pascual put it this way:
“As a small startup, anything that can make our team more efficient is a good thing. The less time our engineers use to implement [different] integrations, the better, so they can focus on the important things like building new features within our product. Segment empowers us to do more with the same amount of resources.”
Over 62% of customers told us they had questions about tracking plans prior to this event, and it’s easy to see why. Tracking plans differ based on each company’s business priorities and the kinds of questions they’re hoping to answer through their data; this leads to a lot of different questions. (Which is also why we give out a thorough, battle-tested tracking plan you can use as your baseline. Just sayin’.)
Though the properties for each business use case may vary, some things are universal. One such tip: to stay consistent across tools, send flat properties instead of nested ones.
Some tools will flatten for you, but others will keep the data hierarchy. Sending everything flat from the start will keep things consistent and save you a potential headache later.
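A minimal sketch of what sending flat properties means in practice: a helper that flattens nested properties before they’re tracked (the underscore separator is one possible choice, not a Segment requirement):

```python
def flatten(props, parent_key="", sep="_"):
    """Recursively flatten nested properties so every downstream tool
    receives the same flat keys, e.g. {'cart': {'total': 9}} becomes
    {'cart_total': 9}."""
    flat = {}
    for key, value in props.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat
```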
All in all, it was a night of great conversation and problem-solving, two of our favorite things at Segment. Have any more tricky questions? Tweet us @segment, and stay tuned for news of the next Brainiac Bar to come chat with us in person. There will probably be burritos.
Andy Jiang on December 6th 2016
Here at Segment, our marketing team takes a twofold approach to planning for the future. We foster a strong data-driven culture which helps keep everyone aligned around common information and goals. And we use that data to experiment with improving our targeted marketing and customer experience.
We believe in open-sourcing the creative projects we develop, so we’ve released a report that details the exact processes and tools that we use to help you build a best-in-class marketing stack using the latest technologies.
TL;DR: Get a head start on 2017! Learn about Segment’s modern marketing stack and hear what 12 visionaries have to say about the future of marketing.
But what exactly is a modern marketing stack? What factors should be considered when deciding what to include in one or how teams can leverage one effectively?
Our VP of Growth and resident mad scientist, Guillaume Cabane, not only championed a culture of being advanced with our own data analysis, but also built a machine of interconnected tools using Segment that collect, pipe, and enrich data. Though his shoot-from-the-hip attitude towards trying new tools and connecting systems together might seem like unhinged madness, he is in fact quite deliberate in his approach to assessing new tools and automating various growth processes.
“The goal with these systems is to use personalization to create authentic rapport,” Guillaume says. At the end of the day, good marketing campaigns are about people. When automating those campaigns with data, we need to create that rapport and trust. “What would a sales rep do if they had unlimited time and data? How would they handle our leads? The automation must feel authentic to the end user.”
One example of an automated process at Segment to achieve personalization across multiple channels. Download the free report for more examples.
This important principle — creating authentic rapport — is one of a handful that guide our approach towards data, automation, customer interaction, and the process of building our modern marketing stack.
But Guillaume is just one — albeit, highly capable — madman. We decided to extend the question about the future of marketing to visionaries and experts across the industry, including Andrew Chen of Uber, Danielle Morrill of Mattermark, Hiten Shah of Quick Sprout, and many others.
We’ve narrowed down the considerations for building a modern marketing stack to a set of questions regarding data culture and governance, optimizing for the customer journey, and what personalization means for your customer-facing teams.
Again, their responses focus on the customer. Every customer interaction must be personalized and add value. “Customer engagement is the new marketing,” offers Danielle Morrill. Additionally, your customers don’t interact with your email or CRM tools — they interact with your brand. Hiten Shah echoes this sentiment, saying, “In a crowded market, you don’t win with marketing, you win with brand.”
These expert marketers, though they span different media and verticals, understand that the customer is ultimately responsible for the success of your business.
While there are larger principles that guide how tools are selected, connected, and used by your team, the modern marketing stack should ultimately reflect the needs of your business. We hope that our white paper, along with the trend observations of these marketing experts, inspires and guides you not only to build the marketing stack your organization needs, but also to leverage your data to connect more strongly with your customers.
Get a head start on 2017 and view all of the visionaries’ responses.
Learn how Segment can help your business grow. Get a demo today.
Diana Smith on June 20th 2016
Capturing high quality customer data on mobile devices is essential to making products that customers love. But the process to capture that data can be tedious. We polled attendees at our recent WWDC meetup—a mix of engineers, data analysts, and growth marketers—on their biggest mobile data challenges.
Our poll results showed foundational steps still pose the biggest obstacles to many teams:
Building an effective data pipeline.
Finding the right analytics tool(s).
Tracking across multiple devices.
Last week, Hakka Labs and Segment teamed up to host an event about the right way to track mobile data. We recruited panelists from Instacart, Pandora, Branch Metrics, Invoice2go, and Gametime to discuss mobile customer data collection and share their experience in overcoming data obstacles.
A video of the entire panel will be available soon. In the meantime, we wanted to share some of the insights we learned about the biggest mobile data challenges affecting teams today.
Building an effective data pipeline:
There are several steps to building an effective data pipeline. But here are the two you should start with:
Develop a naming schema that’s followed across your company — and teach your company to follow it.
Implement a central repository for all of your data.
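A naming schema is easier to follow when it’s enforced in code. Here’s a hedged sketch that validates event names against one possible convention ("Object Action" in Title Case); your own convention may well differ:

```python
import re

# One possible convention: "Object Action" in Title Case,
# e.g. "Order Completed". Adjust the pattern to your own schema.
EVENT_NAME = re.compile(r"^[A-Z][a-z]+( [A-Z][a-z]+)+$")

def check_event_name(name):
    """Return True if an event name follows the assumed convention,
    so bad names can be flagged at code-review or CI time."""
    return bool(EVENT_NAME.match(name))
```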
“Maintaining a schema is a challenge,” said Gautam Joshi, Engineering Program Manager of Analytics at Pandora. This was a sentiment that was echoed by others on the panel and our poll participants. He added, “There is always a tradeoff between governance and putting people close to the data in charge of tracking. For example, we’ve decided to implement analytics at the PM level for each product. That makes it harder to enforce naming conventions, but does help that PM make sure he has the data he needs to analyze a feature’s performance.”
Instacart embeds analysts on product and marketing teams to set the tracking schema and KPIs from the beginning of projects. It’s on the analysts to standardize the data and make it easy to use. Instacart has found that leaving this to the analysts ensures that the data provides what the team is looking for in the end.
But the schema is only part of it. To give you a more complete perspective on how business is performing, you should bring your customer data from various systems into a central repository. “You can’t optimize for just one metric, or it will all fall apart,” said Che Horder, Director of Analytics at Instacart.
“If you optimized for one metric, you’re inevitably going to miss other important things,” added Beth Jubera, Senior Software Engineer at Invoice2go. “For example, when we were optimizing for app reviews, people started complaining in reviews that we should stop asking for reviews! Since then, we’ve made sure to balance at least two metrics so we don’t over-optimize one at the sacrifice of user experience in other places.”
Finding the right analytics tool:
Once you’re collecting data, figuring out how to make it actionable is a wholly separate challenge. A number of the panelists recommended Amplitude and Periscope for analysis, but your own data needs might differ. Horder and Jubera both talked about how Segment makes it easier to try new tools. It even allows you to keep your historical data, so that you can use a new tool as if you’ve had it on since day one.
Tracking data across multiple devices:
26% of ecommerce purchases start on a mobile device (but finish elsewhere), which makes cross-device tracking crucial. Mada Seghete, co-founder of Branch Metrics, built her entire business on the idea that without an effective method of tracking user activity across devices, you’re missing a big part of the story.
“We track across mobile and web by tying user-level tracking with device IDs,” said Jubera. Horder said they make sure they use the same naming taxonomy across devices to analyze the same features and understand how user behavior differs between devices.
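The device-to-user tying Jubera describes can be sketched roughly like this — a toy identity-resolution pass, not any vendor’s actual implementation:

```python
def resolve_users(events):
    """Toy identity resolution: once any event on a device carries a
    user ID, attribute that device's events to that user. Events on
    devices never seen with a user ID keep user_id = None."""
    device_to_user = {}
    for e in events:
        if e.get("user_id"):
            device_to_user[e["device_id"]] = e["user_id"]
    return [
        dict(e, user_id=device_to_user.get(e["device_id"], e.get("user_id")))
        for e in events
    ]
```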
For example, by tracking performance across different platforms, Instacart discovered that their shopper app performed slowly on a few types of Android devices, and now they’re working on enhancements to make every version faster. As a Segment Warehouses customer, they’re able to combine online and offline data from their mobile app, website, and other back office systems to connect the dots across the entire customer experience.
We’ve found that most data engineers want the same things: to collect customer data and use it to create a product that their customers love. And they seem to have similar pain points in getting there. Here are some additional recommendations:
Front load the planning. Start with a tracking plan.
Have a clear thesis. Nothing ships without a hypothesis, goal, and metrics.
As you progress, hold teams accountable to the data.
Always talk to your customers, even after product market fit.
While your solution will no doubt be specific to your use case, you can certainly learn a lot from those who’ve already overcome common similar challenges. Creating a data-driven culture is hard, but totally worth it. As John Hession, VP of Growth at Gametime, said, “Data is all we care about.”
Jack McCarthy on June 3rd 2016
The online clothing retailer Trunk Club burst on the scene in 2009 with a breakthrough idea. It employed personal shoppers to give incredibly customized, professional shopping experiences. Customers expressed personal preferences through the company’s mobile app and its website, then shoppers selected a tailored wardrobe, or trunk, for the clients to explore.
But as the company grew (it was acquired by Nordstrom in 2014), it found itself challenged to manage multiple data streams, including online customer data, automated shopping catalogues, and transaction and ordering systems, and it needed to develop a data collection strategy. Trunk Club, like many growth-oriented businesses, found that it’s really hard to keep their data tracking clean and consistent — a requirement for discovering actionable insights.
That’s where Segment came in. Trunk Club found incredible value in being able to track data once across their mobile app and website, and route that data to multiple tools their entire team needed without tying up their product engineers.
I recently sat down with Jason Block, a senior front-end developer at Trunk Club, to learn how Trunk Club uses Segment. Read the case study.
Before Segment, Trunk Club used a few apps to review its data, including Kissmetrics for tracking a limited number of events, BrightTag (now Signal) to track marketing events, and Google Analytics to understand their web traffic.
“But there was no consistency in how the events were captured, where they went, or whether our business intelligence team could do anything with them,” Block said. “We had no direction or strategy when it came to client-side event tracking.”
Segment enabled a comprehensive data-collection strategy. “The idea of creating a consistent developer experience for tracking data resonated with us,” Block said. “There was nothing else like it.”
Trunk Club is using Segment Warehouses to load all of its customer data into Amazon Redshift, giving analysts and developers a single data repository. Almost 100 employees, including product managers, engineers, and people working in business intelligence, design, finance, and marketing, are making decisions with data collected by Segment.
Of course, Trunk Club isn’t just amassing data, they’re using it to improve the customer experience. Their more than 500 stylists rely on machine-learning powered recommendation algorithms to help find just the right scarf to go with that top. Block said, “We can continue to adjust to the customers’ needs by seeing what they are doing on our sites.”
With Segment, Trunk Club tracks data on its mobile apps and website and then ties that data with shopping catalogues, transactions, and other data sources like Mandrill for email and Intercom for shopper support. “Segment,” Block said, “has significantly affected and improved Trunk Club as a service.”
Brittany Fleit on May 25th 2016
Leanplum, a mobile marketing platform and Segment partner, released a new data science report, “Personalize or Bust: The Impact on App Engagement.” We analyzed 1.5 billion push notifications. The big finding? Personalization increases open rates by up to 800 percent! That’s pretty huge.
Personalization, we found, improved nearly all aspects of the funnel. According to the analysis, push notifications triggered by individual user behaviors produce nine times the open rate of blasts sent immediately.
But first, let’s backtrack a little. Here’s how we got that data.
We looked at 1.5 billion push notifications, sent from apps to users around the world. These included massive apps with millions of Monthly Active Users (MAU) and smaller apps still building steam. Data covered the course of January 2015 to March 2016, or about 455 days.
For every metric, we examined the average open rate and the median time to open for the following four categories:
With this, we wanted to determine why users returned to an app, the time span in which they re-engaged, and how message relevance affected opens. In other words, how does personalization impact app engagement?
For this blog post, we’ll zoom in on the effect of push notification delivery type on open rates. To learn more about Android versus iOS, personalization content, or location, check out the full report.
Let’s begin by breaking down delivery types. There are five ways app managers can push notifications to their users.
Immediate: Launch messages right away, to selected users at once.
Scheduled Blast: Send messages to selected users at a predetermined future time.
Scheduled by Time Zone: Program messages to arrive at the same time in every user’s local time zone. For instance, every user would receive the message at 8pm their time.
Optimal Time: Use a machine learning algorithm that analyzes individual app usage patterns to automatically send messages at a time in the day when users are most likely to open.
Behavior-Based: Deliver messages in response to specific behaviors. For example, if a user adds an item to their mobile shopping cart, a behavior-based push reminds them to check out.
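The Optimal Time idea above can be sketched very roughly: look at when a given user has opened notifications before and send at that hour. This is an illustration only, not Leanplum’s actual algorithm:

```python
from collections import Counter
from datetime import datetime

def optimal_send_hour(open_timestamps, default_hour=18):
    """Pick the hour of day at which this user most often opened
    past notifications; fall back to a default when there is no history.
    (A sketch of the idea, not Leanplum's actual algorithm.)"""
    if not open_timestamps:
        return default_hour
    hours = Counter(ts.hour for ts in open_timestamps)
    return hours.most_common(1)[0][0]

# Sample history: this user tends to open notifications around 20:00.
opens = [datetime(2016, 3, 1, 20, 5), datetime(2016, 3, 3, 20, 40), datetime(2016, 3, 5, 8, 15)]
best_hour = optimal_send_hour(opens)
```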
To keep it simple, let’s parse out how these delivery types perform in sections. First, let’s review how push notifications scheduled to go out at a future time compare to one another. Below, we’ve laid out the engagement rates for messages sent via scheduled blast, scheduled by time zone, and Optimal Time.
The takeaway: Optimal Time sees the highest success rates. Optimal Time accounts for users’ individual engagement patterns, sending push notifications when users are prone to open the app. The intelligence of the algorithm contributes to much higher open rates.
In another report, we learned that users around the world engage with push notifications at different times of day. Even if marketers schedule by time zone, sending at 8pm, for example, may result in great open rates in North America, but due to cultural differences, users in other regions might prefer to engage earlier in the day.
While some brands may think that localizing by time zone is sufficient, data shows that users respond to even greater levels of personalization. Every person is unique. Apps must leverage tools that recognize and respond to individual engagement patterns.
Next, let’s look at behavior-based deliveries. What are behavior-based deliveries, exactly?
Behavior-based deliveries are messages sent in response to user actions. Here, we’ve laid out three sample push notifications sent via behavior-based deliveries. You can see that a travel app may message a frequent flier about hotel deals in response to a booked flight. A music app may send a listener an alert that the artist they recently listened to released a new album. A retail app may send a shopper a notification that the item they viewed last week is now on sale.
Here’s how behavior-based deliveries perform side-by-side with immediate sends.
What we found:
The takeaway: of the five delivery types, brands were less likely to send messages via immediate and behavior-based methods. Both of these categories had under 100MM pushes sent. Yet behavior-based notifications, triggered by unique user actions, saw astounding engagement.
In fact, behavior-based push results in open rates 800 percent higher than immediate blasts. Personalizing a message based on individual actions garners much more engagement.
Starting a one-on-one conversation that recognizes and responds to individual behaviors is invaluable in mobile marketing. After all, mobile is the most personal device people own. Your communications should reflect that.
To get access to the full report, download “Personalize or Bust” today. If your team is looking for a mobile marketing solution, Segment makes it easy to set up Leanplum. Leanplum combines industry-leading solutions for Messaging, Automation, App Editing, Personalization, A/B Testing, and Analytics. We’d love to chat about how we can help you reach your mobile goals. With Segment, you’ll be able to get started in a jiffy.
Peter Hermann on April 26th 2016
Sales teams play a critical role in driving growth at B2B companies. However, throwing more warm bodies at phone calls is not the best way to get to that hockey-stick curve. The sales process has its own wealth of complexities, such as identifying the best opportunities to focus on, determining the most helpful content to give to prospects at the right time, and knowing which marketing channels generate the highest quality opportunities. Before you can reliably and predictably scale revenue, you need to understand and optimize these dynamics.
We use Salesforce here at Segment, and it provides great out-of-the-box reporting. But, like all out-of-the-box tools, it lacks the ability to answer these types of granular questions about our business. Additionally, the data is siloed in Salesforce, making it hard to tie sales conversions to product usage or page views on our blog.
Now, with Segment Sources, we can send our sales data into a data warehouse, where we can JOIN across product usage (collected by Segment), as well as other Sources like Zendesk and Stripe. Having this data accessible allows us to not only make faster and better decisions, but also to launch measured growth experiments with more confidence.
In this post, I’ll outline the major questions and related queries combining various datasets for our sales team at Segment. We used Mode Analytics since they have put together some great resources on Salesforce data, like this Salesforce CRM data eBook.
Note that we replaced the number values in these charts with fake data, but the trends and percentages we saw are real. Many thanks to analysts Will and Perry on our team for helping me with some of the queries!
It’s commonly understood that the most effective salesperson is someone who has a maniacal focus on time management—that is, knowing which opportunities to work and which to de-prioritize.
Knowing which opportunities to focus on ultimately depends on the context. For example, if your quota is based on the number of logos added (i.e. deals closed), then you’d want to optimize for shortest sales cycle and highest close rate. However, if your quota depends on the size of the deals, then it makes sense to aim for the larger opportunities.
We’ll provide the queries for each of these angles (close rate, average sales length, and deal size by ARR—annual recurring revenue), but for this analysis, we’re going to look at these dimensions: employee count and vertical.
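As a rough illustration of what these queries can look like, here is a minimal sketch using SQLite from Python. The schema and data are hypothetical and far simpler than a real Segment Salesforce warehouse (where this table would be something like `salesforce.opportunities`), but the shape of the query is the same: bucket by employee count, then compute close rate, average sales cycle, and average won ARR per bucket.

```python
import sqlite3

# Hypothetical, simplified schema; a real Segment Salesforce warehouse table
# has many more columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE opportunities (
    id TEXT, employee_count INTEGER, vertical TEXT,
    created_date TEXT, closed_date TEXT,
    stage TEXT,   -- 'Closed Won' or 'Closed Lost' once the deal is decided
    arr REAL
);
INSERT INTO opportunities VALUES
    ('o1', 120,  'ecommerce',   '2016-01-01', '2016-02-10', 'Closed Won',  24000),
    ('o2', 150,  'advertising', '2016-01-05', '2016-02-20', 'Closed Won',  30000),
    ('o3', 180,  'ecommerce',   '2016-01-10', '2016-03-01', 'Closed Lost', 0),
    ('o4', 2500, 'finance',     '2016-01-02', '2016-05-15', 'Closed Won',  120000);
""")

# Close rate, average sales cycle, and average won ARR by employee bucket.
query = """
SELECT
    CASE
        WHEN employee_count BETWEEN 51 AND 200 THEN '51-200'
        WHEN employee_count > 1000 THEN '1000+'
        ELSE 'other'
    END                                                           AS employee_bucket,
    ROUND(AVG(stage = 'Closed Won'), 2)                           AS close_rate,
    ROUND(AVG(julianday(closed_date) - julianday(created_date)))  AS avg_cycle_days,
    AVG(CASE WHEN stage = 'Closed Won' THEN arr END)              AS avg_won_arr
FROM opportunities
WHERE stage IN ('Closed Won', 'Closed Lost')
GROUP BY 1
ORDER BY 1
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

The same query cut by `vertical` instead of the employee-count CASE gives the industry view discussed below.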
This chart, which looks at ARR alone, reaffirms some existing thinking we had about the perceived value we’re offering our enterprise customers. Our largest ARR deals have been with companies of more than a thousand employees. This makes sense, as they are certainly capable of paying more.
Looking at the other dimensions, like close rate, furthers our confidence that the pains of companies in the 51–200 employee range align with our sales messaging. Understanding these other two dimensions helps us allocate our sales resources to nudge along only the opportunities with the highest likelihood of closing, or to chase the deals that we know are easily repeatable and predictable.
Through this lens, the sweet spot for closing deals is in the 51 to 200 employee range: that’s where we have the highest close rate at 64% and the shortest average sales cycle of 41 days. Anecdotes from our sales team confirm that these companies both have the financing available and do not want to build and maintain their own in-house analytics infrastructure.
Similarly, we conducted the same analysis, but this time looking at the industry/vertical of the opportunities, using Salesforce’s industry property.
We see that ecommerce and advertising companies both have high average deal sizes and pretty strong close rates. To bring this sort of success to the other verticals, or even improve our sales success there, it’s important we understand their use cases for Segment. Maybe there’s a particular use case among ecommerce clients that makes Segment especially valuable to them. If so, we can create content to grease the wheels with future ecommerce companies. Or we can draw parallels and try to identify the pains of other verticals to better align our messaging toward them.
Once we have learned what kinds of opportunities we want to chase after, we want to know how to source more of them.
There are three main approaches:
Salesforce campaigns: These are created with the marketer in mind. We can set up these campaigns in Salesforce to manage everything from lead generation to campaign performance. The only downside is that they have to be set up in advance to work properly, so we can’t retroactively analyze the performance of campaigns.
Source field on Lead objects: Similar to Salesforce campaigns, this is a simple way to provide high-level attribution to leads. This field is typically auto-generated when the lead is created, defaulting to “web” (since most leads are web-to-lead). But we set that field whenever we upload a list of leads.
Segment UTM params or referrer domain: Since we’re using Segment, we can tie product and page view data in with Salesforce, which allows us to leverage UTM params from our marketing campaigns as a source of attribution for Salesforce qualified leads and opportunities.
If we’re just looking at qualified leads and their sources:
Since the majority of our leads are inbound (our BDRs reach out to our registered users), it makes sense that the lead sources are mostly “web”. However, that doesn’t help us out too much if we want to know where on the web these leads are coming from.
We can try JOINing the salesforce.leads table with Segment pages to see which referrers are most common:
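A sketch of that join, with made-up table names and a toy schema (real Segment tables join through identity resolution rather than a bare email column), counting referring domains for qualified leads:

```python
import sqlite3
from collections import Counter
from urllib.parse import urlparse

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Toy stand-ins for the salesforce.leads and Segment pages tables.
CREATE TABLE leads (id TEXT, email TEXT, status TEXT);
CREATE TABLE pages (user_email TEXT, referrer TEXT);
INSERT INTO leads VALUES ('l1', 'a@x.com', 'Qualified'),
                         ('l2', 'b@y.com', 'Qualified');
INSERT INTO pages VALUES
    ('a@x.com', 'https://www.google.com/search?q=segment'),
    ('a@x.com', 'https://twitter.com/segment/status/1'),
    ('b@y.com', '');   -- empty referrer: direct visit, app, or HTTPS source
""")

# Count referring domains across page views belonging to qualified leads.
domains = Counter()
for (referrer,) in conn.execute("""
    SELECT p.referrer
    FROM leads l JOIN pages p ON p.user_email = l.email
    WHERE l.status = 'Qualified'
"""):
    domains[urlparse(referrer).netloc or 'Unknown'] += 1

print(domains)
```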
While this is better than the previous analysis, we’re still getting Unknown for the majority of the referring domains. Unfortunately, there are many reasons a referring domain shows up as Unknown, most notably clicking a link from outside a web browser (such as from a mobile app) or clicking from a domain served over HTTPS.
From the information we do know, we can continue to optimize for the channels that work for us: Search, Twitter, and our own content (which get onto Hacker News/Twitter and get indexed on Google).
Ah, the age-old question about our sales funnel. There are always areas for improvement, so we like to be critical about our own performance as a team. Though we typically track pipeline with Salesforce’s out-of-the-box reporting, we measure overall funnel performance in SQL, since Salesforce only tracks the last stage change rather than the time of each stage change, making it really hard to get the complete picture of all deals.
We have six opportunity stages: Qualified Primary, Qualified Secondary, POC, Business Alignment, Legal, and Closed Won / Closed Lost / Nurture. Each stage occurs after any previous stage, except that at any point, the opportunity can be labeled as “Closed Lost” or “Nurture” when the deal falls through.
To help us holistically understand which stage sees the most problems, we look at conversion rates by stage. We also group by company size to see if that bears any influence on conversions.
Here is a chart generated by a Mode query that shows the “Conversion Rate” based on opportunity stage and grouped by employee size. We grouped by 1–70 (“small”), 71–500 (“mid-sized”), and greater than 500 (“large”) for simplicity.
Looks like Qualified Primary (the first stage) has the largest drop off of any stage. Also, for large companies, Qualified Secondary also has a significantly larger drop off than the other two company size categories.
Another notable insight is that the Legal to Closed Won conversion is lower for small and mid-sized companies. This is strange, since the Legal stage should really just be dotting the i’s and crossing the t’s, as the prospect should already be bought into the deal.
To figure out how to fix these leaky areas of our sales funnel, we’ll have to look at each lost deal and figure out why they were lost.
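Because Salesforce only records an opportunity’s latest stage, a conversion table like the one above has to be assembled from stage history. A minimal sketch of the calculation, assuming a hypothetical stage_history table (e.g. built from Salesforce field history):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Hypothetical stage-history table: one row per stage an opportunity reached.
CREATE TABLE stage_history (opportunity_id TEXT, stage TEXT);
INSERT INTO stage_history VALUES
    ('o1', 'Qualified Primary'), ('o1', 'Qualified Secondary'), ('o1', 'POC'),
    ('o2', 'Qualified Primary'), ('o2', 'Qualified Secondary'),
    ('o3', 'Qualified Primary');
""")

STAGES = ['Qualified Primary', 'Qualified Secondary', 'POC',
          'Business Alignment', 'Legal', 'Closed Won']

# How many distinct opportunities ever reached each stage.
counts = {
    stage: conn.execute(
        "SELECT COUNT(DISTINCT opportunity_id) FROM stage_history WHERE stage = ?",
        (stage,),
    ).fetchone()[0]
    for stage in STAGES
}

# Conversion rate per transition: opportunities reaching stage N+1
# divided by opportunities reaching stage N.
conversion = {}
for prev, nxt in zip(STAGES, STAGES[1:]):
    if counts[prev]:
        conversion[f"{prev} -> {nxt}"] = counts[nxt] / counts[prev]

print(conversion)
```

The same per-transition ratios, grouped by employee-count bucket, produce the chart described above.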
Buyers these days are more sophisticated than ever, opting to self-educate with content and their own research rather than speak to sales. As such, aligning content marketing strategies with the common objections and questions from sales is critical to sales efficacy.
But how does marketing know what content to produce to shorten the sales cycle? Which pieces of content today are helping deals progress through the sales cycle?
There are two approaches to answering this question.
The best approach requires some initial setup and ongoing maintenance in your Salesforce, but the downstream analysis is straightforward. You would need to create a custom Salesforce object called collateral. For each piece of collateral that your sales team likes to send prospects, create a record in Salesforce. Then, whenever a sales rep sends out a piece of content, he or she can attach that collateral object to the opportunity. (Note that there are services out there, such as ClearSlide, that solve this problem.)
Unfortunately, this approach doesn’t account for two issues: 1. prospects self-educating—that is, going to our home page, looking at our blog posts, or reading our case studies; and 2. sales reps having to manually update opportunities with the appropriate collateral. The first problem is especially prevalent at Segment, since our audience is extremely tech-savvy and likes to check us out extensively before talking to us. But since we can JOIN Salesforce data with Segment page views, we can create a table that gives us an idea of what content converts best at which stage.
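A toy version of that table, with hypothetical path conventions and an email join key standing in for the real identity resolution:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Toy stand-ins for Segment page views and Salesforce opportunities,
-- joined on email for simplicity.
CREATE TABLE pages (email TEXT, path TEXT);
CREATE TABLE opportunities (email TEXT, stage TEXT);
INSERT INTO pages VALUES
    ('a@x.com', '/case-studies/acme'), ('a@x.com', '/docs/sources'),
    ('b@y.com', '/case-studies/acme'), ('c@z.com', '/docs/sources');
INSERT INTO opportunities VALUES
    ('a@x.com', 'Closed Won'), ('b@y.com', 'Closed Lost'),
    ('c@z.com', 'Closed Won');
""")

# Close rate among prospects who viewed each content type.
rows = conn.execute("""
SELECT
    CASE WHEN p.path LIKE '/case-studies/%' THEN 'case study'
         WHEN p.path LIKE '/docs/%'         THEN 'docs' END AS content_type,
    ROUND(AVG(o.stage = 'Closed Won'), 2) AS close_rate,
    COUNT(DISTINCT p.email)               AS prospects
FROM pages p
JOIN opportunities o ON o.email = p.email
GROUP BY 1
ORDER BY 1
""").fetchall()
print(rows)
```

Adding the opportunity stage at the time of each page view (omitted here for brevity) breaks this down into the conversion-by-stage view discussed next.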
We can see that case studies have a large impact on demonstrating the value proposition of our enterprise offerings (Redshift), which help facilitate deals early on. However, the technical documentation helps convert later, probably a side effect of the prospect’s developers implementing Segment. From a sales perspective, it makes sense for a prospect who has added Segment to have a higher chance of understanding its value, thereby having a higher chance of ultimately converting.
While this analysis may reaffirm existing intuition held by our sales and marketing teams, it helps to revisit this occasionally to see how different content pieces are performing, as well as know how to coach the sales team about what types of content to send to prospects.
However, this analysis has one major limitation: we can only view the prospect based on anonymousId. We can improve the analysis by using IP ranges. Not only will we be able to get page views from prospects who aren’t logged in, but we’ll also capture page views from other employees at the prospects’ offices. We won’t dig into this here, but it can be achieved in SQL since we do collect IP addresses. Here is a cool resource that can guide you down this path.
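The idea can be sketched in a few lines. Everything here is hypothetical (the company names, CIDR ranges, and page views are made up); the point is simply matching an anonymous page view’s IP against known office IP ranges:

```python
import ipaddress

# Hypothetical mapping of prospect companies to their office IP ranges.
company_ranges = {
    'Acme Corp':  ipaddress.ip_network('203.0.113.0/24'),
    'Globex Inc': ipaddress.ip_network('198.51.100.0/25'),
}

# Anonymous page views (no login), keyed only by IP address.
page_views = [
    {'ip': '203.0.113.42', 'path': '/docs/sources'},
    {'ip': '198.51.100.7', 'path': '/pricing'},
    {'ip': '192.0.2.1',    'path': '/blog'},
]

def attribute(view):
    """Return the company whose IP range contains this page view's IP."""
    addr = ipaddress.ip_address(view['ip'])
    for company, network in company_ranges.items():
        if addr in network:
            return company
    return 'Unknown'

for view in page_views:
    print(view['path'], '->', attribute(view))
```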
Sales today starts far before the lead object is created in Salesforce. Prospects see a few display ads or read a few blog posts before even looking at your pricing page or talking to your sales team.
In order to stay ahead of the curve and be proactive about pursuing the right opportunities and optimizing the sales funnel, having data and analytics is critical. While a great deal of analysis can be done right from Salesforce’s out-of-the-box reporting, tying sales data to product usage or other customer touch points (email, SMS, etc.) requires significantly more work.
This post has been our exploration into using Salesforce Source data to help improve our sales team. Similarly, Trustpilot used our Salesforce Source for the speed and ease of analysis. Mesosphere also leveraged Salesforce and Zendesk data to assign a dollar cost to support actions, such as responding to a ticket, to help understand the value their support team provides.
Diana Smith on April 20th 2016
Do you have a single source of truth for your data? As our customer Ole Dallerup, VP of Engineering from Trustpilot says, “your data is useless if it’s not in one place.” The problem with disparate data sources is that your insights stop with the walls around your data — in your CRM, in your funnel analytics tool, in your email platform. How does one part of the customer experience influence another? What’s the root cause of a behavior? These are the kinds of questions Trustpilot is answering with Segment Sources.
Trustpilot provides an online community featuring reviews on more than 100,000 companies. Their reviews and ratings help consumers make smarter, more informed purchasing decisions. On the other side of their content marketplace, Trustpilot works with business listers to help build their audience and drive traffic to their sites.
Ole believes that, while reporting in tools like SendGrid and Salesforce is pretty good, the real issue is that you can’t join that data with your product behavioral data to gather deeper insights on your customers.
In this Q&A, Ole shares why bringing all of his data sources into a single warehouse is key to investigating anomalies in user behavior, building a better product, and providing exceptional customer service.
Why Salesforce, Zendesk, and SendGrid reporting tools are good, but unfortunately isolated
How tying CRM and email data with product data is the only way to conduct useful, investigative analysis
Top use cases for having his product, sales, support, and email data all in one place
Trustpilot’s philosophy on building vs. buying software services
Dive in below, or get the PDF.
Diana: Hey, Ole! Thanks for chatting with us today. We’d love to hear more about Trustpilot to set the stage here.
Ole: Thanks, Diana. Trustpilot is a review site for businesses. We believe that transparency through reviews helps both end customers and businesses have better experiences. We help customers make informed purchase decisions and empower businesses to use reviews to drive more sales.
Diana: As VP of Engineering, what are you focused on? What projects are you working on right now?
Ole: In short, I oversee all of our systems and infrastructure, and I lead the developers, testers, and data science teams globally. It’s important to me to build a strong culture that empowers the individual engineers as well as the teams to deliver and discover the best solutions using the latest technology. I set our engineering strategy, from API-first design to continuous development and microservices.
I also work closely with the business and product leadership teams to empower them with the data, tooling, and analytics they need to keep delivering awesome experiences.
Right now, I’m really excited about a project to pull together our data across every customer touchpoint using Segment Sources. We actually have a lot of customer data siloed in different tools like Salesforce, SendGrid, and Zendesk. We want to put it together in one place, so we have the data to answer any question we might have about our customers.
Diana: That’s awesome. Let’s talk a little bit about each source of data and what kind of analysis you’re running. How about Salesforce? What are you working on there?
Ole: Our team keeps all of the conversations and deals with business listers in Salesforce. We were doing a lot of our reporting in Salesforce, but honestly it’s just not that good. We had a bunch of operations and engineering folks working on cleaning up their dashboards into something really useful for the executive team.
However, when we found out we could just push the CRM data into our Redshift database, it made our lives a whole lot easier. Now, it only takes a quick refresh of a query to answer a question, compared to hours of poking around and finagling the data or asking sales ops to run a time-consuming custom report.
After turning on the Salesforce Source, the first thing we did was rebuild all of our reports and some new ones with Chartio on top of Redshift. This combo gives us more portability and opportunity to drill into spikes and valleys in the data than Salesforce would on its own.
However, the more visionary analysis we’re working on now is identifying what types of customers lead to the highest average contract value. We’re also investigating how different segments of customers behave all the way down the funnel, and how long it takes us to close deals across these segments. We’re querying Salesforce data alongside the product data we collect with Segment’s libraries to do that.
Diana: That’s awesome. But, why wouldn’t you just do all of this in Salesforce?
Ole: Well, you can’t really do most of this in Salesforce. You can get okay reporting in there if you’ve been working with the interface for years. But since I’m not in Salesforce all the time, I’d much rather run a two-minute query to find the contract value of customers in a specific segment or customers using our product in a specific way. Before, I had to hunt down Sales Ops folks and ask for a few hours of their time.
The real problem is not with Salesforce reports; it’s that I can’t merge my Salesforce data with the product data I’m collecting through Segment. And that’s where those fun analyses like close rates, etc., for particular customer segments come in.
What’s more exciting is an analysis we’re working on to uncover which behaviors in the product lead to bigger deals. Once we have this data, we’ll tweak our design to encourage those behaviors.
Diana: You also mentioned SendGrid and Zendesk data sources. What are you doing with that data?
Ole: Primarily we’re using the Zendesk data for internal KPIs. For example, we’re measuring how much time we actually spend on solving tickets, and the total volume of the tickets we solve.
We also compare these numbers to see how support tickets are increasing in comparison to the number of reviews and business listings. For one, it helps us project how many support folks we need to hire. As you could imagine, we don’t want to infinitely increase our support team in direct correlation with growth, so we’re also looking for ways to minimize the Review to Ticket ratio and become more efficient.
The other cool thing about having all of this data in SQL is for ad hoc analyses. If people on the support team have a hypothesis, it’s super easy for them to run a query and see if they were right.
Diana: What’s an example of one of those ideas?
Ole: Let’s say they have a feeling that a certain question is coming up over and over again, and that fixing the underlying problem would reduce ticket volume. They can run a quick query to get a sense of the number of tickets on that topic against the overall ticket volume.
If there are only actually ten tickets, they probably won’t do anything. But if there are 100 tickets, they might figure out a way to automate a response, write a help article, or push through a product change.
Diana: Neat! And are you tying together the Zendesk data with Salesforce?
Ole: We’re building a customer success dashboard in Chartio with product, Salesforce, and Zendesk data to give our account managers a view of the entire account. We want to see the stage of each account from the money, tickets, and product usage points of view.
So if I’m an account manager, and I know you’re using our products and have a good deal, but you also sent in a hundred tickets to our support team this week, we need to talk about that. It’s also just great context to check before sending an email to make sure nothing is on fire or find a great entry point into a relevant conversation.
Diana: Who’s looking at these kinds of reports on the Zendesk data? Is it the head of customer support? Individual support reps?
Ole: Right now, it’s the team manager, who is working very closely with the engineering and analytics teams to figure out what information each rep needs.
Mostly the technical analysts have the SQL skills to build these original reports, but once they are built, individual support reps and the team manager can use drag and drop capabilities within Chartio to drill down and find more information, and to replicate the reports for each account.
Diana: That’s awesome. You also mentioned SendGrid. How are you using SendGrid internally?
Ole: We’re heavy users of SendGrid. One of our products allows businesses with listings to email some of their customers asking for a review, and that’s all powered through SendGrid.
We send 10 to 15 million of these emails every month, so more insights are welcome! The reporting in SendGrid is not bad, but again, the problem is that I can’t join it with the rest of the data I have.
One of the big things we’re using Sources for is to identify the root cause of bounce behavior. In SendGrid, I get an overview of bounces, but I need to know if it was a particular customer, region, or even domain that was causing them. This is also really important for catching fraud; for example, some businesses send tons of spam email to get reviews. Obviously that hurts our brand and our IP status.
With the data in a more flexible format, I can join it with in-app behavior to isolate peaks and valleys in our data and find the real issues.
Diana: What would it look like if you were to build this kind of pipeline in house?
Ole: We’re a big buy-over-build firm. Our approach is generally to look for new technologies that can solve our problems. If we have to, we’ll build our own solutions, but we’d prefer not to unless it’s really core to our business. In Segment’s case, we were definitely looking to buy a product to manage data infrastructure, especially because my team doesn’t want to work on these kinds of piping and tracking projects. It would probably take at least a few weeks per source, plus an annoying amount of maintenance, to set this up ourselves.
We’ve already really enjoyed using Segment to send our website and mobile data to integrations like Mixpanel and Google Analytics. We started with integrating just a couple of analytics tools through Segment, but now, since it’s so easy to turn tracking tools on and off, I often browse the Segment catalog for new things to try.
I look at all the tools you bring in and see if it’s something we could benefit from. It’s completely worth it for me to take a day of my time to verify whether a new tool is actually useful for our business. I flip it on and play around with the data. If it stinks, no worries; we didn’t waste a bunch of time getting it integrated. If it’s helpful, well, that was easy. We’ll just start using it. The cost of the tool itself usually isn’t the issue; it’s the whole process of evaluating and instrumenting tools. Segment eliminates that for us.
Once we were using Segment Integrations, it made sense to use the same tracking code, with no additional work on our end, to start piping that data into a SQL database for more advanced analysis. We just turned it on and got much more granular access to our data.
And, now that you’re offering Sources, you’re solving a new, but related problem for us to get third-party cloud apps into the same data warehouse. Why would we build out something when you already have it, and we know it works?
Thanks to Ole for taking the time to share how Trustpilot is using different types of data sources to improve their product and customer service.