
Growth & Marketing


All Growth & Marketing articles

Geoffrey Keating on December 9th 2019

If there's one thing we've learned over the last half-decade, it's that companies obsess over building their tech stack and picking the right analytics and growth tools. Should we use Amplitude or Mixpanel? Redshift or BigQuery? Every tool category has hundreds of players whose feature sets will let you accomplish 95% of your use cases with ease.

We've now seen 15,000+ stack evolutions over 6+ years and we've learned one thing. It's not about picking the perfect tool. It's ALL about being in a position to quickly adapt your stack to changing business needs and the ever-changing martech ecosystem.

Over the next few months, we’re bringing you a series of real-life stories from those who have seen their stacks evolve in extraordinary ways. You’ll get a behind-the-scenes look into the growth of companies like Datadog, PagerDuty, and Gusto, as they share the how and why behind some of their key technology decisions.

This week, we talked to Frame.io’s VP of Growth & Analytics, Kyle Gesuelli. Founded in New York City in 2015, the video collaboration software company has grown to 120,000 monthly self-service and enterprise users. Their stack has seen similar growth. Since joining in 2017, Kyle has scaled its tech stack from a handful of tools to well over 100.

Fresh off their Series C announcement, Kyle sat down with us for a wide-ranging chat on:

  • The inflection points in their business that drive the adoption of new software

  • How he keeps so many different tools in sync

  • Why focused point solutions beat all-in-one software

  • What he looks for in new technology

Dive into the interview below.

How Frame’s stack evolved

Geoffrey: Kyle! Thanks for joining us. When you arrived at Frame, how did you determine the health of your tech stack?

Kyle: I arrived at Frame in 2017, when we were about 17 people. Back then, there was no one on the marketing team. There was no one on the data team. It was just a few engineers, designers, and customer support. 

As a result, our tech stack was pretty simple. We had probably 300,000 registered users and 30,000 monthly active users, but really, Mixpanel, Amplitude, Intercom, and a few other tools were all we needed.

Frame.io’s stack in 2017

Geoffrey: What was the first change you made, and why?

Kyle: When you come in as the first growth hire, your ability to build new experiences is pretty limited until you hire a team. You can only focus on the areas you can control, like communications both inside and outside the product, whether that's email, retargeting, or advertising. It’s more about bolstering existing areas as opposed to diving into new ones.

Firstly, someone gave us the smart idea to install Segment. I think it’s probably because we share Accel as an investor! We started pretty simple; it was implemented client-side on a few core events. At that stage, we certainly weren’t tracking everything across Frame.io.

The second thing I did was help implement Autopilot for more robust journeying capabilities. Our previous tool, Intercom, was great for customer support and in-app messaging, but we found it less useful for communicating things like product launches. I wanted to be able to create richer email content that aligned with our brand, which is very visual and design-heavy, and that wasn't something I could do in Intercom.

I also added Clearbit Enrichment to get a better sense of who our customers were, what their job titles were, etc. Then I connected all of the ad platforms – Facebook, Twitter, LinkedIn – so that we could use the key events we were logging in Segment as conversion values. This would tell the ad platforms that customers in these campaigns were doing X, Y, and Z and go find more customers who would exhibit the same behaviors. 

Geoffrey: What have been the most significant changes in your stack to date?

Kyle: As we’ve grown, we’ve seen a trend towards specialization, which has affected the makeup of our tech stack. We now require tools that go deeper than the ones we already have, which drives the adoption of new tools.

For example, we loved Intercom for support and documentation but wanted to connect our sales team with live visitors, so we adopted Drift. We liked Drift, but we needed a way to enrich anonymous IP addresses, so we adopted Clearbit Reveal.

Yes, there’s probably one product that could do all of this, but not in the depth or sophistication that we need for a business of our size.

Where Segment is really valuable is that it helps us discover and connect with all of these great point solutions.

For example, when we needed a tool to capture net promoter score, our first step was to look in the Segment catalog and see what tools were available there. What would be the easiest thing to quickly turn on in one click? That's how we found Promoter.io, which wasn't a standard integration in the catalog, but which you could set up easily via an HTTP source.
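An HTTP source here just means posting events to Segment's HTTP Tracking API yourself. As a rough sketch of what that involves (the write key, user ID, and event name below are all hypothetical), building such a request looks like this:

```python
import base64
import json

def build_track_request(write_key, user_id, event, properties):
    """Assemble the pieces of a Segment HTTP Tracking API `track` call.

    Returns the URL, headers, and JSON body; sending it is then just a
    plain HTTP POST (e.g. via urllib.request).
    """
    body = json.dumps({"userId": user_id, "event": event, "properties": properties})
    # Segment authenticates with HTTP Basic auth: the write key as the
    # username and an empty password.
    token = base64.b64encode(f"{write_key}:".encode()).decode()
    headers = {"Content-Type": "application/json", "Authorization": f"Basic {token}"}
    return {"url": "https://api.segment.io/v1/track", "headers": headers, "body": body}

# Hypothetical NPS event, like the ones a tool such as Promoter.io could feed in.
req = build_track_request("WRITE_KEY", "user_123", "NPS Survey Completed", {"score": 9})
```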

Frame.io’s stack in 2018

Geoffrey: With so many different tools and so many different data sources, does it become hard to keep track of all of these tools at once?

Kyle: When we bought Segment Personas, that changed things significantly for us. One of the issues of having Segment connected to a lot of marketing tools, in particular, CRMs, is that event properties get sent as user traits. 

This becomes hard when you’re tracking as much data as we are. I'd guess we're in the top 1% of Segment customers in terms of just how much data we're logging. It’s probably unusual for a company of our size, but we log everything, even every button click. We have about 700 different events that we log, each of which has its own set of unique properties. 

Personas enabled us to craft traits about our users and then send those cleaned traits out to all the tools so that they're all in sync. That got ratcheted up even further when Personas released SQL traits. That was the holy grail for me. I can now crunch statistics or traits about users from not only the data that Segment collects on them but from our own database of transactional data.

So I write queries in Segment Personas, connecting Segment data and transactional data to get the traits that I want. And then I federate those traits out to all of the tools that need them.

So if I want to know if a user is associated with an enterprise plan in Frame.io, I can craft that trait on my transactional data, understanding all the relationships a user has with multiple accounts. It will then give a true/false answer as to whether they are associated with an enterprise account.

Then I can send that data out to Intercom and Salesforce and Autopilot and Facebook, and I’ll be able to tailor communications and other experiences accordingly.
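Stripped of the SQL, the trait Kyle describes is essentially a membership check across all of a user's accounts. A minimal sketch in plain Python, with hypothetical data shapes standing in for the transactional tables:

```python
def is_enterprise_user(user_id, memberships, accounts):
    """Return True if the user belongs to at least one account on an
    enterprise plan.

    memberships: list of (user_id, account_id) pairs; a user can belong
    to multiple accounts.
    accounts: dict mapping account_id to its plan name.
    """
    return any(
        accounts.get(account_id) == "enterprise"
        for uid, account_id in memberships
        if uid == user_id
    )

# Illustrative data: u1 belongs to an enterprise account, u2 does not.
memberships = [("u1", "a1"), ("u1", "a2"), ("u2", "a2")]
accounts = {"a1": "enterprise", "a2": "free"}
```

In the Personas version, the same logic runs as a SQL query against the warehouse, and the resulting boolean trait is synced out to each downstream tool.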

Frame.io’s stack in 2019

Geoffrey: I feel like businesses are often sucked in by the latest and greatest marketing technology, and will just rip and replace older tools as opposed to iterating on what they currently have. How do you approach new technology at Frame?

Kyle: I don't think we've ripped out anything just yet. I think we're coming to that point today where our scale begets a change. Maybe you're getting too big for a certain system, or the company might have changed its pricing, and it just doesn’t make sense to use them moving forward.

But in general, our technology has all been additive, supplementing our existing technology with new functionality. For example, even though we use Intercom, we also brought Appcues in a few months ago. Appcues can do NPS, product tours, and in-app messaging. I’d probably spend twice as much as I need to if I tried to consolidate all of that into one tool. I think we’ll end up gradually moving towards consolidation, but we’re not necessarily at that scale yet.

The endless debate: all-in-one consolidation vs. best of breed

Geoffrey: Most marketing technology platforms fall into one of two categories. They’re either point solutions that specialize in solving a deep, defined subset of challenges, or suites that offer multiple tools and solve the “tool overload” challenge facing many marketing organizations. Do you prefer best-of-breed solutions or consolidation?

Kyle: It's a case-by-case decision. I would love to consolidate, but sometimes these all-in-one tools will get you 90% of the way there, then fail at that last 10%. For example, Intercom has certain features of Drift, and Drift has certain features of Intercom, but they don't necessarily do 100% of what the other one does. That’s why we have two, and we will continue to add more as time goes on.

In general, we’re big fans of point solutions that specialize in a specific area. For example, the impetus for Appcues was that we had spent four months building our homegrown product tour and realized we needed to be able to quickly iterate without lots of hardcore development work. That’s just not a good use of our time. 

And so we'll look for that point solution, but in discovering the point solution and trying to bring it on board, we’ll say “Oh, maybe there's an opportunity for consolidation here”. It very much depends. We're now a five-year-old company, with over 100 pieces of software used across the entire business. Segment is connected to 20 plus alone.

Stack changes are correlated with specialization

Geoffrey: Wow, so it sounds like you’ve moved from a tech stack of just a few tools to one of hundreds. Were there key inflection points that caused you to adopt new tools? Product launches? New hires?

Kyle: I think scale and specialization allow for more advanced tools to be introduced. When a company is small, with limited resources, it will typically purchase more user-friendly, approachable tools that the product team can get into and understand easily (Intercom, Mailchimp, Amplitude, etc.). As the company scales and gets more sophisticated, more complex and powerful tools are introduced (Salesforce Marketing Cloud, Autopilot, Looker).

For example, we recently moved off Amplitude. It was an incredible tool for lightweight analytics, where you didn't manipulate the underlying data. It was simple and out of the box – perfect for smaller companies. But when you start to build up more and more tracking in your platform, you really need the ability to manipulate the underlying data, so we replaced Amplitude with Looker two years ago. 

So there's a variety of reasons why a tool needs to change, but ultimately it boils down to three.

  1. You’ve scaled past it in terms of how many users it can hold in a functioning way, and it can't handle the volume of data and usage that you're throwing at it. 

  2. You've moved past it from a sophistication level. It was simple enough back in the day, but now that you have just more nuance, more complexity, you need something that can handle that complexity. 

  3. You're starting to focus on new areas, and your existing tools only do so much.

The final thing I’ll add is that you have to find tools that help you pull those all together. Fortunately, Segment is an incredible pipeline for us as well as a core analytics and tracking tool. Over the years, as we’ve brought on more and more tools, we bring a lot of data from these other sources into our warehouse via Segment.

So Intercom is sending us conversation data back into our warehouse via Segment. Stripe, the same thing. Salesforce, the same thing. Segment is the connective tissue of our tech stack.

How Frame.io invests and experiments with new technology 

Geoffrey: Are there any key criteria you look at when you’re evaluating a new technology? Is it cost, interoperability, scalability, all of the above?

Kyle: First and foremost, we ask ourselves: “What would it take for us to build this?”.

At the scale the company is at right now – we're 115 people – it usually doesn't make sense to devote our resources to building our own solution or hiring a new engineer to build it. I'd rather buy a tool that doesn't need a one-on-one every week than hire a person that does.

I have 10 direct reports, and can’t add more. So first and foremost, I need a tool that can do the job of a human, but that doesn’t require the same level of oversight.

The second is cost. Is the cost of the tool reasonable? There's a level at which you care about price and a level at which you don't. If a tool is in the <$20,000-a-year range, cost doesn't really factor too much into our decision making. It's when you get to $20,000–$50,000 a year that you start putting the tool in the context of your existing stack, and wondering whether it can become a consolidation of or replacement for other tools.

Then if you're talking >$80,000 a year, you have to do your due diligence and consider what the alternatives are and what the real value you’re getting out of it is. 

But often you're moving so fast that you just need to get the thing done. By spending more of your team's time evaluating the tools, you're wasting more resources than just buying the tool.

Geoffrey: What are some of the new technologies you’ve experimented with recently and have found most valuable?

Kyle: Well, the good thing is, between Segment and Zapier, almost anything is possible, especially for someone who doesn't write Python. 

But beyond that, I like to experiment with technologies and use cases that I can't build myself. That could be adopting a new customer interface that I can't recreate without an engineer, like Appcues. Or it's a technology I can't replicate. One of the things we're really hot on right now is Clearbit X, because they have some crazy matching technology in their backend that gets me a 50% match rate on B2B email addresses in my social tools. 

I just can't do that. That is a unique capability of the five years of work Clearbit put into that company. I'm not going to be able to recreate it. 

So I would say my unique superpower is understanding how to take data from different contexts and use software to stitch it together to create an experience for a user. 

Looking ahead to 2020

Geoffrey: As we wrap up 2019 and look ahead to the new year, what new technologies are you eyeing for the future?

Kyle: Right now we're looking for a better data pipeline. We're collecting so much data through Segment that we need a tool that ensures accessibility and real-time data for all parts of our customer communications, as well as in our analytics. 

The second is that we are shifting heavily towards becoming enterprise-grade software. With that comes a need for account-based marketing, prospecting, and coordinating relationships with companies alongside the sales team members who own those relationships.

This means making sure there are named accounts in Salesforce, which is connected to Drift, which is connected to Segment Personas. This ensures we have a 360-degree view of the named accounts we're going after and all the accounts that we have in the opportunity stage to sell to. 

What’s most exciting is that we have found tools that go beyond the company. Instead of account-based marketing, it's actually person-based marketing at those accounts. So we use a tool called Influ2, which does person-based marketing: it lets you monitor and control who sees and clicks your ads, by name, which helps us be super focused on enterprise decision-makers. So if I'm targeting somebody from Target, my ad can be shown to that specific person at Target, and the creative will include something about Target.

Then I'm getting data back into Salesforce, or back into my database, about when that company hit, or when that person saw 5, 10, or 50 impressions, or when they clicked one, two, or three times. Then I'm connecting that to Outreach.io sequences. So once I get enough impressions or engagement, I'm starting a drip-email campaign, then changing the onsite experiences based on all the impressions that we've gotten from that company, and the like. So rather than going for a broad solution, we're rolling our own, based on a bunch of different tools.

Thomas Gariel on December 3rd 2019

Recommendation engines are a key ingredient of e-commerce today. Pioneered by the likes of Amazon and Netflix (who went so far as to offer $1 million to anyone who could improve their engine by 10%), the ability to predict a customer’s needs, and provide proactive recommendations based on this understanding, is reshaping how businesses interact with their customers.

Many of us use these recommendation engines every day, but how do you actually go about building one?

Recommendation engines have traditionally required running complex data infrastructure to collect and centralize data across sources, and large in-house data science teams to train and build these models.

This wasn’t the case for the team at Norrøna, who built a complete recommendation platform, from data collection to serving machine learning predictions, in just six months. In this article, Thomas Gariel, Product Manager at Norrøna, shows us how recommendation engines are very much in reach, both for digital-first companies and bricks and mortar businesses alike.

As the leading brand of outdoor clothing in Scandinavia for more than 90 years, Norrøna is worlds away from being your traditional data science company. For most of that time, we’ve primarily been a wholesale business. As such, we had very little relationship with our end users.

That all changed in 2009 when we opened our flagship retail store in Oslo and launched our e-commerce store. Since then, we’ve gone from no customer-facing stores or online shop in 2009 to 22+ physical stores, generating approximately 50% of revenue from direct-to-consumer in 2019.

In a short space of time, we had to transition our business entirely. After decades of selling to thousands of wholesalers, we had to learn how to connect with millions of customers.

Now, our company has started to tackle the most critical challenge we’ve faced in the past ten years: transitioning from a wholesale business to a direct-to-consumer one.

To build or to buy a recommendation engine

The first decision we had to make for our recommendation engine was the age-old “build vs. buy” question. Since the beginning, our mission has been to produce high-end, performance-driven products. To accomplish that mission, it’s critical we focus on the integrity, innovation and, above all, the technical function of our products.

We are also proudly self-sufficient. From concept to creation, our in-house team of designers and craftsmen build everything themselves. We are one of the few brands that do internal R&D, prototyping, and material testing at our HQ. We believe that if we internalize critical parts of the value chain, we will achieve the highest level of quality.

Our rationale was: If we’ve always done things ourselves for our physical products, why not try to do the same for our digital ones?

Thankfully, this has been made easier by the global commoditization of technology both on the customer data side and on the machine intelligence side.

In the same way that software like Squarespace has allowed millions of people to build, design, and host websites without needing a developer, tools like Segment have allowed us to solve common customer data infrastructure problems without the need for a large and expensive staff.

This led us to the decision to build our own recommendation engine using Segment as our customer data platform and then to utilize the machine intelligence tools available on the Google Cloud Platform.

The stack we used to build our recommendation engine

We have a very small team working on data at Norrøna. Since our resources are limited, it’s important that technology is easy to use and can be adopted without significant engineering resources.

That’s where Segment came in.

Norrøna uses Segment for data collection and master data management. We use it for out-of-the-box and close-enough-to-real-time tracking of user interaction, both client-side and server-side. Segment then assigns an ID, either identifiable or anonymous depending on GDPR consent, to each customer.

Collecting all this customer data consistently using a standard schema couldn’t be easier with Segment. Sending super clean data and storing it in a cloud instance is accessible at the flip of a switch. Since a recommendation engine is only as good as the data behind it, this step was key.

With our clean datasets in place, we were ready to layer intelligence on top of it. We chose the Google Cloud Platform (GCP), which has all the essential building blocks for developing and deploying a scalable machine learning platform in the cloud.

GCP has a large ecosystem of clean, relatively intuitive tools that are easy to use, even for non-tech people. It’s all based around the concept of modularity, providing people with ready-made “bricks” that you can use to get up and running in no time.

To achieve a good compromise between speed, simplicity, cost control, and accuracy, we used BigQuery for storage, Data Studio to visualize that data, and Jenkins and App Engine to power the pipeline.

For simplicity’s sake, we used the pre-existing components that would deliver the most impact with the least effort possible.

The recommendation engine in action

Every season Norrøna has the challenge of helping users navigate 350 new products across 18 different collections. Since some of our products are quite specific, it can be hard for customers to find the perfect product for their needs.

For example, we have a ski touring collection. We have a backcountry skiing collection. We have an urban skiing collection. For a new user, it’s not always easy to understand the function of each product and each collection. To help fix this, we wanted to build a system that would be able, given one product in our inventory, to calculate the proximity to every other product in our inventory.

So, if I input product "A" into an API, the system can retrieve every single product in our catalog and a proximity score for each of those products.

Going back to our pipeline, we had the assumption that some events from the e-commerce library in Segment would be a good proxy for calculating product proximity – Product Clicked, Product Added to Cart, Product Added to Wishlist, and other completed actions.

The idea we landed on was pretty simple – collaborative filtering.

Collaborative filtering works by making a series of automatic predictions for one user (filtering) by looking at the preferences of many users (collaborating) and combining them into a ranked list of suggestions. Basically, if someone is “like” you in their browsing pattern, we recommend the items they have viewed that you have not.

We take certain events from Segment (Product Clicked, Product Added, Product Added to Wishlist) and load the past 30 days of data into BigQuery. (For us, that's approximately half a million observations.) Then every day, the algorithm outputs a list of every possible pair in our catalog with a proximity score associated with each pair, which we then expose through an API on each product page on norrøna.com.
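The article doesn't spell out the exact scoring, but the simplest collaborative-filtering version of this pairwise output is co-occurrence counting: the proximity of two products is the number of users who interacted with both. A stdlib sketch of that idea (event and product names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def proximity_scores(events):
    """Compute a co-occurrence proximity score for every product pair.

    events: iterable of (user_id, product_id) tuples, e.g. one row per
    Product Clicked / Product Added observation.

    The score here is simply the number of users who interacted with
    both products — one basic form of item-item collaborative filtering.
    """
    products_by_user = defaultdict(set)
    for user_id, product_id in events:
        products_by_user[user_id].add(product_id)

    scores = defaultdict(int)
    for products in products_by_user.values():
        # Sort so each unordered pair gets one canonical key.
        for a, b in combinations(sorted(products), 2):
            scores[(a, b)] += 1
    return dict(scores)

events = [
    ("u1", "jacket"), ("u1", "trousers"),
    ("u2", "jacket"), ("u2", "trousers"),
    ("u3", "jacket"), ("u3", "hat"),
]
```

In production this pair computation runs as a daily job over the BigQuery event tables rather than in application code.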

From manual to algorithmic recommendations

This led to a significant improvement in how people discovered new products at Norrøna.

Up until this point, we had assumed that to help people navigate our products and our collections, putting complementary products close to the main product was the best strategy. So, if I have a jacket, putting trousers, a base layer, or a mid-layer next to it would maximize the ability of people to discover other products.

This was all done manually. Each season, our e-commerce manager manually added a series of complementary products for each new product on our website. It was a huge time sink.

By using Segment and BigQuery, we could replace this manual work by taking the output of the algorithm and filtering it on specific products. What we found was that the algorithm was systematically recommending similar products, not the complementary products that we had been manually recommending for a couple of years.

Saving us a lot of manual overhead was positive in its own right, but what was more important was uncovering which system of recommendations had the most revenue impact.

So, we A/B tested both product pages and directed 50% of the traffic to version "A", and 50% to version "B".

The results were incredible. The algorithm recommendations beat the manual recommendations across the board.

Our recommendation engine was designed to solve one specific business challenge. But the beauty of the pipeline is that it is generic enough to fit a large number of these business challenges. With the infrastructure “backbone” in place, we’re using the recommendation engine to uncover all sorts of interesting new insights.

For example, as we thought more about it, the algorithmic output of product pairs and associated scores is basically a graph. The products are the nodes of the graph, and the scores are the edges. So we thought it would be fun to visualize the relationships within our entire catalog using Gephi.
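Gephi can import a weighted graph from a plain edge-list CSV with Source, Target, and Weight columns, so turning the pair/score output into something it can visualize is only a few lines (the pairs and scores below are made up for illustration):

```python
import csv
import io

def pairs_to_gephi_csv(scores):
    """Serialize {(product_a, product_b): score} into an edge-list CSV
    that Gephi can import (Source, Target, Weight columns)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Source", "Target", "Weight"])
    # Sort for a deterministic file; Gephi doesn't care about order.
    for (a, b), score in sorted(scores.items()):
        writer.writerow([a, b, score])
    return buf.getvalue()

csv_text = pairs_to_gephi_csv({("jacket", "trousers"): 42, ("jacket", "hat"): 7})
```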

This helped us find some interesting insights in our product catalog that may otherwise have remained hidden. For example, we found three big red clusters in the middle – our skiing products. This showed us that when it comes to choosing skiing products, people don’t care for the existing way we were recommending products. They browsed house collections (i.e., matching trousers, jackets, etc.) and were looking for similar products within those collections.

On the other hand, when it comes to lifestyle products, we saw a blue cluster on the side. People who are interested in those products tend to stay within that cluster and browse only the lifestyle collection. So someone interested in a flannel shirt might also be interested in a hat or a jacket from the lifestyle range.

All these patterns and interdependencies became much clearer once we started visualizing the data, and it had the added upside of helping us communicate results to stakeholders in an easy-to-understand way.

The immediate, short-term machine learning use cases have been strongly e-commerce focused, but we foresee that future use cases will be found in areas like product development, sustainability, demand prediction, and logistics. Now that we’ve seen just what’s possible with Segment and Google Cloud (and how easy it is), the ways in which we can use our recommendation system are limited only by our imagination.

Kevin Garcia on November 25th 2019

Healthy companies are fueled by healthy employees. Dialogue, a virtual healthcare platform, helps businesses ensure their employees stay healthy by offering them premium healthcare options online and via a mobile app. 

One of the secrets to their success has been using data to power their virtual healthcare. Behind the scenes, data scientist Jacob Frackson and his team are building the data pipelines and solutions that help Dialogue promote wellness and create care journeys.

As the company has grown, his team has helped solve two big problems with technology rather than human time: providing the best experience for patients and providing the best experience for the healthcare professionals helping those patients. Here’s how he built the data infrastructure to solve for both.

Allowing healthcare professionals to focus on care

Dialogue saves people time by helping them skip in-person consultations. In order to truly deliver on that promise, the customer support team wanted to make sure that healthcare professionals could focus their time with patients on actual care and not troubleshooting app or IT problems.

The product team identified a simple way to solve this issue: use a customer chat tool to help with non-medical or app-related issues. Once the project came to Jacob, he wanted to make sure that the chat tool could ingest the user ID, basic device information, and additional customer details in order to properly address their issue.

He made this possible by leveraging Segment with Intercom. He was able to bring user ID data into Segment and use it to inform the support rep with rich, accurate data while they were live-chatting on Intercom. With just a small amount of work, he was able to help offer real-time customer support for hundreds of tickets per week, and allow nurses and doctors to focus on medical care.

By early 2019, Dialogue had scaled operations to support over 300,000 lives. Even so, their support team was able to stay lean by leveraging technologies like Intercom and Segment to scale their processes and save them time.

Improving the caregiver and patient product experience

Over the last few years, Dialogue has hired and grown significantly. There are now over 30 developers (Jacob was the 2nd data team hire!), and the team has started shipping new products much faster.

With more products comes more complexity. Jacob now manages the data sources and pipelines for several areas, including their patient and caregiver apps. All of this added complexity has also made it more difficult for the product teams to know which features to keep, add, or modify. This meant that his team needed a solution to help the product team make data-informed decisions.

He set out to collect all product analytics in one central place to make it easier to bring together data and empower the product team with self-serve analytics. He did this by using Segment to collect all of the product data and connecting that data to their data visualization tool, Tableau.

Jacob’s view in Tableau

In Tableau, Jacob is able to report on the frequency of usage across different features. For example, each dot in the graph above is a feature plotted for frequency of use (y-axis) and adoption share (x-axis). This doesn’t just provide visibility for the team, it also helped inform critical decisions for the Dialogue team when it came to their caregiver app. 

Dialogue nurses can work with hundreds of patients per day—more than most emergency rooms in the world—so they need an app that helps them focus on patient engagement over administrative tasks. Because all of Dialogue’s product analytics flow through Segment, the product team was able to identify a big opportunity to improve the usability.

Nurses primarily use the app to navigate between patient profiles, but the product team had used a lot of valuable real estate to provide other functionality. They decided to test a new navigation UI (navigation_1) that focused on keeping patient status front of mind against their existing UI (navigation_2) to decide how to move forward. In the Tableau view above, you can see that navigation_1 had been used about 15K times while navigation_2 had been used more than 300K times in the same timeframe. In other words, with just 5% of the clicks, nurses were able to do the same workload with the new UI.

Dialogue caregiver app (January 2019 version)

The old navigation was not optimized for the right use cases and, as a result, only 70% of the care team used it. By contrast, the new navigation once fully rolled out was used by 99% of users—an almost 30% improvement—and often saved healthcare professionals multiple clicks in the process.

Staffing nurses in real-time

As mentioned before, Dialogue cares a lot about how long it takes for a patient to receive care or have an issue resolved. However, this success metric becomes difficult to hit when any onboarding customer might add hundreds to thousands of new patients to the app all at the same time. This reality led to busier and busier days, where short-staffed caregiver teams sometimes created longer wait times. 

Their original way of piping patient volume data to their database had a 12-hour latency. This meant that by the time they had the right data, it was too late to make any staffing adjustments. They replaced this solution by using Segment to track visit volume, ingesting the events via Amazon Kinesis data streams, and building real-time dashboards of patient visits in Amazon Athena with less than three minutes of delay.
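The aggregation behind a dashboard like this can be sketched as a rolling-window count of visit events. The class below is an illustrative stand-in, not Dialogue's actual Kinesis/Athena pipeline; the window size and event shape are assumptions:

```python
from collections import deque
from datetime import datetime, timedelta

class VisitWindow:
    """Rolling count of patient-visit events within a fixed time window,
    the kind of aggregation a near-real-time staffing dashboard could run."""

    def __init__(self, window_minutes=3):
        self.window = timedelta(minutes=window_minutes)
        self.events = deque()  # timestamps of recent visit events

    def record(self, ts):
        self.events.append(ts)
        self._evict(ts)

    def count(self, now):
        self._evict(now)
        return len(self.events)

    def _evict(self, now):
        # Drop events that have fallen outside the rolling window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()

w = VisitWindow(window_minutes=3)
start = datetime(2019, 1, 1, 9, 0)
for minute in (0, 1, 2, 5):
    w.record(start + timedelta(minutes=minute))
# At minute 5, only the events at minutes 2 and 5 are still in the window.
print(w.count(start + timedelta(minutes=5)))  # 2
```

A sudden jump in this count is the kind of signal that would let the team recruit extra on-call doctors before wait times grow.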

Their new real-time dashboard meant that Dialogue could quickly identify spikes in patient volumes and recruit extra doctors on-call to avoid any increase in wait times.

Powering care with data

By using data to power tools like Intercom and Amazon Kinesis, as well as data-informed decisions for the product team, Dialogue has been able to truly deliver on both a great caregiver and patient experience. They have:

  • Cared for 300K+ users with technology-enabled support processes

  • Helped the product team unify and self-serve product analytics

  • Improved caregiver app adoption by 30% using A/B test insights

  • Created real-time nurse staffing models with less than three-minute latency

With much more growth ahead, Jacob is excited to continue to tackle the data complexities that come with it and deliver world-class experiences to their customers.

Doug Roberge on November 13th 2019

As Ben Clarke at Fast Company said, “These days, the true test of how innovative a company can be is how well it experiments.” By that standard (and many others), Imperfect Foods is one of the most innovative companies out there.

Founded in 2015 with a mission to reduce food waste and build a better food system for everyone, they offer imperfect (yet delicious) produce, affordable pantry items, and quality meat and dairy on a weekly subscription basis via their website. With over 200,000 subscribers across 25 cities, they’ve saved 80M pounds of imperfect food from being thrown away.

Patti Chan, who leads the digital product department, is an avid supporter of experimentation and experiment-driven product development. However, with a small team, she was challenged with experimenting often while still keeping up with the day-to-day demands of the business. 

Companies like Netflix have teams of over 300 people running experiments, while her team of six engineers, two QA engineers, a designer, and a PM was responsible for four products at Imperfect Foods.

Patti needed a scalable way to run experiments and measure results without stretching her team too thin. That’s why her team implemented a data infrastructure that made their testing dreams a reality.

Here’s the experimentation infrastructure Imperfect Foods uses:

  • Collect user event data from the Imperfect Foods website via Segment.

  • Send user event data to Split.io and AB Tasty for experimentation.

  • Send user event data and test results to Snowflake for data warehousing.

  • Run queries and build reports in Mode Analytics and Amplitude.

Growing a culture of experimentation

As a small team with multiple responsibilities, it’s often difficult to make time for experimentation. Experiments require a lot of planning and the right technology to make sure you’re getting effective results from the experiments you do choose to run. But not experimenting at all can lead to countless missed opportunities.

Patti knew having the right experimentation framework and technology in place would be the best way to build a culture of experimentation, without overburdening her team. 

Here’s the process they landed on:

  1. Define your problem and hypothesis clearly. Know the question you want to answer and set up your test around that question.

  2. Pick a reliable leading measure to move. Don’t choose lagging measures because it will take too long to see results and gauge impact in the short term.

  3. Do things that are not scalable to start. You’ll get to your findings faster and can worry about scaling things like automation and admin tools later, when you know there’s value.

  4. Don’t stress over smaller sample sizes. Aim for a large sample size when you can, but don’t let a small one stop you from running the test.

  5. Choose bold ideas. You won’t see big gains without breaking new ground.

  6. Share your learnings broadly. This helps all departments benefit from the findings of each experiment and creates a culture that celebrates and prioritizes experimentation.
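To make the evaluation step concrete, here is a minimal sketch of how an experiment's result might be checked with a two-proportion z-test. The numbers and function are illustrative; in practice, tools like Split.io and AB Tasty do this work for the team:

```python
from math import sqrt, erf

def conversion_lift(control_conv, control_n, variant_conv, variant_n):
    """Two-proportion z-test sketch for an A/B experiment.
    Returns (absolute lift, two-sided p-value)."""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p2 - p1, p_value

# Hypothetical result: 12% baseline conversion vs. 16% in the variant.
lift, p = conversion_lift(control_conv=120, control_n=1000,
                          variant_conv=160, variant_n=1000)
print(round(lift, 3), p < 0.05)  # 0.04 True
```

If the p-value clears your significance bar, the lift is unlikely to be noise; if not, the result is directional at best and worth retesting with more traffic.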

Of course, the right process will naturally fall short without the right technology to support it. So, Patti and the Imperfect Foods team implemented a stack that helped them offload countless hours of work. They rely on Segment as their customer data infrastructure to collect and deliver the customer data needed to run tests and evaluate results. In addition, they use Split.io and AB Tasty to reduce the amount of work required to change their UX and route traffic to the right experiments.

“Segment is the glue that holds our experimentation infrastructure together.” – Patti Chan, VP of Product @ Imperfect Foods

With the right experimentation process and data infrastructure in place, Patti and the team can run a test per week with the equivalent of less than five dedicated team members.

Enhancing the customer experience for big retention gains

While not all tests generate positive results — in fact, according to Patti’s estimates, as many as 50% fail — when you do get a winner it can have a big impact on the business.

Patti’s team came up with an idea when they were brainstorming ways to improve the customer experience at Imperfect Foods. They had a hypothesis that allowing customers to select foods they didn’t want in their monthly box would improve customer satisfaction and loyalty. Despite it being complicated logistically — this feature could lead to thousands of custom boxes and substitutions — they wanted to test it to see if the idea was viable.

They built the functionality in about three weeks, tested it, selected their target market (Los Angeles), launched the feature to 50% of their LA subscribers, and started waiting anxiously for the results. The team was stunned when the results came in. Despite low adoption in the first iteration, customers that used the new feature were 21% more likely to be retained than users who did not have access. 

It was clear that this was something worth investing in further and rolling out to the rest of Imperfect Foods’ customer base.

Applying experimentation to internal processes, too

Experimentation is often incorrectly associated only with changing customer-facing UI. However, experiments are also an opportunity to improve operational efficiency. One example is how Patti and her team worked with the customer care team at Imperfect Foods.

Before July, the customer care team didn’t have quick access to critical information that could help them diagnose customer problems, like the delivery status of an order. Patti and the team hypothesized that getting this information to their care team members in real-time with fewer clicks during their calls would help drive customer satisfaction and reduce the time to resolution for those support calls.

Patti and team built a feature for their customer care associates that exposed delivery details like if an order was out for delivery, if the delivery was marked as completed, if there was photo confirmation, and so on. Similar to their external experiments, the team rolled it out to a select few support team members and compared their results with the rest. Yet again, her team struck gold! They managed to reduce support call times for the category by 10%.

Harvesting the right ideas for your business

For Patti and the team at Imperfect Foods, experimentation allowed them to explore more ideas and ultimately build a better product for their users. All-in-all, the results her team shared speak for themselves:

  • Comprehensive experimentation framework and tech stack implemented

  • 22 experiments run in 6 months

  • 21% increase in retention (for Los Angeles test group)

  • 10% reduction in time to resolve a customer status inquiry

One great idea has the potential to change your entire business. To get to that great idea, you’re going to need to plow through quite a few duds. It’s not easy, and not every experiment delivers double-digit metric gains, but building a culture of experimentation within your organization will always prove fruitful.

“The experimentation process can be disappointing and humbling at times. Do it anyway! The confidence we get from knowing that our solution not only fits a spec but solves for a real customer need is invaluable.” – Patti Chan, VP of Product @ Imperfect Foods

Nicole Nearhood, Olivia Buono on October 21st 2019

As any sales team knows, building proposals can be a tedious, painful chore. So in 2013, Nova Scotia startup Proposify set out to revolutionize the entire proposal process, from creation to close and every deal-making moment in between.

In 2017, Proposify began to see a significant uptick in growth. While this was a net positive for the business, rapid expansion brought a whole host of problems for their sales team.

Specifically, Proposify struggled with turning inbound leads into paying customers. Inbound interest was outpacing their small team’s ability to execute. Max Werner, Proposify’s marketing operations and analytics specialist, decided it was time to figure out how to improve the conversion rate without increasing headcount. 

He identified three major opportunities:

Scoring leads to focus on high-value conversations

Firstly, Max identified that Proposify’s lean and nimble sales team had no way to tell which prospects were the best fit and where to focus their energy. Some visitors were genuinely interested in buying, but others were just kicking the tires. They didn’t have the resources to give all visitors the same level of service, so they needed a way to prioritize their leads.

Enabling all teams to use their preferred tools with little engineering work

Second, with a major new product release on the horizon, Max worried he did not have the proper tooling in place to quickly and reliably add tracking to the app. Various departments were using different systems for analytics, all of which needed to have customer information from the launch.

This meant that the development team needed to handle multiple APIs and maintain numerous integrations. Because integrations were done at different times to different tools, Proposify lacked data parity across its various systems.

Empowering support and success teams to get deeper insights

Third, Proposify’s customer success and support teams were crying out for a solution to gather deeper engagement and utilization analytics. To provide personal, helpful experiences, they needed a better understanding of their customers, which was all but impossible without clean and reliable data. The support and success teams could have enlisted their engineers to get the data they wanted, but Max didn’t want to burden a development team already strapped for time building product features.

“We've always strived for data-driven decision making, but without proper data, it was hard to do. Our sales team was prospecting every trial user we had coming in. Marketing had a hard time keeping track of churn. Support had a hard time reporting on SLAs.” - Max Werner, Marketing Operations and Analytics Specialist, Proposify

Enter Segment

To streamline this process and bring scale to teams across the organization, Proposify chose Segment as the backbone for its customer data.

Proposify’s development team just needed to add Segment to identify, group, and track events during product development. From there, Max and the marketing team could easily connect various destinations like Marketo, Salesforce, Intercom, and more.
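The three call types mentioned (identify, group, and track) have roughly the shape below. This snippet records them with an in-memory stand-in so it is self-contained; the user IDs, traits, and event names are invented for illustration, not Proposify's actual tracking plan:

```python
# A minimal in-memory stand-in for Segment's identify/group/track calls.
# The real SDK sends these to Segment; here we just record them.
calls = []

def identify(user_id, traits):
    # Attach traits (email, plan, etc.) to a known user.
    calls.append(("identify", user_id, traits))

def group(user_id, group_id, traits):
    # Associate the user with an account/company and its traits.
    calls.append(("group", group_id, traits))

def track(user_id, event, properties=None):
    # Record a discrete action the user took in the product.
    calls.append(("track", event, properties or {}))

# Instrumenting a hypothetical signup-and-first-proposal flow:
identify("user_42", {"email": "ada@example.com", "plan": "trial"})
group("user_42", "acct_7", {"name": "Example Co", "employees": 12})
track("user_42", "Proposal Created", {"template": "web-design"})

print([c[0] for c in calls])  # ['identify', 'group', 'track']
```

Once events flow through calls like these, every downstream destination receives the same data in the same shape, which is what makes the one-time instrumentation effort pay off.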

Here’s just a taste of the integrations set up by each team:

  • Support and Success: Intercom, Gainsight

  • Marketing: Marketo, Visual Website Optimizer, Google Tag Manager, Clearbit

  • Sales: Salesforce

  • Product: Heap

  • Operations: ChartMogul, Amazon Redshift, Recurly

Now that Max had a better handle on his data, he could start tackling the challenges impeding Proposify’s growth. 

“The best part of using Segment for data collection is definitely that we fight with our product team and project managers a lot less. Adding or extending Segment tracking is easy and is instantly available for all downstream destinations. (thanks, Segment debugger!).” 

A new lead scoring model using Segment and Clearbit 

Onboarding questions can help you triangulate the value of a prospect, but they don’t give you all the information you need to complete your qualification. Plus, the more questions you ask, the further your conversion rate will fall. Max wanted to create a more sophisticated model.

  • First, he added in customer behavioral data (i.e. how many times a user performs a certain interaction; the last time a user performed a behavior). 

  • Second, he also enriched each lead with firmographic data from Clearbit, such as information about a company’s funding, tool stack, and industry. 

Using these inputs, Proposify generated a new lead scoring model and piped it back into Marketo through Segment. As a result, the sales team could more quickly disqualify leads with incomplete profiles and low scores. 
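A toy version of such a model might combine the two kinds of inputs like this. The weights, thresholds, and field names are assumptions for illustration, not Proposify's actual scoring logic:

```python
def score_lead(behavior, firmographics):
    """Toy lead score combining behavioral signals with firmographic
    enrichment, in the spirit of the Segment + Clearbit model above."""
    score = 0
    # Behavioral signals: recent, repeated product engagement.
    score += min(behavior.get("proposals_created", 0), 10) * 5
    if behavior.get("days_since_last_login", 99) <= 7:
        score += 15
    # Firmographic signals from enrichment (company size, industry).
    if firmographics.get("employees", 0) >= 10:
        score += 20
    if firmographics.get("industry") in {"agency", "consulting"}:
        score += 10
    return score

hot = score_lead({"proposals_created": 4, "days_since_last_login": 2},
                 {"employees": 25, "industry": "agency"})
cold = score_lead({"proposals_created": 0, "days_since_last_login": 30}, {})
print(hot, cold)  # 65 0
```

A score like this, synced back into Marketo, is what lets reps skim past incomplete, low-score profiles and spend their time on the leads most likely to close.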

Real-time data, without the engineering work

Proposify’s development team uses the Segment SDKs to add user/company traits or track events as the team develops and refines features. Now that Segment is implemented, Proposify is confident that any tools connected through Segment are piped the same data in the same format in real-time.

The time and effort saved through standardizing tracking against one API also helps Proposify iterate, extend, and improve upon its tracking significantly faster than was possible before; and the team doesn't have to worry about random APIs deprecating. 

Thanks to this:

  • The product team can use customer traits inside Heap to segment its user base more effectively to evaluate how features of the app impact conversion. 

  • The product team connects customer behavior data via Segment to find out the optimal number of proposal pages and which pages get the most visibility. 

Self-serve analytics for success and support

Additionally, Proposify’s sales and support teams benefit from better customer data and a streamlined process for their downstream tools. With the Intercom and Gainsight integration via Segment, Proposify’s Customer Success team can self-serve customer health information. 

With all customer info in one location (Intercom), the success team can provide quick support without having to dig around for details about each customer. Due to their fast and accurate service, the Success team has been able to maintain a negative net MRR churn almost every month.  

A stable data infrastructure to help future growth

Since implementing Segment, here are just a few of the results Proposify have seen so far:

  • With Proposify’s new data infrastructure, the sales team has increased the size of its sales pipeline and velocity by 152% and 312% respectively.

  • This directly improves the company’s ability to scale, ensuring data parity across its various systems to effectively turn interested prospects into happy, paying customers.

  • Knowing more about the app-usage of high-value customers, customer success has managed to maintain a negative net MRR churn almost every month.

  • In addition, the average data preparation for a Gainsight implementation is three months. Segment enabled Proposify to do it in just one month.    

Thanks to their work, the Proposify sales team can now spend its time talking to the most valuable prospects, and ensure they turn into long-term, successful customers once they convert.

Doug Roberge on October 7th 2019

If you build it, they will come. While maybe true for amusement parks or $5 all-you-can-eat buffets, this adage does not apply to new software features. 

A lot goes into building, designing, and marketing a new feature in your app. If one piece of the equation fails, your stellar feature could quickly turn into a dud. 

Rahul Jain, longtime PM at experience optimization platform VWO, understands this better than most. He has rolled out countless features during his 5-year tenure and knows first-hand what makes some products more successful than others. But, it wasn’t always so easy.

In this story, we’ll share how Rahul used analytics to build a data-driven organization, improve product adoption by up to 15x, and prevent churn.

Building a data-driven culture

As with any SaaS platform, VWO is in a steady state of change. They build new features, get customer feedback, and then, naturally, build some more. Since Rahul joined the team, VWO has evolved from an A/B testing platform to a complete experience optimization platform that offers deep insights (like funnels, session recordings, and heatmaps) and push messaging, among a host of other things.

VWO product offering, September 2014

VWO product offering, September 2019

At first, VWO’s product analytics were less than stellar. Rahul was only able to track page views. Most of the actions users were taking in the app weren’t being tracked anywhere. For example, they weren’t tracking key user actions in the setup flow like URL selected and Audience created, which are critical in understanding product adoption and engagement. At that point, his team was only consistently tracking 10 events, which only covered one feature in the app.

“We knew that we couldn’t scale things on assumptions and intuition. We needed to be data-driven and let the data speak for us when it came time to manage stakeholders and make product decisions.”

-Rahul Jain

Rahul needed a solution that would work with his existing architecture, require limited engineering resources (they had a product to build!), and provide granular product analytics. He did this by using Segment to collect user behavioral data in the VWO app and send that data into their data warehouse, BigQuery. He then ran analysis and set up reports in their product analytics tool, PowerBI.

Improving activation rates for new features

With analytics in place, it became much simpler to get an accurate view of how well new features were performing. Instead of only seeing which pages a user engaged with in the app, Rahul and his team implemented analytics for every element of the product. For example, they could now track each step required for creating and analyzing an A/B test, such as segmentation and targeting.

Elements in the VWO segmentation setup flow that can now be tracked

For VWO’s product team, one of their key KPIs is product activation rate — how many users are using a new product and at what frequency. To keep a pulse on that, they set up reports for every feature. In an ideal world, all features would just naturally land in the top right corner of the chart below. However, it’s rarely that simple.

Adoption vs. frequency graph of a product feature

For the less successful features — the features that wound up in the bottom left of the chart — they could start exploring the reasons behind it. Are customers able to find the feature? Do customers know how to use the feature? Do customers need the feature? 

For example, Rahul and his team built and launched a new segmentation feature that allowed users to set up a behavioral analysis of visitors converting/not converting for a set goal. They added the feature to a dropdown menu where it seemed like a natural fit. But, it wasn’t being used! Naturally, his team wanted to understand why. They decided to send a survey to get some candid customer feedback on the feature. The results were surprising. They realized that it wasn’t that the feature wasn’t useful, it simply wasn’t discoverable!

Rahul and his team set out to fix it. They added a widget in the app which made the new feature more accessible to everyone. They also kicked off a marketing campaign via Appcues that highlighted the feature to users that hadn’t yet used it. This increased adoption from less than 1% to over 15%.

VWO app without the widget

VWO app with the new widget

Being proactive about churn 

Successful onboarding and ongoing usage is essential to retaining customers. With a baseline of product analytics in place that gave them clear insight into both these metrics, the success and growth teams at VWO were able to start getting ahead of churn risks.

Rahul started by giving more visibility to CSMs by setting up custom dashboards in PowerBI. Each dashboard had in-depth information about a customer’s current product adoption. That information was also reflected in the customer record stored in Salesforce. CSMs could quickly glance at their customer dashboards before getting on calls and give more insightful product recommendations. 

The growth team at VWO also uses this data to build engagement buckets (high, medium, low, critical), which are based on the frequency of important actions a user takes inside the VWO app over a period of time. Customers with low and critical scores are churn risks. When a user is at risk, they’re automatically entered into a personalized email flow designed to get them re-engaged. Because this information is also in Salesforce, sales and customer success can take the appropriate action as well.
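The bucketing logic can be sketched as a simple threshold function over action counts. The cutoffs below are illustrative assumptions, not VWO's actual thresholds:

```python
def engagement_bucket(actions_last_30_days):
    """Bucket a user by frequency of key in-app actions over a period,
    in the spirit of VWO's high/medium/low/critical scores."""
    if actions_last_30_days >= 50:
        return "high"
    if actions_last_30_days >= 20:
        return "medium"
    if actions_last_30_days >= 5:
        return "low"
    return "critical"

users = {"a": 80, "b": 25, "c": 7, "d": 1}
buckets = {uid: engagement_bucket(n) for uid, n in users.items()}
# Low and critical users are churn risks and enter the re-engagement flow.
churn_risks = [uid for uid, b in buckets.items() if b in ("low", "critical")]
print(buckets, churn_risks)
```

Writing the resulting bucket back to Salesforce is what lets sales and customer success act on the same signal without opening a dashboard.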

VWO’s data-driven approach to customer success and churn prediction has helped increase dollar retention rate (DRR) significantly.

Delivering better results for the business

The business impacts of this enhanced focus on product analytics are impressive:

  • Democratized access to detailed product analytics and adoption dashboards

  • Increased the number of events tracked from 10 to 1,000+, covering every possible customer action in the app

  • Improved adoption rates across all features, including a 15-percentage-point increase for the new segmentation feature

  • Significant increase in dollar retention rates

In addition to all of that, Rahul and his team managed to create a more data-driven product organization. They now have a deep understanding of their customers and can make better decisions as a company. Growth can focus on promoting the right features to the right customers. Customer success can take action on at-risk accounts before it’s too late. And, lastly, Rahul and his team can eliminate assumptions when it comes to prioritizing focus areas for the product.

Mark Hansen on October 3rd 2019

A few months ago, we shared an inspiring story from Mark Hansen, the co-founder of Upsolve and one of the first members of Segment’s Startup Program. In this post, Mark explains how Upsolve leverages Segment and SEO to drive thousands of high intent buyers to their product every month.

When you’re a cash-strapped nonprofit competing for attention against multi-billion dollar public companies, you have to focus on your mechanism for growth, and then become world-class at it.

At Upsolve, we help low-income families file bankruptcy for free, and during our time at Y Combinator, it became clear to us that search engine optimization (SEO) was going to be our most important growth channel.

The main mantra at Y Combinator has always been: “Make something people want.” But we also heard another mantra:

“Just because you built it, doesn’t mean they’ll come.”

We needed a cost-effective, scalable way to bring people to our service.

SEO could deliver that, but there was a problem: we couldn’t compete in traditional ways. We didn’t have the resources to hire dozens of writers or marketers, so we had to think of creative ways to supercharge our content creation and digital marketing.

There were two ways we could tackle SEO: Editorial SEO and Programmatic SEO. Both methods use SEO optimization (such as targeted title tags, headers, and subtopics) to drive organic traffic, but they're otherwise very different (though they work best in tandem and complement one another when done correctly). First, let’s take a closer look at these two different methods.

Programmatic SEO vs. Editorial SEO: What’s the Difference?

The main difference between programmatic and editorial SEO is that programmatic SEO (as the name suggests) is driven by automation and is easier to produce at a large scale, while editorial SEO is more time-intensive and requires more detailed manual work. 

What is programmatic SEO?

Programmatic SEO is the practice of using automatically generated or user-generated content to create landing pages at a large scale that target high-volume search queries with transactional intent (with Pinterest and Zillow as the canonical examples). 

What is editorial SEO? 

Editorial SEO is the practice of creating high-quality, editorial, long-form landing pages focused on topics related to your audience. While also driven by keyword research like programmatic SEO, editorial SEO focuses more on creating quality content. Hubspot is the canonical example here.

We started with the editorial approach and created long-form landing pages that spoke to the most common questions people had when considering filing for bankruptcy.

A guide on rebuilding credit after bankruptcy, an example of our editorial landing pages

But after a few weeks in YC, we complemented those with programmatic, locality-specific landing pages. We couldn’t compete on keywords with high search volume like “filing bankruptcy online,” so long-tail keywords with less competition provided us with a great way to make our mark in SERPs.

The first iteration of these was a New York bankruptcy guide, after which we rolled out similar pages for other states and smaller localities (a bankruptcy guide for Brooklyn, for example).

A bankruptcy guide for Brooklyn, an example of our programmatic landing pages
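Programmatic pages like these are typically produced by filling one template per locality. A minimal sketch, where the template text, localities, and slug rule are invented for illustration rather than taken from Upsolve's site:

```python
# One template, many locality-specific landing pages.
TEMPLATE = (
    "<h1>How to File Bankruptcy in {place}</h1>"
    "<p>A free guide to filing Chapter 7 bankruptcy in {place}.</p>"
)

def slugify(place):
    # Turn "New York" into "new-york" for use in a URL path.
    return place.lower().replace(" ", "-")

def build_pages(localities):
    # Map each locality's URL path to its rendered page body.
    return {f"/{slugify(place)}/": TEMPLATE.format(place=place)
            for place in localities}

pages = build_pages(["New York", "Brooklyn", "Lafayette"])
print(sorted(pages))  # ['/brooklyn/', '/lafayette/', '/new-york/']
```

The catch, as the team later learned, is that pure template output risks being flagged as duplicate content, which is why each generated page also needs enough unique, editorial-quality material to rank.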

We saw some early promise with these approaches, but we were still in the dark as to which landing pages were performing better.

Which pages were bringing in committed users that finished the signup process? And which pages were bringing in people who were kicking the tires? We needed the answers to prioritize what content we should create going forward.

The data that would help us answer this question was surprisingly hard to come by.

In the best-case scenario, the people I talked to would use content groups in Google Analytics. In the worst-case scenario, people had no idea how their content was performing. I couldn’t understand why people weren’t capturing meta-information about how people were interacting with their site, and then using that to help guide content creation. It seemed essential.

Eventually, I couldn’t wait any longer and had to try something that had been stuck in the back of my head for some time.

A few weeks earlier we had spoken with Gustaf Alströmer, a partner at Y Combinator. During one of our office hours, he discussed his time leading Growth at Airbnb. To measure the impact of their work, his team had tracked the first interaction someone had with the Airbnb site and the last interaction before they hit the signup flow.

Multi-touch attribution for a hypothetical user journey at Airbnb. Credit: Airbnb

I didn’t ask him to go deeper, but this first/last interaction concept painted a wonderful picture in my head. It sounded like the perfect way to measure the effectiveness of our various landing pages.

At this point, we already had event tracking up and running throughout our product and were using Segment to handle Google Analytics (and, at times, FullStory) on our website. As a solo developer, I always make sure we don’t add additional tools for the sake of it and tie ourselves up in complexity.

So why not look at what we could do with the tools we already had?

As I was looking through Segment's identity docs, I stumbled into something interesting – a way to save data to users pre-signup via traits. Between page calls and tracking calls, a user’s actions were already stitched together in Segment behind the scenes. Adding a few user traits to those calls would be huge.

With traits saved to anonymous users, all that was needed was a GET request to Segment’s Personas API. That meant we could pull their anonymous traits and store them in our new database of user records. Storing this information in a JSONB column made it easy to run analysis through Chartio and Postico and understand how our content was performing.

We started with the following set of traits. These were saved on each page redirection or transition a user made on upsolve.org:


{
  "lastInteraction": {
    "contentPath": "/la/lafayette/",
    "contentGroup": "cityPage",
    "contentTopics": [],
    "interactionAt": "2019-08-14T04:21:37.797Z"
  },
  "numInteractions": 5,
  "firstInteraction": {
    "contentPath": "/la/",
    "contentGroup": "statePage",
    "contentTopics": [],
    "interactionAt": "2019-08-09T02:07:13.302Z"
  }
}
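The update rule behind traits like these can be sketched as follows: firstInteraction is written once, lastInteraction is overwritten on every page transition, and a counter is incremented. This is an illustrative reconstruction, not Upsolve's actual code:

```python
from datetime import datetime, timezone

def record_interaction(traits, content_path, content_group, now=None):
    """Maintain first/last-interaction traits on each page transition."""
    now = now or datetime.now(timezone.utc)
    interaction = {
        "contentPath": content_path,
        "contentGroup": content_group,
        "contentTopics": [],
        "interactionAt": now.isoformat(),
    }
    # setdefault only writes firstInteraction if it isn't set yet;
    # lastInteraction is replaced every time.
    traits.setdefault("firstInteraction", interaction)
    traits["lastInteraction"] = interaction
    traits["numInteractions"] = traits.get("numInteractions", 0) + 1
    return traits

traits = {}
record_interaction(traits, "/la/", "statePage")
record_interaction(traits, "/la/lafayette/", "cityPage")
print(traits["numInteractions"],
      traits["firstInteraction"]["contentGroup"],
      traits["lastInteraction"]["contentGroup"])  # 2 statePage cityPage
```

Attaching these traits to page and track calls is what lets the first and last touch survive until signup, when the anonymous history can be pulled and joined to the new user record.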



This gave me two charts and two crucial insights.

Turning application code into programmatic content

Of the content we’d produced, only ~10% of our conversions were coming from our editorial articles while ~70% were coming from state and city page templates (created programmatically).

A breakdown of which landing page types were converting best in Personas

The data was even more surprising given the effort we were investing in each. We had four people working on editorial articles around the clock. Meanwhile, the city and state page templates were written once and dynamically generated with additional content from other data sources and our petition generating application code.

Based on the data we saw in Personas, we all quickly saw where our growth was coming from and devoted the time previously set aside for editorial toward improving the quality of our programmatic content. It’s been so successful, we’ve now created over 95,000 landing pages!

The time when my laptop kept running out of memory building our website

This mirrored recommendations we were getting from our SEO agency, who showed us that some of our programmatic content was marked as duplicate. When content is seen as duplicate, it eats up the search engine’s crawl budget and the algorithm struggles to understand which pages are best to serve, preventing these pages from ranking well.

This helped us understand that the key to success was a hybrid approach to SEO – programmatic content that was highly scalable, but with enough editorial value to avoid duplication.

Making the most of transactional intent

Segment also helped us understand what actions our visitors were taking on our landing pages. This may come as no surprise to others building landing pages, but we were surprised that the vast majority of our website visitors were not consuming multiple pieces of content during their visit.

A breakdown of how many pages were visited before conversion

Since our programmatic city pages had more clearly defined intent, visitors were converting from the first page they landed on. For example, if someone arrives at Upsolve having searched “Iowa Bankruptcy Forms,” they are much more ready to convert than if they searched “What happens to secured debt in bankruptcy?”

The data told us we needed to treat every page of our site like our home page, which drove a series of design changes on our landing pages.

We now have a large, bold call to action at the top of every page so visitors can convert right away.

Being exceptional at organic growth is the only way our team of 6 can compete with publicly traded companies willing to spend an incredible amount of money on paid ads. SEO continues to be our primary mechanism for growth today and is something we’ve continued to improve on.

Months ago, we were only getting the hang of Google Search Console. Now with the help of Segment’s infrastructure and deeper features, we’re able to easily grasp the impact and revenue each of the articles is bringing in. There’s no way we could have grown our bottom line impact or revenue to support the organization without it.

Shout out to the Segment team for supporting us in our experimentation with Personas, the GatsbyJS community for helping me get a 95,000-page build working, and everyone on the Upsolve team for coming together to make this growth possible – Andrea, Nicole, Rohan, and Tina.

Kevin Garcia on August 12th 2019

Twilio Segment Personas is now part of Segment’s Twilio Engage product offering.

If it takes more than one pizza to feed your whole sales team… then you know the struggles of interfacing with your CRM.

Data is inconsistent and poorly tagged. Instead of being able to quickly filter by the reasons you lost business (price, competitor, timing), you’re greeted by long strings of free-form text:

Worst of all, sometimes new key accounts go unnoticed until days or weeks after they’ve reached out for a demo. You’re losing business. Not because of your product, but because your growth machine isn’t working as well as it could.

This was the future facing Axelle Heems, who runs growth operations at Gorgias. Gorgias builds tools for ecommerce stores running on Shopify and Magento2. They help those stores automate their customer support and track their overall spend. 

Over the past year, Gorgias has seen a soaring number of prospects requesting demos. A blessing you say? Not when your company isn’t hiring account executives at the same pace. 

Here’s the story of how she turned their CRM from a manually updated database into a smart machine that runs their business. In this post, Axelle shares how she automated their CRM (HubSpot) to drive a 174% increase in sales.

Fail fast and pivot

Gorgias had one goal in mind: turn inbound prospects into paying customers.  Axelle’s first instinct was to automate some of her sales team’s in-person interactions with smaller prospects. If she could automate the sales experience for smaller customers, her team could focus their efforts on the bigger ones. 

But there was one problem… it didn’t work.

Customers that received a demo from an account executive closed 73% of the time. Those that received the automated sales experience closed only 30% of the time. It became clear that focusing on automating the sales experience was a dead end.

The growth team then switched from automating the customer experience to instead automating the sales process. If human interaction was critical to the sales experience, then Axelle and her team would help clear the path for account executives to focus their time on delivering those moments for prospects. 

Their new goal was both ambitious and unprecedented: to offer every prospect a demo.

Automating the sales process

Step 1: Lead Qualification

The Gorgias team wanted to automate away everything that stopped their AEs from getting in touch with a customer. So they started with lead qualification.

In most organizations, new signups are added directly to their CRM as leads. From there, a person manually looks through all the leads and “qualifies” them to understand how good of a fit they are, and then assigns them to a salesperson. This process might take anywhere from thirty minutes to 24 hours. 

But that leaves a massive problem: after 20-30 minutes, your lead has probably gone cold. So, Axelle and her team set out to solve that problem with software.

When a new user clicks the “book demo” button, JavaScript that Axelle added pre-qualifies the prospect before they create an account on Gorgias. Based on their answers in the demo form, the automated process redirects the prospect to the AE focused on their predicted value tier. Altogether, this means instant scheduling of a follow-up meeting using Calendly.
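As an illustration, routing by predicted tier could look something like the following sketch. The form fields, tier names, scoring thresholds, and Calendly links here are all hypothetical, not Gorgias’s actual setup:

```javascript
// Hypothetical sketch of demo-form routing. Field names, tiers, and
// thresholds are illustrative assumptions, not Gorgias's real logic.
function predictValueTier(formAnswers) {
  // Score the prospect from self-reported answers on the demo form.
  let score = 0;
  if (formAnswers.monthlyTickets > 1000) score += 2;
  if (formAnswers.platform === 'shopify') score += 1;
  if (formAnswers.teamSize > 5) score += 1;
  return score >= 3 ? 'enterprise' : score >= 1 ? 'mid-market' : 'self-serve';
}

// Map each tier to the Calendly link of the AE who owns that segment
// (placeholder URLs).
const calendlyByTier = {
  'enterprise': 'https://calendly.com/ae-enterprise/demo',
  'mid-market': 'https://calendly.com/ae-midmarket/demo',
  'self-serve': 'https://calendly.com/ae-selfserve/demo',
};

// Redirect target for a prospect, based on their predicted tier.
function routeToScheduler(formAnswers) {
  return calendlyByTier[predictValueTier(formAnswers)];
}
```

The point of a sketch like this is that the prospect never waits in a queue: the form answers alone decide which AE's calendar they see.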

For the prospects that create an account, the lead is also automatically qualified. Gorgias uses data collected from Clearbit and Datanyze, and routed through Segment, to qualify the lead as soon as they sign up. Clearbit pulls in company information based upon the user’s email, and Datanyze analyzes traffic patterns and technology on the user’s website. Each lead is then assigned a score that is used to match them to the right salesperson.
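A scoring function over enriched traits might look like this minimal sketch. The trait names, weights, and score bands are assumptions for illustration, not the model Gorgias actually uses:

```javascript
// Illustrative lead-scoring sketch. Trait names and thresholds are
// assumptions, not Gorgias's real scoring model.
function scoreLead(traits) {
  let score = 0;
  // Clearbit-style firmographics: larger companies score higher.
  if (traits.employees >= 50) score += 30;
  else if (traits.employees >= 10) score += 15;
  // Datanyze-style technographics: target platforms score higher.
  if (traits.platform === 'shopify' || traits.platform === 'magento2') score += 40;
  // Site traffic as a proxy for support ticket volume.
  if (traits.monthlyVisits >= 100000) score += 30;
  return score;
}

// Match the lead to a salesperson (or queue) based on the score band.
function assignOwner(traits) {
  const score = scoreLead(traits);
  if (score >= 70) return 'senior-ae';
  if (score >= 40) return 'ae';
  return 'self-serve-queue';
}
```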

Step 2: Deal updates

Once her growth team automated the initial deal creation, Axelle turned her attention toward the next biggest win: updating a deal in their CRM.

Looking across industries, we find that salespeople typically update records in their CRM 30-50 times per day. This means a lot of wasted time—it can take 30-60 seconds each time a salesperson updates the CRM—and the data is wildly inconsistent. So Gorgias decided to take their salespeople out of the equation. 

Axelle built a system where updates about usage would flow automatically from Stripe (payments), Gong (sales conversations), and Vitally (account health/usage). All of this data flows in and out of Segment.

Vitally is Gorgias’s source of truth for understanding customer engagement in their app and passes Stripe data to the rest of their stack dynamically. It provides account executives with important information like “whether the prospect had signed up for a free trial”, and “whether the user has added critical integrations like Gmail or Shopify”.

Here’s a look at Gorgias’ Vitally view, complete with the elements that influence their success metrics:

As the deal progresses, Gorgias uses the events flowing through Segment to create “account properties”. As these account properties update, the salesperson is able to know more about the customer journey in real-time.
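To make this concrete, here is a minimal sketch of folding a stream of Segment-style track events into account-level properties an AE could read in the CRM. The event and property names are assumptions, not Gorgias’s actual schema:

```javascript
// Illustrative sketch: roll product events up into account properties.
// Event and property names are hypothetical, not Gorgias's real schema.
function accountPropertiesFrom(events) {
  const props = { trial_started: false, integrations: [] };
  for (const e of events) {
    if (e.event === 'Trial Started') props.trial_started = true;
    if (e.event === 'Integration Added') props.integrations.push(e.properties.integration);
    if (e.event === 'Subscription Activated') props.plan = e.properties.plan;
  }
  return props;
}
```

As new events arrive, re-running a fold like this keeps the account record current without any manual data entry.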

On top of that, salespeople don’t even have to manually move the deal along the pipeline. The translation of Segment data and its ingestion within workflows takes care of that for them. 

To give you an idea of what this looks like, here’s a view from their HubSpot workflow builder. In this workflow, they use Vitally to translate Stripe data from user-level to account-level, send that account-level data into a Hull segment, and use Hull to specify when an account becomes a paying account. This sets the deal to “won” automatically and brings in the exact deal amount directly from Stripe. No manual effort needed.

By setting up a workflow that ensures that users and accounts are set to the correct pipeline stage, Gorgias empowers their sales team to focus only on deals that have scheduled a demo or recently created an account. The rest is handled automatically. 
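The stage logic in a workflow like this can be sketched as a pure function over account properties. The property names, stage names, and amount calculation here are illustrative, not Gorgias’s real HubSpot fields:

```javascript
// Hypothetical sketch of deriving a deal's stage and amount from
// account-level properties. Field and stage names are illustrative.
function updateDeal(deal, account) {
  if (account.is_paying) {
    // Deal closes automatically, with the amount taken straight from
    // Stripe-derived data (here, a hypothetical annualized MRR).
    return { ...deal, stage: 'closed-won', amount: account.stripe_mrr * 12 };
  }
  if (account.demo_scheduled) {
    return { ...deal, stage: 'demo-scheduled' };
  }
  // No qualifying signal: leave the deal where it is.
  return deal;
}
```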

Step 3: Reporting

Now all of this sounds good in theory. But without any sort of reporting, it’s hard for Axelle to determine whether her changes are actually having an impact. So she used another tool in her toolkit: Periscope. Periscope lets the Gorgias growth team create sales-dedicated dashboards powered by their Segment data.

Axelle can track deal evolution, the monthly pipeline, and seller activity all in one place that helps her identify potential improvement areas very quickly.

The best part? Setting this up isn’t complicated. Axelle connected their different data sources—their app, Vitally, Stripe—to Segment so all of their customer data was complete and accessible across many tools. 

She used Segment Personas and Hull to get account properties into HubSpot and then set up workflows in HubSpot, Zapier, Hull, and Segment Personas using Segment data.

“Overall this should take about a day of work. You need to add a bit of time for data monitoring in the beginning, but then you are good to go. It is that easy!” – Axelle

Delivering great experiences (and results)

The results of this self-driving CRM are downright impressive:

  • A 143% increase in the number of prospects a sales rep can reach out to

  • A 73% close rate for prospects who receive a demo

  • A drop in sales cycle from 20 days to 13 days

  • Cleaner, more consistent data across their different fields

  • Instant, real-time reporting on sales numbers and closes

With a little bit of this automation, each Gorgias rep is able to interact with 80+ prospects per month—more than 2x the industry average! Axelle and her growth team have created wins across the business just by figuring out the right levers to automate. Thanks to their work, the Gorgias sales team can spend more time talking with prospects and less time on manual, error-prone data entry.

Seth Familian on July 14th 2019

You’ve probably heard that having “high-quality” data is critical for enterprise success. It drives trustworthy analytics, reliable automations, and measurable business impact like revenue growth and customer retention. But what ensures good data—especially at scale?

As a Solutions Architect helping customers implement Segment, I’ve found that achieving high-quality data always boils down to three key ingredients: standardization, ownership, and agility.

In this post, you’ll learn why data is worth standardizing, two models of ownership for driving data standards at your company, and how to stay agile in the process.

Why standardize?

Let’s say your company runs a SaaS app on web, iOS, and Android. If you don’t pay attention to data standards, you run the risk of measuring the same events (like Signed In or Step Completed) with slightly different spellings, hyphenation, property names, and values on each platform:

There’s a lot of inconsistent data in the table above:

  • Website and Android use spaces in event names, while iOS uses hyphens

  • Website and iOS use camelCased property names, while Android uses snake_case

  • Website uses lowercase property values, while iOS uses Title Case and Android uses Title Case or integers
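One way to converge on a single convention (say, Title Case event names and snake_case property keys) is a small normalization step before events are sent. This is a minimal sketch of that idea, not Segment’s Protocols product:

```javascript
// Minimal sketch of normalizing event names and property keys to one
// convention: Title Case events, snake_case property keys.
function titleCaseEvent(name) {
  return name
    .replace(/[-_]+/g, ' ')          // 'signed-in' -> 'signed in'
    .split(' ')
    .filter(Boolean)
    .map(w => w[0].toUpperCase() + w.slice(1).toLowerCase())
    .join(' ');
}

function snakeCaseKey(key) {
  return key
    .replace(/([a-z0-9])([A-Z])/g, '$1_$2') // 'accountType' -> 'account_Type'
    .replace(/[-\s]+/g, '_')
    .toLowerCase();
}

// Apply both rules to a Segment-style event payload.
function normalizeEvent(evt) {
  const properties = {};
  for (const [k, v] of Object.entries(evt.properties || {})) {
    properties[snakeCaseKey(k)] = v;
  }
  return { event: titleCaseEvent(evt.event), properties };
}
```

Running every platform’s events through one normalizer like this means `signed-in` on iOS and `Signed In` on the web land in your warehouse as the same event.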

As a result of these inconsistencies, you can’t accurately compare the same event across platforms. To fix this problem, you need standardized data.

While these issues can be automatically detected with Segment’s Protocols product, it’s still important that your organization stays focused on ensuring this consistency even during the data planning process. Doing so drives a number of benefits for yourself, your team, and your organization:

  • Data science and IT won’t waste hours or days performing “retroactive ETL” to normalize otherwise inconsistent property values.

  • Product, engineering, and BI will produce reports with greater clarity and consistency when exploring the data in analytics and dashboarding tools.

  • Marketing will build more accurate automations and audiences, which will lead to higher ROI and ROAS.

  • The C-Suite will view your product and performance metrics as trustworthy and reliable. And that trust will cascade down through all levels of the organization, erasing the suspicion that those great (or problematic) outcomes shown in reporting “must be due to bad data.”

As your standardized data gains trust throughout your organization, it’ll also become easier to onboard new brands and products onto your tracking framework. Ultimately, this paves the way for unified analytics across teams and business units. This shared framework will become a common language for employees across teams—whether in BI, marketing, product, finance, sales, or engineering—to more easily communicate and collaborate with one another. 

How to standardize?

So how do you achieve organizational data Zen? By standardizing ownership of your data framework through people and not just a data dictionary. Don’t get me wrong—data dictionaries and solid documentation are critical for driving successful adoption of any data framework. That’s why Segment encourages all of its customers to build a robust tracking plan. But having the right technology and people in place to advocate for that framework—and to enforce it—is what really makes all the difference in the world. 

Two models of ownership: The Wrangler & The Champions

In our experience helping thousands of companies onboard to Segment, we’ve found that two basic models of ownership can each drive successful adoption of data standards across an organization. Neither of these frameworks is inherently “better” than the other, and their efficacy depends on the nature of your organizational culture. So with that in mind, let’s explore each.

The Wrangler is the white hat standards sheriff in the wild west of your organization’s data management. This individual (usually there’s only one Wrangler) typically:

  • Owns the authorship of data standards, 

  • Instructs product, engineering, and marketing managers on those data standards, 

  • Oversees and approves the creation and revision of all tracking plans, 

  • Monitors the Segment workspace for violations, and 

  • Holds each team accountable for any data inconsistencies that might arise. 

The Wrangler is especially good for organizations who rely on a sole “Directly Responsible Individual” (DRI) to drive change management initiatives or for organizations with strongly hierarchical models and reporting structures. Within these organizations, the Wrangler reinforces accountability to a unified, standardized model of data reporting. And while the Wrangler might often be seen as the data “Bad Cop,” they can be quite effective in their role as long as all data standards and violations monitoring flows through them. 

The Champions model fosters the development of a series of more enthusiastic and positive-minded Wranglers throughout the organization. As a result, this model helps address the one big downside to the Wrangler model: that standards and violations monitoring rests upon the shoulders of one person. In contrast, Champions act to collectively educate on and enforce data standards. This model is more useful for matrix organizational structures or “flatter” hierarchies which have many teams reporting up to a large executive team. 

Each functional group within the organization—such as product, marketing, sales, and finance—has its own “Champion” responsible for buying in to the organization’s data standards and advocating for their team’s needs. In doing so, their teammates are more likely to abide by the standards framework since they know their voice can be easily represented on the larger “council” of Champions. This council can also help collectively steer improvements to the company’s common schema and data standards, meeting periodically to review change requests.

While the Champions model seems potentially idyllic, it’s a structure that only works for the most collaborative and interconnected organizations. Applying a Champions model to a more hierarchical company might result in slowdowns and frustration in efforts to build consensus. 

Embrace agility

Regardless of which ownership model you adopt, being agile and open to constant change is critical to your data governance and standardization strategy. The initial hypotheses posited by the first versions of your data standards might be disproven over time—and if they are, that’s okay! Here are some of the easiest ways to stay agile with your data standards development:

  • Periodically send a “data standard satisfaction survey” to all relevant stakeholders—from engineers and product managers to marketers and analysts—so you can take an organizational temperature check on the efficacy of the data standard. 

  • Conduct a quarterly data standard review either on your own (if you’re the Wrangler) or with all Champions to brainstorm and evaluate adjustments that will make your data increasingly useful and consistent. 

  • Consider the implications of changing the standard before introducing those changes, so you’ll avoid wasting engineering time on retroactive ETL or other potential headaches.

Ready for good data?

Here at Segment we’re always looking to deliver useful products, tooling, and processes to help customers standardize and optimize their data. Our infrastructure helps organizations of every size take a proactive approach to good data by helping them plan standards thoughtfully, monitor easily, and enforce effortlessly. That’s why we believe good data is Segment data. Ready to standardize your data with Segment? Reach out. We’re happy to discuss how we can help! 
