Growth & Marketing

Doug Roberge on November 13th 2019

As Ben Clarke at Fast Company said, “These days, the true test of how innovative a company can be is how well it experiments.” By that standard (and many others), Imperfect Foods is one of the most innovative companies out there.

Founded in 2015 with a mission to reduce food waste and build a better food system for everyone, they offer imperfect (yet delicious) produce, affordable pantry items, and quality meat and dairy on a weekly subscription basis via their website. With over 200,000 subscribers across 25 cities, they’ve saved 80 million pounds of imperfect food from being thrown away.

Patti Chan, who leads the digital product department, is an avid supporter of experimentation and experiment-driven product development. However, with a small team, she was challenged with experimenting often while still keeping up with the day-to-day demands of the business. 

Companies like Netflix have teams of over 300 people running experiments, while her team consisted of six engineers, two QA engineers, a designer, and a PM, who together were responsible for four Imperfect Foods products.

Patti needed a scalable way to run experiments and measure results without stretching her team too thin. That’s why her team implemented a data infrastructure that made their testing dreams a reality.

Here’s the experimentation infrastructure Imperfect Foods uses (a rough instrumentation sketch follows the list):

  • Collect user event data from the Imperfect Foods website via Segment.

  • Send user event data to Split.io and AB Tasty for experimentation.

  • Send user event data and test results to Snowflake for data warehousing.

  • Run queries and build reports in Mode Analytics and Amplitude.
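To make this concrete, here’s a rough sketch of what collecting an experiment event through Segment can look like in the browser. The event and property names below are hypothetical, not Imperfect Foods’ actual tracking plan:

import { AnalyticsBrowser } from '@segment/analytics-next'

// Load the Segment client with your workspace write key (placeholder).
const analytics = AnalyticsBrowser.load({ writeKey: '<YOUR_WRITE_KEY>' })

// Record which variant a visitor saw, so every downstream tool
// (Split.io, AB Tasty, Snowflake) receives the same exposure event.
analytics.track('Experiment Viewed', {
  experimentName: 'custom-box-selection',
  variant: 'treatment',
})

// Record the outcome metric the experiment is trying to move.
analytics.track('Subscription Renewed', {
  market: 'los-angeles',
})

Because every tool is fed from the same track calls, the team instruments an experiment once and analyzes it everywhere.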

Growing a culture of experimentation

For a small team with multiple responsibilities, it’s often difficult to make time for experimentation. Experiments require careful planning and the right technology to ensure the tests you do run produce reliable results. But not experimenting at all can lead to countless missed opportunities.

Patti knew having the right experimentation framework and technology in place would be the best way to build a culture of experimentation, without overburdening her team. 

Here’s the process they landed on:

  1. Define your problem and hypothesis clearly. Know the question you want to answer and set up your test around that question.

  2. Pick a reliable leading measure to move. Don’t choose lagging measures; they take too long to show results and to gauge impact in the short term.

  3. Do things that are not scalable to start. You’ll get to your findings faster and can worry about scaling things like automation and admin tools later, when you know there’s value.

  4. Don’t stress over small sample sizes. Waiting for a huge sample slows you down; directional results from a smaller group are often enough to tell you whether an idea is worth further investment.

  5. Choose bold ideas. You won’t see big gains without breaking new ground.

  6. Share your learnings broadly. This helps all departments benefit from the findings of each experiment and creates a culture that celebrates and prioritizes experimentation.

Of course, the right process will naturally fall short without the right technology to support it. So, Patti and the Imperfect Foods team implemented a stack that helped them offload countless hours of work. They rely on Segment as their customer data infrastructure to collect and deliver the customer data needed to run tests and evaluate results. In addition, they use Split.io and AB Tasty to reduce the amount of work required to change their UX and route traffic to the right experiments.

“Segment is the glue that holds our experimentation infrastructure together.” – Patti Chan, VP of Product @ Imperfect Foods

With the right experimentation process and data infrastructure in place, Patti and the team can run a test per week with the equivalent of fewer than five dedicated team members.

Enhancing the customer experience for big retention gains

While not all tests generate positive results — in fact, according to Patti’s estimates, as many as 50% fail — when you do get a winner it can have a big impact on the business.

Patti’s team came up with an idea while brainstorming ways to improve the customer experience at Imperfect Foods. They had a hypothesis that allowing customers to select foods they didn’t want in their box would improve customer satisfaction and loyalty. Despite it being complicated logistically — this feature could lead to thousands of custom boxes and substitutions — they wanted to test it to see if the idea was viable.

They built the functionality in about three weeks, tested it, selected their target market (Los Angeles), launched the feature to 50% of their LA subscribers, and waited anxiously for the results. The team was stunned when the results came in. Despite low adoption in the first iteration, customers who used the new feature were 21% more likely to be retained than users who did not have access.

It was clear that this was something that should be invested in further and rolled out to the rest of Imperfect Foods’ customer base.

Applying experimentation to internal processes, too

Experimentation is often incorrectly associated only with changing customer-facing UI. However, experiments are also an opportunity to improve operational efficiency. One example of this is how Patti and her team worked with the customer care team at Imperfect Foods.

Before July, the customer care team didn’t have quick access to critical information that could help them diagnose customer problems, like the delivery status of an order. Patti and the team hypothesized that getting this information to their care team members in real-time with fewer clicks during their calls would help drive customer satisfaction and reduce the time to resolution for those support calls.

Patti and her team built a feature for their customer care associates that exposed delivery details, such as whether an order was out for delivery, whether the delivery was marked as completed, and whether there was photo confirmation. Similar to their external experiments, the team rolled it out to a select few support team members and compared their results with the rest. Yet again, her team struck gold: they reduced support call times for the category by 10%.

Harvesting the right ideas for your business

For Patti and the team at Imperfect Foods, experimentation allowed them to explore more ideas and ultimately build a better product for their users. All in all, the results her team shared speak for themselves:

  • Comprehensive experimentation framework and tech stack implemented

  • 22 experiments run in 6 months

  • 21% increase in retention (for Los Angeles test group)

  • 10% reduction in time to resolve a customer status inquiry

One great idea has the potential to change your entire business. To get to that great idea, you’re going to need to plow through quite a few duds. It’s not easy, and not every experiment comes with double-digit metric gains, but building a culture of experimentation within your organization will always prove fruitful.

“The experimentation process can be disappointing and humbling at times. Do it anyway! The confidence we get from knowing that our solution not only fits a spec but solves for a real customer need is invaluable.” – Patti Chan, VP of Product @ Imperfect Foods

Nicole Nearhood, Olivia Buono on October 21st 2019

As any sales team knows, building proposals can be a tedious, painful chore. So in 2013, Nova Scotia startup Proposify set out to revolutionize the entire proposal process, from creation to close and every deal-making moment in between.

In 2017, Proposify began to see a significant uptick in growth. While this was a net positive for the business, rapid expansion brought a whole host of problems for their sales team.

Specifically, Proposify struggled with turning inbound leads into paying customers. Inbound interest was outpacing their small team’s ability to execute. Max Werner, Proposify’s marketing operations and analytics specialist, decided it was time to figure out how to improve the conversion rate without increasing headcount. 

He identified three major opportunities:

Scoring leads to focus on high-value conversations

First, Max identified that Proposify’s lean and nimble sales team had no way to know which prospects were the best fit and therefore where to focus their energy. Some visitors were genuinely interested in buying, but others were just kicking the tires. They didn’t have the resources to give all visitors the same level of service, so they needed some way to prioritize their leads.

Enabling all teams to use their preferred tools with little engineering work

Second, with a major new product release on the horizon, Max worried he did not have the proper tooling in place to quickly and reliably add tracking to the app. Various departments were using different systems for analytics, all of which needed to have customer information from the launch.

This meant that the development team needed to handle multiple APIs and maintain numerous integrations. Because integrations were done at different times to different tools, Proposify lacked data parity across its various systems.

Empowering support and success teams to get deeper insights

Third, Proposify’s customer success and support teams were crying out for a solution to gather deeper engagement and utilization analytics. In order to provide personal, helpful experiences, they needed a better understanding of their customers, which was all but impossible without clean and reliable data. The support and success teams could have enlisted their engineers to get the data they wanted, but Max didn’t want to burden a development team already strapped for time building product features.

“We've always strived for data-driven decision making, but without proper data, it was hard to do. Our sales team was prospecting every trial user we had coming in. Marketing had a hard time keeping track of churn. Support had a hard time reporting on SLAs.” - Max Werner, Marketing Operations and Analytics Specialist, Proposify

Enter Segment

To streamline this process and bring scale to teams across the organization, Proposify chose Segment as the backbone for its customer data.

Proposify’s development team just needed to add Segment to identify, group, and track events during product development. From there, Max and the marketing team could easily connect various destinations like Marketo, Salesforce, Intercom, and more.
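In code, those are Segment’s three core calls. A minimal sketch (the traits and event below are illustrative, not Proposify’s actual schema):

import { AnalyticsBrowser } from '@segment/analytics-next'

const analytics = AnalyticsBrowser.load({ writeKey: '<YOUR_WRITE_KEY>' })

// identify: attach traits to a known user.
analytics.identify('user_123', { email: 'jane@example.com', plan: 'trial' })

// group: associate that user with their company or account.
analytics.group('account_456', { name: 'Example Co', employees: 40 })

// track: record what the user did.
analytics.track('Proposal Sent', { proposalId: 'prop_789', valueUsd: 1200 })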

Here’s just a taste of the integrations set up by each team:

  • Support and Success: Intercom, Gainsight

  • Marketing: Marketo, Visual Website Optimizer, Google Tag Manager, Clearbit

  • Sales: Salesforce

  • Product: Heap

  • Operations: ChartMogul, Amazon Redshift, Recurly

Now that Max had a better handle on his data, he could start tackling the challenges impeding Proposify’s growth. 

“The best part of using Segment for data collection is definitely that we fight with our product team and project managers a lot less. Adding or extending Segment tracking is easy, and new data is instantly available to all downstream destinations (thanks, Segment debugger!).” – Max Werner

A new lead scoring model using Segment and Clearbit 

Onboarding questions can help you triangulate the value of a prospect, but they don’t give you all the information you need to complete your qualification. Plus, the more questions you ask, the further your conversion rate will drop. Max wanted to create a more sophisticated model.

  • First, he added in customer behavioral data (i.e. how many times a user performs a certain interaction; the last time a user performed a behavior). 

  • Second, he also enriched each lead with firmographic data from Clearbit, such as information about a company’s funding, tool stack, and industry. 

Using these inputs, Proposify generated a new lead scoring model and piped it back into Marketo through Segment. As a result, the sales team could more quickly disqualify leads with incomplete profiles and low scores. 
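As a sketch, a simple version of such a model might combine capped behavioral points with firmographic bonuses. The weights, thresholds, and field names here are invented for illustration; Proposify’s actual model isn’t public:

// Hypothetical inputs: behavioral counts from Segment, firmographics from Clearbit.
interface LeadInputs {
  proposalsCreated: number     // behavioral: key action count
  daysSinceLastActive: number  // behavioral: recency
  employees?: number           // firmographic (Clearbit)
  raisedUsd?: number           // firmographic (Clearbit)
}

function scoreLead(lead: LeadInputs): number {
  let score = 0
  score += Math.min(lead.proposalsCreated, 10) * 5  // engagement, capped
  if (lead.daysSinceLastActive <= 7) score += 20    // recent activity bonus
  if ((lead.employees ?? 0) >= 10) score += 15      // company-size fit
  if ((lead.raisedUsd ?? 0) > 0) score += 10        // funded companies
  return score
}

// Leads below a threshold never reach a rep's queue.
const qualified = scoreLead({ proposalsCreated: 4, daysSinceLastActive: 2, employees: 25 }) >= 40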

Real-time data, without the engineering work

Proposify’s development team uses the Segment SDKs to add user and company traits or track events as the team develops and refines features. Now that Segment is implemented, Proposify is confident that every tool connected through Segment is piped the same data in the same format in real time.

The time and effort saved through standardizing tracking against one API also helps Proposify iterate, extend, and improve upon its tracking significantly faster than was possible before; and the team doesn't have to worry about random APIs deprecating. 

Thanks to this:

  • The product team can use customer traits inside Heap to segment its user base more effectively to evaluate how features of the app impact conversion. 

  • The product team connects customer behavior data via Segment to find out the optimal number of proposal pages and which pages get the most visibility. 

Self-serve analytics for success and support

Additionally, Proposify’s sales and support teams benefit from better customer data and a streamlined process for their downstream tools. With the Intercom and Gainsight integration via Segment, Proposify’s Customer Success team can self-serve customer health information. 

With all customer info in one location (Intercom), the success team can provide quick support without having to dig around for details about each customer. Due to their fast and accurate service, the Success team has been able to maintain a negative net MRR churn almost every month.  

A stable data infrastructure to help future growth

Since implementing Segment, here are just a few of the results Proposify has seen so far:

  • With Proposify’s new data infrastructure, the sales team has increased the size of its sales pipeline and velocity by 152% and 312% respectively.

  • This directly supports the company’s ability to scale, ensuring data parity across its various systems to effectively turn interested prospects into happy, paying customers.

  • Knowing more about the app-usage of high-value customers, customer success has managed to maintain a negative net MRR churn almost every month.

  • In addition, the average data preparation for a Gainsight implementation is three months. Segment enabled Proposify to do it in just one month.    

Thanks to their work, the Proposify sales team is now able to spend time talking to their most valuable prospects and ensure they turn into long-term, successful customers once they do convert.

Doug Roberge on October 7th 2019

If you build it, they will come. While maybe true for amusement parks or $5 all-you-can-eat buffets, this adage does not apply to new software features. 

A lot goes into building, designing, and marketing a new feature in your app. If one piece of the equation fails, your stellar feature could quickly turn into a dud. 

Rahul Jain, longtime PM at experience optimization platform VWO, understands this better than most. He has rolled out countless features during his 5-year tenure and knows first-hand what makes some products more successful than others. But, it wasn’t always so easy.

In this story, we’ll share how Rahul used analytics to build a data-driven organization, improve product adoption by up to 15x, and prevent churn.


Building a data-driven culture

As with any SaaS platform, VWO is in a steady state of change. They build new features, get customer feedback, and then, naturally, build some more. Since Rahul joined the team, VWO has evolved from an A/B testing platform to a complete experience optimization platform that offers deep insights (like funnels, session recordings, and heatmaps) and push messaging, among a host of other things.

VWO product offering, September 2014

VWO product offering, September 2019

At first, VWO’s product analytics were less than stellar. Rahul was only able to track page views. Most of the actions users were taking in the app weren’t being tracked anywhere. For example, they weren’t tracking key user actions in the setup flow like URL selected and Audience created, which are critical in understanding product adoption and engagement. At that point, his team was only consistently tracking 10 events, which only covered one feature in the app.

“We knew that we couldn’t scale things on assumptions and intuition. We needed to be data-driven and let the data speak for us when it came time to manage stakeholders and make product decisions.”

– Rahul Jain

Rahul needed a solution that would work with his existing architecture, require limited engineering resources (they had a product to build!), and provide granular product analytics. He did this by using Segment to collect user behavioral data in the VWO app and send that data into their data warehouse, BigQuery. He was then able to run analyses and set up reports in their product analytics tool, Power BI.

Improving activation rates for new features

With analytics in place, it became much simpler to get an accurate view of how well new features were performing. Instead of just being able to see what pages a user engaged within the app, Rahul and his team implemented analytics for every element of the product. For example, they could now track each step required for creating and analyzing an A/B test, such as segmentation and targeting. 

Elements in the VWO segmentation setup flow that can now be tracked

For VWO’s product team, one of their key KPIs is product activation rate — how many users are using a new product and at what frequency. To keep a pulse on that, they set up reports for every feature. In an ideal world, all features would just naturally land in the top right corner of the chart below. However, it’s rarely that simple.

Adoption vs. frequency graph of a product feature

For the less successful features — the features that wound up in the bottom left of the chart — they could start exploring the reasons behind it. Are customers able to find the feature? Do customers know how to use the feature? Do customers need the feature? 

For example, Rahul and his team built and launched a new segmentation feature that allowed users to set up a behavioral analysis of visitors converting/not converting for a set goal. They added the feature to a dropdown menu where it seemed like a natural fit. But, it wasn’t being used! Naturally, his team wanted to understand why. They decided to send a survey to get some candid customer feedback on the feature. The results were surprising. They realized that it wasn’t that the feature wasn’t useful, it simply wasn’t discoverable!

Rahul and his team set out to fix it. They added a widget in the app which made the new feature more accessible to everyone. They also kicked off a marketing campaign via Appcues that highlighted the feature to users that hadn’t yet used it. This increased adoption from less than 1% to over 15%.

VWO app without the widget

VWO app with the new widget

Being proactive about churn 

Successful onboarding and ongoing usage is essential to retaining customers. With a baseline of product analytics in place that gave them clear insight into both these metrics, the success and growth teams at VWO were able to start getting ahead of churn risks.

Rahul started by giving CSMs more visibility through custom dashboards in Power BI. Each dashboard had in-depth information about a customer’s current product adoption. That information was also reflected in the customer record stored in Salesforce. CSMs could quickly glance at their customer dashboards before getting on calls and give more insightful product recommendations.

The growth team at VWO also uses this data to build engagement buckets (high, medium, low, critical), which are based on the frequency of important actions a user takes inside the VWO app over a period of time. Customers with low and critical scores are churn risks. When a user is at risk, they’re automatically entered into a personalized email flow designed to get them re-engaged. Because this information is also in Salesforce, sales and customer success can take the appropriate action as well.
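As a rough sketch, a bucketing rule might look like the following. The action counts, window, and thresholds are invented for illustration; VWO’s actual definitions aren’t public:

type Bucket = 'high' | 'medium' | 'low' | 'critical'

// Bucket a user by how many important in-app actions they took
// over a trailing window (here, 30 days).
function engagementBucket(importantActionsLast30d: number): Bucket {
  if (importantActionsLast30d >= 50) return 'high'
  if (importantActionsLast30d >= 15) return 'medium'
  if (importantActionsLast30d >= 3) return 'low'
  return 'critical' // churn risk: enters the re-engagement email flow
}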

VWO’s data-driven approach to customer success and churn prediction has helped increase dollar retention rate (DRR) significantly.

Delivering better results for the business

The business impacts of this enhanced focus on product analytics are impressive:

  • Democratized access to detailed product analytics and adoption dashboards

  • Increased the number of events tracked from 10 to 1,000+, covering every possible customer action in the app

  • Improved adoption rates across all features, including a 15-percentage-point increase in adoption of the new segmentation feature

  • Significant increase in dollar retention rates

In addition to all of that, Rahul and his team managed to create a more data-driven product organization. They now have a deep understanding of their customers and can make better decisions as a company. Growth can focus on promoting the right features to the right customers. Customer success can take action on at-risk accounts before it’s too late. And, lastly, Rahul and his team can eliminate assumptions when it comes to prioritizing focus areas for the product.

Mark Hansen on October 3rd 2019

A few months ago, we shared an inspiring story from Mark Hansen, the co-founder of Upsolve and one of the first members of Segment’s Startup Program. In this post, Mark explains how Upsolve leverages Segment and SEO to drive thousands of high intent buyers to their product every month.

When you’re a cash-strapped nonprofit competing for attention against multi-billion dollar public companies, you have to focus on your mechanism for growth, and then become world-class at it.

At Upsolve, we help low-income families file bankruptcy for free, and during our time at Y Combinator, it became clear to us that search engine optimization (SEO) was going to be our most important growth channel.

The main mantra at Y Combinator has always been: “Make something people want.” But we also heard another mantra:

“Just because you built it, doesn’t mean they’ll come.”

We needed a cost-effective, scalable way to bring people to our service.

SEO could deliver that, but there was a problem: we couldn’t compete in traditional ways. We didn’t have the resources to hire dozens of writers or marketers, so we had to think of creative ways to supercharge our content creation and digital marketing.

There were two ways we could tackle SEO: Editorial SEO and Programmatic SEO. Both methods use on-page optimization (such as targeted title tags, headers, and subtopics) to drive organic traffic, but they're otherwise very different (though they work best in tandem and complement one another if done correctly). First, let’s take a closer look at these two different methods.

Programmatic SEO vs. Editorial SEO: What’s the Difference?

The main difference between programmatic and editorial SEO is that programmatic SEO (as the name suggests) is driven by automation and is easier to produce at a large scale, while editorial SEO is more time-intensive and requires more detailed manual work. 

What is programmatic SEO?

Programmatic SEO is the practice of using automatically generated or user-generated content to create landing pages at a large scale that target high-volume search queries with transactional intent (with Pinterest and Zillow as the canonical examples). 

What is editorial SEO? 

Editorial SEO is the practice of creating high-quality, editorial, long-form landing pages focused on topics related to your audience. While also driven by keyword research like programmatic SEO, editorial SEO focuses more on creating quality content. HubSpot is the canonical example here.

We started with the editorial approach and created long-form landing pages that spoke to the most common questions people had when considering filing for bankruptcy.

A guide on rebuilding credit after bankruptcy, an example of our editorial landing pages

But after a few weeks in YC, we complemented those with programmatic, locality-specific landing pages. We couldn’t compete on keywords with high search volume like “filing bankruptcy online” so long-tail keywords with less competition provided us with a great way to make our mark in SERPs.

The first iteration of these was a New York bankruptcy guide, after which we rolled out similar pages for other states and smaller localities (a bankruptcy guide for Brooklyn, for example).

A bankruptcy guide for Brooklyn, an example of our programmatic landing pages

We saw some early promise with these approaches, but we were still in the dark as to which landing pages were performing better.

Which pages were bringing in committed users that finished the signup process? And which pages were bringing in people who were kicking the tires? We needed the answers to prioritize what content we should create going forward.

The data that would help us answer this question was surprisingly hard to come by.

In the best-case scenario, the people I talked to would use content groups in Google Analytics. In the worst-case scenario, people had no idea how their content was performing. I couldn’t understand why people weren’t capturing meta-information about how people were interacting with their site, and then using that to help guide content creation. It seemed essential.

Eventually, I couldn’t wait any longer and had to try something that had been stuck in the back of my head for some time.

A few weeks earlier we had spoken with Gustaf Alströmer, a partner at Y Combinator. During one of our office hours, he discussed his time leading Growth at Airbnb. To measure the impact of their work, his team had tracked the first interaction someone had with the Airbnb site and the last interaction before they hit the signup flow.

Multi-touch attribution for a hypothetical user journey at Airbnb. Credit: Airbnb

I didn’t ask him to go deeper, but this first/last interaction concept painted a wonderful picture in my head. It sounded like the perfect way to measure the effectiveness of our various landing pages.

At this point, we already had event tracking up and running throughout our product and were using Segment to handle Google Analytics (and, at times, FullStory) on our website. As a solo developer, I always make sure we don’t add additional tools for the sake of it and tie ourselves up in complexity.

So why not look at what we could do with the tools we already had?

As I was looking through Segment's identity docs, I stumbled into something interesting – a way to save data to users pre-signup via traits. Between page calls and tracking calls, a user’s actions were already stitched together in Segment behind the scenes. Adding a few user traits to those calls would be huge.

With traits saved to anonymous users, all that was needed was a GET request to Segment’s Personas API. That meant we could pull their anonymous traits and store them with our new database of user records. Storing this information in a JSONB column made it easy to run analysis through Chartio and Postico and understand how our content was performing.

We started with the following set of traits. These were saved on each page redirection or transition a user made on upsolve.org:

{
  "lastInteraction": {
    "contentPath": "/la/lafayette/",
    "contentGroup": "cityPage",
    "contentTopics": [],
    "interactionAt": "2019-08-14T04:21:37.797Z"
  },
  "numInteractions": 5,
  "firstInteraction": {
    "contentPath": "/la/",
    "contentGroup": "statePage",
    "contentTopics": [],
    "interactionAt": "2019-08-09T02:07:13.302Z"
  }
}
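Mechanically, saving and retrieving these traits can be sketched like this. The identify call runs in the browser on each page transition; the lookup is a server-side request to the Personas Profile API (the space ID, token, and trait shape are placeholders):

import { AnalyticsBrowser } from '@segment/analytics-next'

const analytics = AnalyticsBrowser.load({ writeKey: '<YOUR_WRITE_KEY>' })

// Browser: save interaction traits to the still-anonymous user.
analytics.identify({
  lastInteraction: {
    contentPath: '/la/lafayette/',
    contentGroup: 'cityPage',
    interactionAt: new Date().toISOString(),
  },
})

// Server (Node): pull the profile's traits once the user signs up.
async function fetchTraits(anonymousId: string): Promise<unknown> {
  const url =
    'https://profiles.segment.com/v1/spaces/<SPACE_ID>/collections/users' +
    `/profiles/anonymous_id:${anonymousId}/traits`
  const auth = Buffer.from('<ACCESS_TOKEN>:').toString('base64')
  const res = await fetch(url, { headers: { Authorization: `Basic ${auth}` } })
  return res.json()
}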

This gave me two charts and two crucial insights.

Turning application code into programmatic content

Of the content we’d produced, only ~10% of our conversions were coming from our editorial articles while ~70% were coming from state and city page templates (created programmatically).

A breakdown of which landing page types were converting best in Personas

The data was even more surprising given the effort we were investing in each. We had four people working on editorial articles around the clock. Meanwhile, the city and state page templates were written once and dynamically generated with additional content from other data sources and our petition generating application code.

Based on the data we saw in Personas, we all quickly saw where our growth was coming from and devoted the time previously set aside for editorial toward improving the quality of our programmatic content. It’s been so successful, we’ve now created over 95,000 landing pages!

The time when my laptop kept running out of memory building our website

This mirrored recommendations we were getting from our SEO agency, who showed us that some of our programmatic content was marked as duplicate. When content is seen as duplicate, it eats up the search engine’s crawl budget and the algorithm struggles to understand which pages are best to serve, preventing these pages from ranking well.

This helped us understand that the key to success was a hybrid approach to SEO – programmatic content that was highly scalable, but with enough editorial value to avoid duplication.

Making the most of transactional intent

Segment also helped us understand what actions our visitors were taking on our landing pages. This may come as no surprise to others building landing pages, but we were surprised that the vast majority of our website visitors were not consuming multiple pieces of content during their visit.

A breakdown of how many pages were visited before conversion

Since our programmatic city pages had more clearly defined intent, visitors were converting from the first page they landed on. For example, if someone arrives at Upsolve having searched “Iowa Bankruptcy Forms”, they are much more ready to convert than if they searched “What happens to secured debt in bankruptcy?”

The data told us we needed to treat every page of our site like our home page, which drove a series of design changes on our landing pages.

We now have a large, bold call to action at the top of every page so visitors can convert right away.


Being exceptional at organic growth is the only way our team of 6 can compete with publicly traded companies willing to spend an incredible amount of money on paid ads. SEO continues to be our primary mechanism for growth today and is something we’ve continued to improve on.

Months ago, we were only getting the hang of Google Search Console. Now, with the help of Segment’s infrastructure and deeper features, we can easily grasp the impact and revenue each of our articles brings in. There’s no way we could have grown our impact or the revenue that supports the organization without it.

Shout out to the Segment team for supporting us in our experimentation with Personas, the GatsbyJS community for helping me get a 95,000-page build working, and everyone on the Upsolve team for coming together to make this growth possible – Andrea, Nicole, Rohan, and Tina.

Kevin Garcia on August 12th 2019

Twilio Segment Personas is now part of Segment’s Twilio Engage product offering.

If it takes more than one pizza to feed your whole sales team… then you know the struggles of interfacing with your CRM.

Data is inconsistent and poorly tagged. Instead of being able to quickly filter by the reasons you lost business (price, competitor, timing), you’re greeted by long strings of free-form text.

Worst of all, sometimes new key accounts go unnoticed until days or weeks after they’ve reached out for a demo. You’re losing business. Not because of your product, but because your growth machine isn’t working as well as it could.

This was the future facing Axelle Heems, who runs growth operations at Gorgias. Gorgias builds tools for ecommerce stores running on Shopify and Magento2. They help those stores automate their customer support and track their overall spend. 

Over the past year, Gorgias has seen a soaring number of prospects requesting demos. A blessing you say? Not when your company isn’t hiring account executives at the same pace. 

Here’s the story of how she turned their CRM from a manually updated database into a smart machine that runs their business. In this post, Axelle shares how she automated their CRM (HubSpot) to drive a 174% increase in sales.

Fail fast and pivot

Gorgias had one goal in mind: turn inbound prospects into paying customers.  Axelle’s first instinct was to automate some of her sales team’s in-person interactions with smaller prospects. If she could automate the sales experience for smaller customers, her team could focus their efforts on the bigger ones. 

But there was one problem… it didn’t work.

Customers that received a demo from an account executive closed 73% of the time. Those that received the automated sales experience closed only 30% of the time. It became clear that focusing on automating the sales experience was a dead end.

The growth team then switched from automating the customer experience to instead automating the sales process. If human interaction was critical to the sales experience, then Axelle and her team would help clear the path for account executives to focus their time on delivering those moments for prospects. 

Their new goal was both ambitious and unprecedented: to offer every prospect a demo.

Automating the sales process

Step 1: Lead Qualification

The Gorgias team wanted to automate away everything which stopped their AEs from getting in touch with a customer. So they started with lead qualification.

In most organizations, new signups are added directly to their CRM as leads. From there, a person manually looks through all the leads and “qualifies” them to understand how good of a fit they are, and then assigns them to a salesperson. This process might take anywhere from thirty minutes to 24 hours. 

But that leaves a massive problem: after 20-30 minutes, your lead has probably gone cold. So, Axelle and her team set out to solve that problem with software.

When a new user clicks the “book demo” button, JavaScript that Axelle added pre-qualifies the prospect before they create an account on Gorgias. Based on their answers in the demo form, the automated process redirects the prospect to the AE focused on their predicted value tier. Altogether, this means instant scheduling of a follow-up meeting using Calendly.

For the prospects that create an account, the lead is also automatically qualified. Gorgias uses data collected from Clearbit and Datanyze, and routed through Segment, to qualify the lead as soon as they sign up. Clearbit pulls in company information based upon the user’s email, and Datanyze analyzes traffic patterns and technology on the user’s website. Each lead is then assigned a score that is used to match them to the right salesperson.
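Conceptually, the routing step might look like the sketch below. The tiers, thresholds, and calendar links are made up for illustration:

// Hypothetical enrichment results from Clearbit (company info) and
// Datanyze (site traffic and tech stack), delivered through Segment.
interface EnrichedLead {
  employees: number
  monthlyTickets: number
}

// Map a lead's predicted value tier to that tier's AE calendar.
function routeToAE(lead: EnrichedLead): string {
  const tier =
    lead.employees >= 100 || lead.monthlyTickets >= 5000 ? 'enterprise'
    : lead.employees >= 10 ? 'mid-market'
    : 'self-serve'
  const calendars: Record<string, string> = {
    'enterprise': 'https://calendly.com/ae-enterprise/demo',
    'mid-market': 'https://calendly.com/ae-midmarket/demo',
    'self-serve': 'https://calendly.com/ae-selfserve/demo',
  }
  return calendars[tier] // the prospect books a demo instantly
}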

Step 2: Deal updates

Once her growth team automated the initial deal creation, Axelle turned her attention toward the next biggest win: updating a deal in their CRM.

Looking across industries, we find that salespeople typically update records in their CRM 30-50 times per day. This means a lot of wasted time—it can take 30-60 seconds each time a salesperson updates the CRM—and the data is wildly inconsistent. So Gorgias decided to take their salespeople out of the equation. 

Axelle built a system where updates about usage would flow automatically from Stripe (payments), Gong (sales conversations), and Vitally (account health/usage). All of this data flows in and out of Segment.

Vitally is Gorgias’s source of truth for understanding customer engagement in their app and passes Stripe data to the rest of their stack dynamically. It provides account executives with important information like “whether the prospect had signed up for a free trial”, and “whether the user has added critical integrations like Gmail or Shopify”.

Here’s a look at Gorgias’ Vitally view, complete with the elements that influence their success metrics.

As the deal progresses, Gorgias uses the events flowing through Segment to create “account properties”. As these account properties update, the salesperson is able to know more about the customer journey in real-time.

On top of that, salespeople don’t even have to manually move the deal along the pipeline. The translation of Segment data and its ingestion within workflows takes care of that for them. 

To give you an idea of what this looks like, here’s a view from their HubSpot workflow builder. In this workflow, they use Vitally to translate Stripe data from user-level to account-level, send that account-level data into a Hull segment, and use Hull to specify when an account becomes a paying account. This sets the deal to “won” automatically and brings in the exact deal amount directly from Stripe. No manual effort needed.

By setting up a workflow that ensures that users and accounts are set to the correct pipeline stage, Gorgias empowers their sales team to focus only on deals that have scheduled a demo or recently created an account. The rest is handled automatically. 

Step 3: Reporting

Now, all of this sounds good in theory. But without any sort of reporting, it’s hard for Axelle to determine whether her changes are actually having an impact. So she used another tool in her toolkit: Periscope. Periscope lets the Gorgias growth team create sales-dedicated dashboards powered by their Segment data.

Axelle can track deal evolution, the monthly pipeline, and seller activity all in one place that helps her identify potential improvement areas very quickly.

The best part? Setting this up isn’t complicated. Axelle connected their different data sources—their app, Vitally, Stripe—to Segment so all of their customer data was complete and accessible across many tools. 

She used Segment Personas and Hull to get account properties into HubSpot and then set up workflows in HubSpot, Zapier, Hull, and Segment Personas using Segment data.

Overall this should take about a day of work. You need to add a bit of time for data monitoring in the beginning, but then you are good to go. It is that easy! - Axelle

Delivering great experiences (and results)

The results of this self-driving CRM are downright impressive:

  • A 143% increase in the number of prospects a sales rep can reach out to

  • A 73% close rate for prospects who receive a demo

  • A drop in sales cycle from 20 days to 13 days

  • Cleaner, more consistent data across their different fields

  • Instant, real-time reporting on sales numbers and closes

With a little bit of this automation, each Gorgias rep is able to interact with 80+ prospects per month—more than 2x the industry average! Axelle and her growth team have created wins across the business just by figuring out the right levers to automate. Thanks to their work, the Gorgias sales team can spend more time talking with prospects, and less time wasted on manual, error-prone data entry. 

Seth Familian on July 14th 2019

You’ve probably heard that having “high-quality” data is critical for enterprise success. It drives trustworthy analytics, reliable automations, and measurable business impact like revenue growth and customer retention. But what ensures good data—especially at scale?

As a Solutions Architect helping customers implement Segment, I’ve found that achieving high-quality data always boils down to three key ingredients: standardization, ownership, and agility.

In this post, you’ll learn why data is worth standardizing, two models of ownership for driving data standards at your company, and how to stay agile in the process.

Why standardize?

Let’s say your company runs a SaaS app on web, iOS, and Android. If you don’t pay attention to data standards, you run the risk of measuring the same events (like Signed In or Step Completed) with slightly different spellings, hyphenation, property names, and values on each platform:
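A hypothetical sketch of that drift, with each call standing in for the equivalent Segment SDK call on its platform:

import { AnalyticsBrowser } from '@segment/analytics-next'

const analytics = AnalyticsBrowser.load({ writeKey: '<YOUR_WRITE_KEY>' })

// Website: spaces in the event name, camelCased properties, lowercase values.
analytics.track('Step Completed', { stepName: 'profile', stepStatus: 'complete' })

// iOS: hyphenated event name, camelCased properties, Title Case values.
analytics.track('Step-Completed', { stepName: 'Profile', stepStatus: 'Complete' })

// Android: spaces in the event name, snake_cased properties, integer values.
analytics.track('Step Completed', { step_name: 'Profile', step_status: 1 })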

There’s a lot of inconsistent data in the example above:

  • Website and Android use spaces in event names, while iOS uses hyphens

  • Website and iOS use camelCased property names, while Android uses snake_case

  • Website uses lowercase property values, while iOS uses Title Case and Android uses Title Case or integers

As a result of these inconsistencies, you can’t accurately compare the same event across platforms. To fix this problem you need standardized data: the same event names, property names, and value formats on every platform.

While these issues can be automatically detected with Segment’s Protocols product, it’s still important that your organization stays focused on ensuring this consistency even during the data planning process. Doing so drives a number of benefits for yourself, your team, and your organization:

  • Data science and IT won’t waste hours or days performing “retroactive ETL” to normalize otherwise inconsistent property values.

  • Product, engineering, and BI will produce reports with greater clarity and consistency when exploring the data in analytics and dashboarding tools.

  • Marketing will build more accurate automations and audiences, which will lead to higher ROI and ROAS.

  • The C-Suite will view your product and performance metrics as trustworthy and reliable. And that trust will cascade down through all levels of the organization, erasing the suspicion that those great (or problematic) outcomes shown in reporting “must be due to bad data.”

As your standardized data gains trust throughout your organization, it’ll also become easier to onboard new brands and products onto your tracking framework. Ultimately, this paves the way for unified analytics across teams and business units. This shared framework will become a common language for employees across teams—whether in BI, marketing, product, finance, sales, or engineering—to more easily communicate and collaborate with one another. 

How to standardize?

So how do you achieve organizational data Zen? By standardizing ownership of your data framework through people and not just a data dictionary. Don’t get me wrong—data dictionaries and solid documentation are critical for driving successful adoption of any data framework. That’s why Segment encourages all of its customers to build a robust tracking plan. But having the right technology and people in place to advocate for that framework—and to enforce it—is what really makes all the difference in the world. 
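To make “tracking plan” concrete, a single entry might record the event name, its owner, and the allowed property types. The shape below is invented for illustration (Segment Protocols uses its own JSON Schema-based format):

// One illustrative tracking-plan entry: the agreed-upon contract for a
// single event, which tooling like Protocols can then enforce at the source.
const orderCompleted = {
  event: 'Order Completed',
  description: 'Fired once when a customer finishes checkout.',
  owner: 'growth-team',
  properties: {
    orderId: { type: 'string', required: true },
    revenue: { type: 'number', required: true },   // USD, excluding tax
    currency: { type: 'string', required: false }, // ISO 4217 code
  },
}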

Two models of ownership: The Wrangler & The Champions

In our experience helping thousands of companies onboard to Segment, we’ve found that two basic models of ownership can each drive successful adoption of data standards across an organization. Neither of these frameworks is inherently “better” than the other; their efficacy depends on the nature of your organizational culture. So with that in mind, let’s explore each.

The Wrangler is the white hat standards sheriff in the wild west of your organization’s data management. This individual (usually there’s only one Wrangler) typically:

  • Owns the authorship of data standards, 

  • Instructs product, engineering, and marketing managers on those data standards, 

  • Oversees and approves the creation and revision of all tracking plans, 

  • Monitors the Segment workspace for violations, and 

  • Holds each team accountable for any data inconsistencies that might arise. 

The Wrangler is especially good for organizations who rely on a sole “Directly Responsible Individual” (DRI) to drive change management initiatives or for organizations with strongly hierarchical models and reporting structures. Within these organizations, the Wrangler reinforces accountability to a unified, standardized model of data reporting. And while the Wrangler might often be seen as the data “Bad Cop,” they can be quite effective in their role as long as all data standards and violations monitoring flows through them. 

The Champions model fosters the development of a series of more enthusiastic and positive-minded Wranglers throughout the organization. As a result, this model helps address the one big downside to the Wrangler model: that standards and violations monitoring rests upon the shoulders of one person. In contrast, Champions act to collectively educate on and enforce data standards. This model is more useful for matrix organizational structures or “flatter” hierarchies which have many teams reporting up to a large executive team. 

Each functional group within the organization—such as product, marketing, sales, and finance—has its own “Champion” responsible for buying-in to the organization’s data standards, and advocating for their team’s needs. In doing so their teammates are more likely to abide by the standards framework since they know their voice can be easily represented on the larger “council” of Champions. This council can also help collectively steer improvements to the company’s common schema and data standards, meeting periodically to review change requests. 

While the Champions model seems potentially idyllic, it’s a structure that only works for the most collaborative and interconnected organizations. Applying a Champions model to a more hierarchical company might result in slowdowns and frustration in efforts to build consensus. 

Embrace agility

Regardless of which ownership model you adopt, being agile and open to constant change is critical to your data governance and standardization strategy. The initial hypotheses posited by the first versions of your data standards might be disproven over time—and if they are, that’s okay! Here are some of the easiest ways to stay agile with your data standards development:

  • Periodically send a “data standard satisfaction survey” to all relevant stakeholders—from engineers and product managers to marketers and analysts—so you can take an organizational temperature check on the efficacy of the data standard. 

  • Conduct a quarterly data standard review either on your own (if you’re the Wrangler) or with all Champions to brainstorm and evaluate adjustments that will make your data increasingly useful and consistent. 

  • Consider the implications of changing the standard before introducing those changes, so you’ll avoid wasting engineering time on retroactive ETL or other potential headaches.

Ready for good data?

Here at Segment we’re always looking to deliver useful products, tooling, and processes to help customers standardize and optimize their data. Our infrastructure helps organizations of every size take a proactive approach to good data by helping them plan standards thoughtfully, monitor easily, and enforce effortlessly. That’s why we believe good data is Segment data. Ready to standardize your data with Segment? Reach out. We’re happy to discuss how we can help! 

Eric Kim on July 1st 2019

You don’t need to look very hard to find research supporting the argument that bad data is detrimental to companies and organizations. Comprehensive studies tell us that fixing bad data is no longer simply a matter of improving operational efficiency but a mission-critical requirement.

If the median Fortune 1000 business increased the usability of its data by just 10%, it would translate to an increase of $2.01 billion in total revenue every year. — University of Texas

Bad data costs the U.S. $3 trillion per year. — Harvard Business Review

The executives surveyed by PwC said cleaning up their data will lead to average cost savings of 33%, while boosting revenue by an average of 31%. — Wall Street Journal

I’m part of the Solutions Architect team at Segment. Our team ensures that our enterprise customers successfully implement our platform. Through my experience over the last several years, I’ve seen firsthand just how severe the impact of bad data can be, and it’s not specific to any one industry or organization size.

My team gets an intimate view into how bad data hinders many companies’ abilities to communicate with their customers, power reliable analytics, and drive measurable business impact. When left unaddressed, bad data has both short and long-term effects on a company’s bottom line. 

In this article, I’ll share some observations about what bad data looks like and give you tips on how the best companies prevent bad data in the first place. 

What is bad data?

Bad data doesn’t always start off bad. Many times it was good data that had something bad happen to it. (The poor data!)

If we consider bad data to be an outcome, or a byproduct, then what are the causes of it? Here are the markers of what we’ve come to identify as “bad” data.

Stale data is bad data 

Stale data sounds like: “This is last month’s data. Where’s today’s report?”

More and more critical business use-cases powered by customer data require the data to be readily available in almost real-time. This is a need across most modern organizational functions, from Marketing to Sales to Analytics to Product to Finance.

Teams need fresh customer data as quickly as possible so they can make informed decisions or deliver personalized experiences to their customers. Here are a few scenarios when data needs to be ready fast.

  • Personalization: In this context, “personalization” refers to the application of data to maintain a highly relevant relationship between a company and its customers. This has become an entry-level requirement for businesses in many industries, especially in eCommerce and Media, to remain competitive. Personalizing a customer’s experience—from initial contact on the web or mobile, all the way to customer re-engagement through email or push notification—requires data points be updated and refreshed as often as possible. 

  • Analytics: Fresh, timely data has also become necessary for informing good decision-making at organizations across industry sectors. For example, real-time analytics informs supply-chain logistics and both short and long-term product development. It’s used to drive real-time product merchandising, content planning, and much more.

Inaccessible data is bad data

Inaccessible data sounds like: “I have to wait three months to launch the campaign because I can’t get my hands on the right data.”

Another key indicator of bad data is when it’s inaccessible to the teams within a company that need it the most. I’ve found that the inability to access the right data becomes an increasingly important issue the larger and more distributed an organization becomes. This is often referred to as the “data silos” problem, where information is not available across business units or teams. 

Data silos can be created by system permission issues, incompatible schemas, sprawling tool stacks, or different data sources populating different environments. To unify the silos, many companies have embarked on data lake projects over the past few years that pull together all data across a company into one warehouse. However, this approach doesn’t address the accessibility issue because only specialized technical teams can extract data from a data lake. A common infrastructure powering each department’s tools—and the data lake—with the same data can be a good solution.

Confusing data is bad data

Confusing data sounds like: “Is there anyone in our department who knows what the data points in our tables actually mean? I just want to run a quick update to this report.”

In order for data to be useful, it needs to be clearly understood. I’ve partnered with companies where existing data capture methods were set up haphazardly without a clear system anyone could use to understand what the data actually means. This approach results in only a limited number of people knowing how to interpret the data. 

Having a clear, declarative format for naming and describing your data helps reduce the additional process of “cleaning” or “scrubbing” it before internal teams can use it. A simplified, easy-to-understand tagging framework for naming customer data is essential for democratizing data-driven decisions. It’s also important to make a central repository that houses this information accessible to anyone that needs to use the data. 

Disrespectful data is bad data

Disrespectful data sounds like: “An angry prospective customer is asking why we sent this message when they never asked to receive information from us.”

Consumer privacy has become critical for companies of all sizes. A decade ago, abundant consumer data collection by internet services and applications was viewed as an asset. Today the sentiment has shifted, and customer data collection without the right controls has turned into a liability. As a result, bad data looks like data that’s not collected with consent and is not used in accordance with a customer’s expressed preferences. 

To comply with regulations like the GDPR and CCPA, not only do you need to collect and use data with consent, you also need an easy way to delete or suppress a customer’s information if they ask. This is hard to wrangle without a consolidated data infrastructure.

Using third-party data that’s purchased via data brokers and intermingled across companies exacerbates this problem because it’s hard to accurately collect consent for third-party data. Optimizing experiences with first-party data, or data only used between a customer and the company they interact with, is a more respectful approach.

Untrustworthy data is bad data

Untrustworthy data sounds like: “Are you sure these numbers are correct? They don’t match up with my analysis.”

There are many instances in an organization that can cause data distrust. A report might feel off, and after some digging, you find the data source has been corrupted or stopped collecting altogether. Different tools and dashboards might report different results for the same question, prompting more questions and grinding business to a halt. Any data discrepancy can cause a huge business impact, not just in the time spent to track it down, but also in potential revenue lost by triggering poor customer experiences or making the wrong business decision.

The best approach, described by DalleMule and Davenport in their Organizational Data Strategy HBR article, is to have one source of truth for the business that allows each team and department to create a view of the data that works for them.

Turning around your bad data

Now that we know what bad data is and its consequences, how exactly do companies begin to improve their data practices?

First, it’s important to acknowledge that having bad data is actually the default state of a company. Without proactive processes and infrastructure in place, your data will degrade into chaos.

Consider the sheer amount of data generated today—over 90% of the data in existence today was created less than 2 years ago. While an abundance of customer data might give an organization a potential leg-up for things like machine learning and individual personalization, it’s also become a Herculean task to organize it, clean it, process it, and catalog it. This is why, here at Segment, we advocate for our customers and partners to take a deliberate and opinionated approach to their customer data.

Here are some of the best data strategy and management patterns I’ve observed in partnering with some of the world’s most forward-thinking enterprise companies. 

Treat data like a product

When companies truly operationalize data, they treat their data infrastructure like a product that’s properly staffed, monitored, and maintained like a production-grade system. They appoint a single responsible executive-level individual such as a CIO or CDIO (Chief Digital Information Officer), and that person has a dedicated cross-functional team of product managers, engineers, and analysts. 

These organizations implement a unified, central data governance strategy across business units and product teams. 

Balance standardization and flexibility

Strong organizations seek to achieve a data strategy that balances standardization and flexibility. Standardization is key, so that all teams can coordinate using a shared understanding of that data’s truth (i.e., we trust the data we have to work with). Flexibility, on the other hand, is necessary to accommodate individual teams using the data to suit their needs with the tools they prefer to use.  

If you are completely rigid in how your teams use data without accounting for their needs, departments will go rogue and create siloed data as previously discussed. However, if you don’t give them any parameters for how to use data respectfully and with a common framework, you’ll never be able to do higher level analysis across products, platforms, and business units. 

Routinely audit your data stack   

New solutions to data problems crop up almost weekly (see this analysis of the martech landscape), and the best companies can easily test and try new tools rather than spending that time cleaning up an existing mess. Organizations that I consider industry leaders implement a data infrastructure that lets them adopt new technologies and retain long-term flexibility as new tools and systems come to market.

These organizations are also adopting practices and tooling to automate auditing data. This includes flagging and blocking bad data at the source and enforcing a predefined data specification so they can trust the data in each tool they use.
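To make “enforcing a predefined data specification” concrete, here is a minimal sketch of a validator that flags and blocks non-conforming events at the source. The event names, properties, and types below are hypothetical, and real implementations typically live in a tracking-plan tool rather than hand-rolled code:

```typescript
// A minimal sketch of source-level validation against a tracking spec.
// Event names, properties, and types are hypothetical examples.

type PropertyType = "string" | "number" | "boolean";

interface EventSpec {
  requiredProperties: Record<string, PropertyType>;
}

// The agreed-upon spec: which events exist and what they must carry.
const trackingPlan: Record<string, EventSpec> = {
  "Order Completed": { requiredProperties: { orderId: "string", revenue: "number" } },
  "Product Viewed": { requiredProperties: { productId: "string" } },
};

interface TrackEvent {
  name: string;
  properties: Record<string, unknown>;
}

// Returns a list of violations; an empty list means the event conforms.
function validate(event: TrackEvent): string[] {
  const spec = trackingPlan[event.name];
  if (!spec) return [`Unplanned event: "${event.name}"`];
  const violations: string[] = [];
  for (const [prop, expected] of Object.entries(spec.requiredProperties)) {
    const value = event.properties[prop];
    if (value === undefined) violations.push(`Missing required property "${prop}"`);
    else if (typeof value !== expected) violations.push(`"${prop}" should be a ${expected}`);
  }
  return violations;
}

// Flag and block bad data before it fans out to downstream tools.
function sendIfValid(event: TrackEvent): boolean {
  const violations = validate(event);
  if (violations.length > 0) {
    console.warn(`Blocked "${event.name}":`, violations);
    return false;
  }
  // ...forward to the analytics pipeline here...
  return true;
}
```

The payoff is that every downstream tool sees the same, already-validated events, which is what makes the cross-tool consistency described above possible.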

Build a culture around documentation 

Strong knowledge management, sharing, and accessibility are more critical than ever. At the end of the day, a company is simply an aggregation of people working toward common goals. The larger the company, the more information is generated, the harder it is to communicate what’s important and what’s not, and the higher the organizational stakes become.

Clear channels for sharing what your data practice looks like, guidance on how to capture data, and rules for how to effectively and securely share data with those who need it are all critical components of success.

It’s time to say goodbye to bad data

Here at Segment we’re always looking to deliver useful products and tooling to help customers observe, evaluate, and act to fix their bad data. Our platform helps organizations of every size take a proactive approach to good data by helping them monitor the data they capture, enforce standard data practices, and offer every team and tool access to the same, consistent first-party data.

We believe good data is Segment data. In our next blog post we’ll dive even deeper into how you can achieve good data using Segment. 

Want to learn how to turn your bad data around so you can make business decisions you trust? Reach out. We’re happy to discuss how we can help!

Mark Hansen on May 20th 2019

This is a guest post by one of our most inspiring customers, Mark Hansen, who is a co-founder of Upsolve and an early member of Segment’s Startup Program. He shares his story of turning his fledgling bankruptcy non-profit into a data-backed organization built to scale.

“I’m too broke to file for bankruptcy” is a phrase that should never be said. Yet nearly 20 million Americans find themselves in this position every year. Our organization, Upsolve, is on a mission to help low-income Americans in financial distress file Chapter 7 bankruptcy at no cost. We do this by combining the power of technology with the expertise of attorneys. Up through March 2019, we’ve helped hundreds of families clear over $37,000,000 of debt. But there are 20 million more who could use our help.

In this article, I’ll share the story of our organization’s Y Combinator W19 experience and how we found a pathway to scale our nonprofit. With Segment’s advice early on, we went from running a few SQL queries a month to analyzing user behavior and reacting to it within the same day. As the solo designer and developer for Upsolve, I thought being data-driven was a luxury. I hope this shows you that no matter how small you are, it’s an absolute necessity.

I hope our story brings ideas and lessons to you as you carry on in your own journey!

I use the term radical in its original meaning—getting down to and understanding the root cause. It means facing a system that does not lend itself to your needs and devising means by which you change that system. That is easier said than done. But one of the things that has to be faced is, in the process of wanting to change that system, how much have we got to do to find out who we are, where we have come from and where we are going....

— Ella Baker

Bankruptcy: A social safety net out of reach

Upsolve is a tech nonprofit that helps low-income families file bankruptcy for free. Bankruptcy is a tool that is often sought out by businesses, but individuals can declare bankruptcy, too! Declaring bankruptcy is often the social safety net of last resort when other systems have failed, allowing individuals to completely clear away any debts that they have except student loans.

For decades, poverty in the United States has carried social stigma. Unfortunately, the reasons people find themselves in financial trouble are often outside their control. Financial ruin can happen to anyone, regardless of age, demographic, or education. The most common reasons people turn to bankruptcy are job loss, medical emergencies, predatory loans, and divorce. Those who are unable to take advantage of bankruptcy are at high risk of falling into a cycle of poverty, homelessness, and hunger. It has also been shown that their life expectancy is several years shorter.

Despite the critical benefits of bankruptcy, only 2% of people who could benefit from filing for bankruptcy actually make it through this process. That’s because people imagine bankruptcy requires a lawyer, which can cost more than $1,000. If you have $30 in your pocket, and you’re living out of your car, this is not an option. But when you’re served a letter saying you will be sued if you don’t pay your debts, filing on your own becomes the only option.

Doing this process by yourself is intimidating. The first challenge is figuring out whether Chapter 7 bankruptcy makes sense, which forms are relevant, and how to fill them out correctly. The second is finding the money and a working debit card to pay for two required online courses. The third is printing out hundreds of pages of documents, another big cost and time sink that might require traveling to find a printer. Finally, you’re expected to find time during a work week to make a multi-hour drive to the closest court. If you don’t have a car, you must borrow one, get a ride from a friend, or take public transportation. If you have kids, you must arrange for someone to take care of them.

In other words, good luck, and pray you didn’t make any errors. If you did, the three most likely outcomes are:

  1. Your case is dismissed and closed, meaning you wasted an incredible amount of time and several hundred dollars (which you likely borrowed).

  2. You’re asked to make corrections, dragging out the process several more weeks.

  3. You receive a discharge, but you assigned exemption laws incorrectly when listing your assets, so some of them are seized.

Exemption laws are state-by-state guidelines for which assets can and cannot be taken. To give an example of the complexity: a person in Missouri trying to protect their wedding ring has to understand exemption law Mo. Rev. Stat. § 513.430 1.(2), exemption law Mo. Rev. Stat. § 513.430 1.(3), and possibly federal exemption law 11 U.S.C. § 522(d)(4), all of which have their own caveats and limits for protecting assets.

This has to be done for every asset a person owns, from clothes to cars to retirement funds. Mislabel one and you can potentially lose it.
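This kind of rule lookup is exactly what software is good at. Here’s a toy sketch of how one such rule might be encoded; the dollar limits below are placeholders for illustration, not the actual statutory amounts:

```typescript
// Toy model of exemption rules. Dollar limits are PLACEHOLDERS,
// not the real statutory amounts.

interface ExemptionRule {
  citation: string;
  appliesTo: (assetType: string) => boolean;
  limit: number; // maximum protected value, in dollars (placeholder)
}

const missouriWeddingRing: ExemptionRule = {
  citation: "Mo. Rev. Stat. § 513.430 1.(2)",
  appliesTo: (assetType) => assetType === "wedding ring",
  limit: 1500, // placeholder value
};

interface Asset {
  type: string;
  value: number;
}

// Which rules could protect this asset, and how much is left unprotected?
function checkAsset(asset: Asset, rules: ExemptionRule[]) {
  const applicable = rules.filter((rule) => rule.appliesTo(asset.type));
  const bestLimit = Math.max(0, ...applicable.map((rule) => rule.limit));
  return {
    citations: applicable.map((rule) => rule.citation),
    amountAtRisk: Math.max(0, asset.value - bestLimit),
  };
}

// A $2,000 ring against a $1,500 (placeholder) limit leaves $500 at risk.
console.log(checkAsset({ type: "wedding ring", value: 2000 }, [missouriWeddingRing]));
```

Encode every rule once, and the same check runs for every asset of every filer.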

What might a different future look like?

If the bankruptcy code is made up of a bunch of rules, couldn’t software make this easier? Couldn’t software tell you if you’re a good fit? Couldn’t software just tell you what forms are relevant? Couldn’t software automatically get your course and court fees waived? Couldn’t software tell you about relevant updates so you didn’t have to check the mail?

Yes, it can! And that’s what we did to give people back their access to this important social safety net. Up through March 2019, we’ve cleared $37,000,000+ of debt for hundreds of families.

The path to this future wasn’t clear 

In December, we had received only $559 in donations from users who filed as a thank-you, and the grant money in our bank account would only get us 9 more months down the line. We really didn’t want to see the headline: “Poverty-fighting nonprofit shuts its doors.”

Additionally, helping every person who could benefit from Chapter 7 would cost $100 million ($5 fixed cost per case times the 20 million people in the United States who could benefit from filing for bankruptcy). No foundation could support that sort of cost in the long run.

As we grappled with how to give 20 million families access to bankruptcy, we looked to a place where growth and scale were the absolute goal. Enter Y Combinator. We took a leap of faith and put in our application, and before we knew it we were sitting in a room with the YC batch. The expectation: a nonprofit should be as effective and ambitious as every other organization they took in.

YC’s challenge: Become sustainable

Y Combinator was a tough environment, not because we had to work harder, but because all our norms would be challenged. As much as we loved to create a better and better product to help people through bankruptcy, the financial self-sustainability of the organization needed attention. Our second week, the partners drove it home for us with a startling request:

Partners: Become break-even by demo day.

Us: But regulations don’t allow us to sell the service, our users have no money and deserve it for free, volume is so low …

Partners: Airbnb did it. Can you earn $500 by next group office hours?

Us: *Pause*…Yes

The phone call ended, and we just started firing off ideas. We knew we couldn’t finish enough cases by office hours to get more than $500 in donations.

Flying blind on the path to growth 

Before even considering revenue, we were focused on how we could do 10% more case filings each week. SEO-minded content was the strategy, but we couldn’t really tell how well or poorly we were doing. The only indicator we had was the number of accounts created. We didn’t know how to set up and measure our goals, and we didn’t know how many people were dropping off in the funnel, or where! We needed these stats to know where to spend time improving the product if we wanted any chance of meeting expectations at the next group office hours.

With all the other fires raging, measuring the flow of our product kept getting put on the back burner. I had given teammates BI tools and SQL queries, but they were still uncomfortable doing their own research. We had a volunteer try to centralize BI and logs by transitioning us to the ELK stack (Elasticsearch, Logstash, Kibana), but it broke down almost daily, and the volunteer had since disappeared.

I felt like I was floundering. We were all flying blind, and my increasingly crazy-looking SQL queries were all that gave us insight into what was happening. So when it came time to decide which office hours to attend on the day of YC’s growth workshop, we knew we needed guidance on our analytics.

“Stop doing that!”

As a nonprofit, we were trying to save every penny we could. With AWS credits out the wazoo, and knowing that big tech companies ran ELK stacks, it seemed worthwhile to invest in the future-proof solution. But when you’re the only dev, it’s hard to evaluate whether the time you’re spending far exceeds what some alternative would cost.

That Saturday, my team and I attended the YC Growth Bootcamp. We were excited by a lot of the sessions, and I was especially amped to learn about analytics. Segment’s co-founder Ilya Volodarsky gave a presentation that helped us understand what was possible if we understood our customers, and how a certain set of tools and practices would keep working as we scaled. I still remember the slide showing goal tracking and behavior flows in Google Analytics and being amazed that was a feature. Afterwards, we signed up for an analytics office hours session with Ilya and dug deep into our stack. It was clear it was time to ditch the ELK stack.

A few hundred dollars on helpful tools could be the difference between a growing and a failing business, and if we couldn’t pay for those tools, we had bigger problems. It made sense to go all in and invest for YC. Ilya gave us a set of recommendations that we implemented that weekend, and we were off! My co-founders were no longer distracting me with complex questions that required custom SQL queries, we finally had visibility into user behavior and parts of the funnel we had been blind to, and we had peace of mind that the tools in place were all we’d need to get to the next level. (A sketch of what that instrumentation looked like follows the stack lists below.)

Ready to go!

Data/analytics stack before the bootcamp:

Only I could access: ELK stack (broke every day), Google Analytics (misconfigured), Postico (a PostgreSQL client)

Stack after the bootcamp:

Teamwide access: Amplitude, ChartIO, CustomerIO, Fullstory, Google Analytics, Google Search Console, Segment

Stack we use today:

Teamwide access: Amplitude, ChartIO, CustomerIO, Fullstory, Google Analytics, Google Search Console, Segment
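To give a flavor of what this stack unlocked, here is a rough sketch of the kind of funnel instrumentation analytics.js makes possible. The event names and traits are illustrative, not our actual schema:

```typescript
// Illustrative funnel instrumentation with analytics.js.
// Assumes the Segment snippet has already loaded analytics.js onto the page;
// event names and traits are hypothetical.
declare const analytics: {
  identify(userId: string, traits?: Record<string, unknown>): void;
  track(event: string, properties?: Record<string, unknown>): void;
};

// One track call per funnel step lets tools like Amplitude compute
// step-by-step drop-off without custom SQL.
function onAccountCreated(userId: string, state: string) {
  analytics.identify(userId, { state });
  analytics.track("Account Created");
}

function onScreenerCompleted(qualified: boolean) {
  analytics.track("Screener Completed", { qualified });
}

function onFormsGenerated() {
  analytics.track("Forms Generated"); // the moment of greatest user value
}
```

Each call fans out through Segment to every tool in the list above, so the whole team reads the same funnel.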

12 Days, $500 Left — Business Model #1

The first and obvious way to earn $500 was to improve our donation flow. Every person we helped cost us $5, so helping all 20 million would cost $100,000,000. We had been asking users to donate after we’d helped them file, because we do not charge people for our service. This is called a “Donate What’s Fair” model.

The three levers we had for trying to move this 10x were to increase how many people filed bankruptcy, increase how much we asked for, and change where in our flow we asked for a donation.

With 12 days to make $500, we couldn’t get 10x the people to the filing step. We finally had visibility into our product funnel, and all the drop-off happened in the information-collection phase, which was necessary and could only be marginally improved.

So we looked at how we could increase the amount given per user by changing the design in two ways: 1) ask for a donation when giving users their paperwork, the moment they experience the greatest value, and 2) be more explicit about our costs in helping people and show future impact. With these modifications, we were seeing above $5 in donation revenue per user. Here are two iterations.

V2 - Increase Visibility

V3 - Explain Where The Money Goes

We were amazed that this worked. We now broke even on every case we did, making it feasible in the long run to help millions of families! But with that small a margin, we would have had to do 360,000 cases in the next 9 months to survive, and by this point we had done 400. We still had only 9 months to live.

6 Days, $164 Left — Business Model #2

Up until this point we had received $136 in donations and another $200 from revenue experiments Rohan and Jonathan had run that, in the end, cost more than they were worth. With 6 days left, things weren’t looking good.

In between meetings, writing code, and going to events, we kept our metrics tab up.

We were marinating in the numbers over those days when Rohan had a realization that would change the trajectory of Upsolve forever. One orange line on our dashboard represented the dozens of people coming to our service whom we couldn’t help, while attorneys were spending a ton of money on Google ads just to get users to their websites. Would our high-income visitors prefer that we put them in touch with an attorney? Would attorneys be willing to pay for that introduction? With a few phone calls, spreadsheets, and experiments we realized both answers were yes, and we partnered with LegalZoom.

$500 and a path to long-term impact

At first, we just posted phone numbers from the screener to our Slack and gave folks a call to validate interest. Then we put up a button that didn’t do anything, just to see if people would click it, and called up attorneys ourselves to match them with the people who clicked. Click rates were incredibly high with our warm, trusting copy, showing us there was promise.

When we formalized an agreement with a third party, they gave us a phone number that, if users called it, would double our referral revenue. Making this CTA a “call 1-800...” and watching conversion drop to 2% made us internalize just how toxic phone numbers were to our community. So much so that we removed the phone number question from our screener for qualified user sign-ups.

After a few more iterations, we arrived at an onboarding flow with meaningful conversion. The numbers showed us that all we needed was to 2-3x our website traffic and we could become self-sustaining.

By the time of group office hours, we had not only met a goal we thought was wild a week earlier, we had a formula for becoming a self-sustaining nonprofit. The partners and everyone in our group gave us a round of applause and were excited to see us reach self-sustainability.

What is this feeling?

This was a weird transition moment. Up until this point, I had viewed all my actions through the lens of making something people love. Now we could frame our work within a formula that delivered on both revenue and impact. Was this what it feels like to have “product-market fit”? I guess so.

When data and charts were constantly available to everyone, we started to act differently. With the driving goal so clear, we were able to operate more efficiently and autonomously. With this data washing over us so often, we also started to understand our users in non-obvious ways.

But the biggest change, in my mind, was how we started to evaluate each other’s priorities against our own. First, we judged how our priorities aligned with organizational goals. Second, we compared our own needs against others’ in a much clearer way. We all felt the pains of growth, but when we began to see a big uptick in sign-ups, we knew Tina was going to get hit hard with reviews. We could all quickly identify and agree on where our limited development time needed to be spent.

$500 cleared! All hands on deck for top of funnel!

With our $500 challenge met, we looked outward with our new formula in hand and a real challenge to ourselves: become financially self-sustaining by the end of YC. The feedback from the partners at this point was to move on to the top of the funnel. Our conversion rate was great, and for the amount of work we were putting in, another 10x in conversion didn’t seem possible.

With SEO as the strategy, we went on a feature-building spree.

We restructured our website to be locality based and improved internal linking.

Then we placed CTAs everywhere.

Then we boosted our page speed, and more, until we started to see diminishing returns on our programmatic and technical SEO efforts, and finally saw a bit of impact from an algorithm change focused on financial-information websites.

YC in Data

While words are great, nothing summarizes our work better than two charts.

It’s easy to see how we went from getting data in, to finding a business model, to improving conversions, and finally to adding in the top of the funnel. The result was 60% self-sustainability by demo day of W19.

How we change

This experience felt like something Lewis and Clark would have gone through. To get to the coast, they had to constantly evaluate where they were and how to go forward. You can only see so far, so you can only choose the best path from where you stand.

Segment was our version of Lewis & Clark’s celestial navigation tools. The ease of use and data it provided allowed us to evaluate our paths forward, helping us see where the 10x opportunity could lie. We weren’t optimizing our product, we were finding ways to push ourselves to meet the challenge of helping millions of families in America. 

Thank you!

Thank you for reading. Our journey has had many peaks and valleys, but the past few months were very special for us, and there are many people along the way that I want to note as being part of the journey:

Thank you to my Upsolve family, which I’ve been honored to be a part of, and which learned, grew, and challenged itself: Jonathan, Holly, KT, Nicole, Rohan, and Tina.

Thank you to our YC family that helped us grow every week: Kevin, Michael, and Tim.

Thank you to the Segment family that gave us time and space and showed us what a great company looks like: Calvin, Courtney, Ilya, Kerianne, Leah, and Peter.

And finally, thank you to the Ohlone people and their descendants, who lent us the land our industry has built upon for decades and launched world-changing ideas from.

If you or someone you know is in financial distress, send them to our website to learn if they should file for bankruptcy.

If you're excited about Upsolve's mission, we're looking for amazing engineers!

If you’re interested in using data to help you know where to focus during the overwhelming early stage, check out the Segment Startup Program and sign up for its bi-weekly "Analytics for Startups" office hours.

Ian Blair on May 13th 2019

This post is a guest submission by one of our customers at BuildFire for Segment’s Chain Letter blog series. The Chain Letter series profiles clever uses of Segment Connections partner tools that, when chained together, lead to some pretty advanced programmatic models, custom messaging strategies, and more. Thanks to BuildFire and Proof for sharing their story!

—Segment


From small businesses to Fortune 500 leaders, the decision to develop a mobile application is an arduous and often confusing one for any company. Building an app can be time-consuming and expensive, and it requires a ton of ongoing maintenance. Plus, the world of app development is extremely competitive, with thousands of agencies and independent developers competing for clients. That leads to a lot of noise around what can reasonably be produced within your budget and timeline.

My company, BuildFire, has carved out a niche in the space by creating a streamlined software platform to help our customers build beautiful apps quickly and affordably. Over the past 5 years, we’ve developed 10,000+ apps for over 10 million users across a variety of industries.

As the CEO and marketing leader of the business, I am tasked with acquiring these high-value customers in the most cost-effective way possible. We’ve had great success with our existing funnel, but we’re always looking for new ways to improve our signups and make the on-site experience more relevant for visitors.

Personalizing every customer’s experience

I frequently deploy A/B tests and other CRO experiments, but personalization at scale (while maintaining proper analytics and tracking) was never feasible since we lacked a dedicated engineer on our growth team. So we set out on a search for a personalization partner to help identify who our visitors are and improve their on-site session. After studying the market and seeing what solutions existed, we partnered as an Early Access customer of Proof Experiences to tackle personalization in our signup funnel.

We chose Proof Experiences in part because it was a new Segment integration. We use Segment to capture event data on our site, store enriched first-party data, and create a data infrastructure that caters to customers in every interaction they have with our brand. Using Segment has been critical to our growth strategy and has given us the footing to launch many experiments, so finding a personalization partner that could directly integrate with Segment was a must-have for our team.

Proof Experiences is a B2B personalization tool that allows us to create audiences for different customer segments and then deploy custom website experiences for them. We use it to personalize headlines, swap out testimonials, autofill form fields, and much more. It easily hooks into our favorite A/B testing (Google Optimize) and analytics tools (Amplitude) through Segment. Experiences also offers the ease of use and point-and-click visual editing I’m used to in other landing page and website builders.

The Proof Experiences visual editor allows you to click and edit live pages

Integrating Segment and Proof Experiences

Honestly, we were skeptical about the results we could get from personalization, but we wanted to give it a thorough shot. Proof Experiences allows us to collect data about our on-site visitors to use in audience creation. Then, we can enrich the contact with data from Clearbit or from our own data in Segment. Finally, we can launch custom audiences and deploy personalized experiences quickly and easily. We use their platform to bucket our visitors by industry (E-commerce, Education, Non-Profit, etc.) and then use that segmentation as a starting point to deliver more relevant content to our visitors. In the Experiences editor, we can visually swap out headlines, images, value props, CTAs, testimonials, and other on-page elements — without having to launch new landing pages.

Plus, we can conditionally hide live chat, add social proof to pages, and prefill forms from the Experiences platform. It’s powerful and it can all be done without having to get my engineering team involved—a huge factor in our decision to personalize in the first place.

And since Proof provides a deep, direct integration with Segment, setting up the connection was easy. In Proof Experiences, you simply generate an API key and name it Segment.

Then, you head to the Proof Experiences Destination in the Segment catalog and enable it. You paste your API key into the configuration window and click Save.

And voila! You’re all set up to send data from Segment into Proof and have data flow from Proof into Segment.
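Once the Destination is enabled, the Segment calls you already make are what feed Proof’s audience builder. As a rough sketch (the trait and event names here are illustrative, not our actual schema):

```typescript
// Illustrative analytics.js calls; trait and event names are hypothetical.
// Assumes the Segment snippet has loaded analytics.js onto the page.
declare const analytics: {
  identify(userId: string, traits?: Record<string, unknown>): void;
  track(event: string, properties?: Record<string, unknown>): void;
};

// An identify call carrying an industry trait gives Proof what it needs
// to bucket this visitor into, say, an "E-commerce" audience.
analytics.identify("user_123", {
  email: "jane@example.com",
  industry: "E-commerce",
});

// Track calls flow to Proof (and every other enabled destination) as well.
analytics.track("Signup Step Completed", { step: 1 });
```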

Personalizing our signup flow to increase MQLs by 46%

When we first started working with the Proof Experiences team, we looked through our funnel together and identified the metrics we cared most about improving. Ultimately, as a growth team, we’re measured by the number of customers we acquire and the revenue those customers add to the business. For that reason, we decided to target an increased number of MQLs, a key leading indicator of our revenue growth goals.

To increase MQLs, the Proof Experiences team recommended deploying several “playbooks” to improve the current performance of our signup flow.

BuildFire homepage

Our first major playbook focused on including live-updating social proof on our homepage and signup flow. By placing in-line social proof under next-step CTAs, we were able to increase the percentage of visitors continuing to the next page. By mentioning the number of customers who signed up in the last month (8664 in the screenshots above and below), we addressed a common customer objection and provided data that incentivized our visitors to continue to the next step.

The second playbook focused on using email addresses to autofill demographic and firmographic data in the signup flow. Before using Experiences, we collected emails in a form field on the last page of the signup flow. To personalize the rest of the flow, we moved the email field to the first step, allowing Proof Experiences to call Clearbit’s API to find out a person’s identity if the data wasn’t already stored in our Segment warehouse. If it was, we initiated an Identify call to pull in other fields to personalize the following 3 pages of our flow.
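For the curious, here is a rough sketch of that email-first flow. The enrichment helper is hypothetical; in practice it would call Clearbit’s enrichment API or look up traits already stored in the Segment warehouse:

```typescript
// Sketch of the email-first personalization flow; helper names are hypothetical.
declare const analytics: {
  identify(userId: string, traits?: Record<string, unknown>): void;
};

interface Firmographics {
  company?: string;
  industry?: string;
  role?: string;
}

// Hypothetical enrichment endpoint standing in for a Clearbit lookup
// or a query against traits already stored in the warehouse.
async function enrich(email: string): Promise<Firmographics> {
  const res = await fetch(`/api/enrich?email=${encodeURIComponent(email)}`);
  return res.ok ? res.json() : {};
}

// Step one of the signup flow: capture the email, enrich, identify.
async function onEmailSubmitted(email: string) {
  const traits = await enrich(email);
  // The identify call makes these traits available to Proof Experiences,
  // which can then personalize the remaining pages of the flow.
  analytics.identify(email, { email, ...traits });
}
```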

Finally, we’ve found that we are much more likely to close a deal if we can get a customer on the phone quickly after they sign up. After registrants successfully complete our signup flow, we push them to schedule a call with our Sales team. To humanize the messaging on this page and increase the likelihood of a call, we used Proof Experiences to adjust the headline to match a visitor’s responses to earlier questions.

Rather than having a one-size-fits-all headline, we used personalization to adjust it for each audience bucket. Since the visitor indicated they were interested in building an app for their business, the headline in the screenshot below adjusted to indicate the meeting is with a Corporate App consultant.
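Conceptually, the personalization rule is just a mapping from audience bucket to copy, along these lines (the buckets and headlines are made up for illustration):

```typescript
// Illustrative mapping from audience bucket to headline copy.
const headlines: Record<string, string> = {
  business: "Schedule a call with a Corporate App consultant",
  ecommerce: "Schedule a call with an E-commerce App consultant",
  default: "Schedule a call with an App consultant",
};

function headlineFor(bucket: string): string {
  return headlines[bucket] ?? headlines.default;
}
```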

We’re an extremely data-driven company, and we watched our metrics closely as we launched this experiment with Proof. We A/B tested our initial personalization experiment, and we were able to conclude with 95% significance that using Proof Experiences and Segment in our signup flow increased our MQLs by 46%. We were blown away by these results.

Based on our data from this experiment, we are working to deploy even more ways to use the Segment and Proof Experiences integration to personalize our site. It’s been a great way to create a more human website for our visitors while improving our most important growth metrics.
