Today, we announced the availability of our Config API for developers to programmatically provision, audit, and maintain the Sources, Destinations, and Tracking Plans in their Segment workspaces. This is one step forward in Segment’s greater strategy to transition from API-driven to API-first development and become infinitely interoperable with companies’ internal infrastructure.
Our shift reflects a broader change over the past 30 years in how technology has reshaped where and how companies create value. In the 80s, most industries were horizontally integrated, and few companies could afford to interact directly with customers. They created competitive advantage through operations and logistics and relied on additional layers of the value chain to reach customers. Software has since made it possible to deliver goods and services more efficiently all the way to end consumers. As a result, today’s companies crave APIs that are extensible and responsive to their modular infrastructure and that enable them to differentiate on customer experience for the first time.
In this post, we’re excited to share our motivations for becoming an API-first company and the historical context for why APIs are eating the software that is eating the world.
Identifying where companies create value
So why go API-first? Because in every industry, the value chain is transforming, and APIs are the only way to keep up.
The idea of a value chain isn’t new. Businesses have been using the framework since Michael Porter of HBS coined the term in 1985. He decomposed businesses into their various functions and arranged those functions as a pipeline, separating the “primary” activities of a firm from “supporting” ones. Primary activities are how you create and deliver value to the market, and supporting activities are those that, well, support these endeavors.
The value chain of businesses in 1985
Thinking of a firm or business unit as a value chain is helpful for understanding where a firm has or can create a meaningful competitive advantage. In other words, it’s a system for determining where to double down on building unique, differentiated value and where to outsource to create cost advantages.
Businesses themselves are only one link in a broader market or industry value system: the outputs of a firm’s value pipeline will subsequently pass through additional links in that chain, such as distributors and/or retailers, before they’re purchased by “end user” customers.
The value system from supplier to end user
Before the internet — and still today in heavily industrialized or regulated industries — vertically integrating your business to own the end customer experience incurred high marginal costs and was prohibitively expensive. For consumer goods or healthcare, conventional wisdom holds that this is largely still true, though companies like Dollar Shave Club or Spruce Health might beg to differ!
The skills and competencies needed to differentiate in retail are different from those in distribution, which in turn differ from those in manufacturing, and so on. By organizing around these logistical steps, companies became horizontally focused and grew distant from their true end customers. All too often, our everyday customer experiences still reflect this!
The critical path in a pipeline business
Such businesses, links in a linear value system from raw material to a real-world product in the hands of customers, might best be described as “pipeline businesses.” For these pipeline businesses, the links in their chain where they could best differentiate — where their moats were widest — were inbound logistics (sourcing inputs), operations, and outbound logistics (delivering outputs). Together these comprised the “critical path,” or chain of primary activities, that created value for a pipeline business.
Porter was careful to put customer-facing functions, including sales, marketing, and customer support, inside what he called primary activities. However, only very few large companies, and generally only the luxury brands affordable to the few — think Nordstrom, Mercedes, Four Seasons, or American Express — actually differentiated on these dimensions. For most large companies, these customer-facing activities were better described as secondary activities, and they expanded their profit pools by viewing them as cost centers and outsourcing or deferring to further specialized firms. (Hello, Dunder Mifflin).
But when the internet happened, the critical path was reshaped forever.
Digital reformation: software enters the value chain
When software first emerged as a viable business tool, most enterprises considered the technology an opportunity to do what they already did more efficiently. Hence the inclusion of “technology development” as a supporting activity in the original value chain composition.
As vendors popped up to offer software products to help support these value chain reformations, pipeline companies were most open to buying applications that could streamline their secondary activities like sales, marketing, and support. These were less risky, and most of the direct investment in building technology was thought to be better allocated in further differentiating the existing primary activities in the critical path. Because the software buyers were less invested in results — these were secondary activities, after all — they had low expectations for app usability.
The B2B vendors got away with long, onerous implementations and forced their customers to adapt how they work to the vendor’s way of doing things. They charged extra for the services needed to extract any value from their software. Even though APIs made their software easier to work with and integrate, these vendors treated APIs as a “nice-to-have.” Or they charged extra for API access to capture more of the IT budget.
Platforms over pipelines: software eats the value chain
But today, software is no longer viewed only as a tool to optimize existing things; it’s combinatorially interconnected, and it permeates everything. In this networked world, customer experience is the only true competitive advantage.
As the marginal cost of customer interactions trends to zero, companies can now afford to reach large audiences at scale and integrate their value proposition around customer experience. And in order to provide excellent customer experiences, what we used to think of as secondary activities are better framed as belonging right in the critical path through integration.
The predominant model of how businesses are organized shifts from “Pipeline” to “Platform,” and the mental model of a request/response lifecycle becomes more useful than that of a value chain.
In consumer-facing businesses, the embodiment of the request/response model is an omnipresent “mobile, on demand” company like Uber or Instacart.
In B2B, it’s an API-first one like AWS, Stripe, Plaid, or Twilio.
These companies have digitized and vertically integrated every link of their value chain. They have slick websites and apps — on every platform — on the inbound side, and free, two-day shipping with no-worries returns on the outbound side.
As inbound and outbound logistics “thin” into experiences increasingly mediated via HTTP requests from mobile phones, tablets, laptops, or servers, operations become everything behind those applications, and APIs are what make those experiences effective, relevant, worthwhile, and endearing. Request/response becomes the new pipeline.
The new critical path: Customer experience is the new logistics, and rapid learning, iteration, and integration are the new operations.
For consumer companies to differentiate on customer experience, they have to integrate their sales, marketing, and customer support functions — links that were once thought of as secondary. These customer-facing departments and customer-facing digital experiences should converge on a shared, ever-updating understanding of who their customer is to tailor their experiences accordingly. Moreover, companies must operationalize the learnings and insights from these interactions to contextualize and tailor subsequent experiences.
For firms that do this right, everything from content, to product recommendations, to promotions should be based on a real-time, integrated understanding of the factors that drive great customer experiences. This process of self-tuning requires both indexing massive amounts of data and the infrastructure to iterate, optimize, and personalize on the basis of it.
Our humble revision of the Value Chain model for 2018 — the company as a request/response lifecycle
While this model of a request/response firm may not look surprising to platforms, aggregators, digital-native retailers, or API-driven middleware in B2B, the stalwart companies that drive the economy are catching on. And as the modern enterprise looks more like these request/response firms every day, the nature of enterprise software is changing with it to fit the model.
Streamlining the critical path: the emergence of API-first for a request/response world
As software became networked, and those networks hit a critical density in the 2000s, technology shifted the value chain composition again. After adopting new technology in secondary business units, consumer companies realized that software could improve processes and margins in their primary focus areas as well. At this point, they started to introduce technology into their critical paths.
This is where the first B2B API-first companies emerged. They turned the “pipeline” model on its head by removing the heft, ceremony, and friction associated with their own critical path. They optimized this experience with software, then productized the software itself. As a result, they helped B2C companies outsource micro-components of their value chain and enabled these companies to enter into new primary focus areas.
The API-first companies’ entire end-to-end value proposition is integrated between the lifecycle of an HTTP request and response. Need to process a payment? Just make a request to Stripe, and by the time they respond (a few hundred milliseconds later), they’ve handled a ton of complexity under the hood to issue the charge. Need to send a text to your customer? One request to Twilio does it.
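To make that concrete, here is a sketch of the “process a payment” interaction as a single request/response pair. The shape loosely mirrors Stripe’s public charges endpoint, but treat the exact paths, fields, and key values here as illustrative, not authoritative.

```javascript
// Sketch: a whole payment operation collapses into one HTTP request.
// Endpoint and field names are modeled on Stripe's charges API, but
// this is an illustrative assumption, not their documented contract.
function buildChargeRequest(amountCents, currency, source, apiKey) {
  // Stripe-style REST APIs expect a form-encoded body
  const body = new URLSearchParams({
    amount: String(amountCents),
    currency,
    source,
  }).toString();
  return {
    method: 'POST',
    url: 'https://api.stripe.com/v1/charges',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body,
  };
}

// Hand the result to fetch() or an HTTP client; a few hundred
// milliseconds later, the response tells you whether the charge stuck.
const req = buildChargeRequest(2000, 'usd', 'tok_visa', 'sk_test_example');
```

Everything the provider does behind that URL, from card networks to fraud checks, is invisible to the caller: that is the point of the model.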
Companies like Stripe and Twilio set themselves apart not only by the sheer amount of operative complexity they’re able to put behind an API, but because of how elegant, simple, and downright pleasant their APIs are to use for developers. In doing so, they give developers literal superpowers.
As these companies became the de facto mechanism for accomplishing these operative tasks, they’ve aggregated happy customers along the way. What started as humble request/response companies has morphed into juggernaut platforms that keep expanding the scope of their missions and offerings. Before we knew it, “payment processing” became “empowering global commerce,” and “send an SMS” became “infrastructure for better communication.”
Reducing the cost of integrating these functions via APIs propelled the creation of countless startups with lower barriers to entry.
Building B2B software in a request/response world
For B2B companies selling into enterprises that increasingly embody the request/response model, the key is modularity: recognizing that you’re only one part of a much greater whole.
IT is an increasingly embedded function driving interconnection and integration. Companies and their partners — be they base infrastructure providers like AWS and GCP, advertising platforms like Facebook and Google Ads, or the smartest players in the SaaS space — are embracing interoperability through common infrastructure, APIs, and technical co-investment.
Rather than view the software they buy as end-to-end solutions that they’re going to train their teams up on, these enterprises are shifting to a “build and buy” model of private and public networked applications, where security and privacy are necessarily viewed as a shared responsibility.
The components of their infrastructure that they do choose to buy are part of a broader, sprawling network composed of on-prem deployments, as well as private, public, and third-party cloud services. As a result, they emphasize the need for data portability and the ability to bring a new tool “into the fold” of their existing governance and change control policies and procedures. In fact, it’s generally preferred that the tool acquiesce to those existing procedures rather than force the team to adapt their procedures to the tool.
The worst thing you can tell your customer is that they should conform to your opinions about how to do something.
Sure, you built a beautiful user experience atop the data in your SaaS tool. But there’s an edge case you didn’t think of. And without an API, your customers have no recourse. With one, they can channel their needs into an opportunity to invest further in your ecosystem. More importantly, they can take “enterprise readiness” into their own hands and enact it on their own terms. In fact, I’ve been personally involved in several of our enterprise-facing initiatives, such as SSO integration with SAML IDPs and fine-grained permissions. While developing requirements for these features, far and away the most common refrain I’ve heard is, “just give me an API.”
Why is that? Amongst software developers, operations practitioners, and IT administrators alike, the concept of Infrastructure as Code (IaC) has taken hold. This means writing code to manage configurations and automate provisioning of the underlying infrastructure (servers, databases, etc.) in addition to application deployments. The reason we were so excited to adopt this practice ourselves at Segment is that IaC inserts proven software development practices, like version control, continuous testing, and small deployments, into the management lifecycle of the base infrastructure that applications run on.
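The core move in IaC can be shown with a toy model (the names and shapes below are ours, not any real tool’s): desired infrastructure is declared as plain data, and the tool’s job is to compute, review, and apply the difference against what actually exists, much like terraform’s plan/apply cycle.

```javascript
// Toy model of the IaC loop: desired state is declared as data, and a
// plan step computes what must change to reconcile reality with the
// declaration (cf. `terraform plan`). Illustrative only.
function planChanges(current, desired) {
  const plan = { create: [], destroy: [] };
  for (const name of Object.keys(desired)) {
    if (!(name in current)) plan.create.push(name); // declared but missing
  }
  for (const name of Object.keys(current)) {
    if (!(name in desired)) plan.destroy.push(name); // exists but undeclared
  }
  return plan;
}

// Because the declaration is just code, it can be version-controlled,
// code-reviewed, tested, and rolled out in small deployments.
const plan = planChanges(
  { 'web-server': {} },                      // what exists today
  { 'web-server': {}, 'metrics-agent': {} }  // what we declared
);
// plan.create → ['metrics-agent']; plan.destroy → []
```

The review step is where the software-development practices mentioned above enter the picture: a plan is a diff, and diffs are something engineering teams already know how to gate.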
In the past, “base infrastructure” had a relatively static and monolithic connotation. Today companies are deploying their application not just to “servers” or VMs in their VPCs, but to a dynamic network of cloud-agnostic container runtimes and managed “serverless” deployment targets. At the same time, they rely on a growing network of third-party, API-driven services that provide key functions such as payments, communications, shipping, identity verification, background checking, monitoring, alerting, and analytics.
At Segment, our own engineers refuse to waste time and increase our risk profile by clicking around in the AWS console, instead opting to use terraform for provisioning. They go so far as to home-roll applications, like specs for peering into, and station agent for querying, our ECS clusters. None of these workflows or custom applications would be possible without the ECS control plane APIs.
And it goes beyond AWS. We want to make it functionally impossible to deploy a service that doesn’t have metrics and monitoring. To do this, we threw together a terraform provider against the Datadog API and codified our baseline alerting thresholds right into our declarative service definitions.
Now, we’re offering that same proposition to our customers through our Config API for provisioning integrations, workspaces, and configuring tracking plans. We’re excited to see a terraform provider pop up. (And, we have it on good authority the community is already working on it.) Using the Config API and terraform, customers can codify and automate their pre-configured integration settings and credentials when provisioning new domains or updating tracking plans.
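As a rough sketch of the kind of call this enables, here is how a script might provision a new source in a workspace. The host, resource paths, and payload shape below are assumptions for illustration; the Config API documentation is the source of truth.

```javascript
// Hypothetical sketch of provisioning a Segment source programmatically.
// Host, paths, and payload shape are illustrative assumptions, not the
// documented contract; consult the Config API docs before relying on them.
function buildCreateSourceRequest(workspace, sourceSlug, catalogName, token) {
  return {
    method: 'POST',
    url: `https://platform.segmentapis.com/v1beta/workspaces/${workspace}/sources`,
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      source: {
        name: `workspaces/${workspace}/sources/${sourceSlug}`,
        catalog_name: catalogName,
      },
    }),
  };
}

// A CI job, or a terraform provider under the hood, can issue requests
// like this so workspace configuration lives in version control.
const req = buildCreateSourceRequest(
  'acme', 'prod-site', 'catalog/sources/javascript', 'TOKEN'
);
```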
…and that’s where we get back to Segment
Because I know what you’re thinking. Wasn’t Segment already API-first?
Well, partially. Segment, historically, has been API-driven, which is to say API-first in only a few key areas, and hopefully the models and context we explored above can help explain why!
When we first launched analytics.js, we introduced an elegant and focused API for recording events about how your customers interact with your business. So you made requests to Segment — but did you wait on a response? No! You just let us handle sending the events to your chosen integrations.
That’s because, at the time, it was an inbound link to a secondary value chain activity: “analytics.” Companies didn’t want to wait any milliseconds to hear back from Segment because we weren’t in the critical path of their value delivery. (Side note: we went to great lengths to avoid any waiting at all; all our collection libraries are entirely asynchronous and non-blocking.)
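The non-blocking pattern can be sketched in a few lines (the names here are ours; analytics.js’s internals are more involved): calls append to an in-memory queue and return immediately, and a separate flush step ships batches to the server.

```javascript
// Minimal sketch of fire-and-forget event collection: track() returns
// immediately, and a separate flush step ships queued events in a batch.
// Illustrative only; not analytics.js's actual implementation.
const queue = [];

function track(event, properties = {}) {
  queue.push({ type: 'track', event, properties, timestamp: Date.now() });
  // no response is awaited; the caller is never blocked on the network
}

function flush(send) {
  const batch = queue.splice(0, queue.length); // drain the queue
  if (batch.length > 0) send(batch); // e.g. fetch() or navigator.sendBeacon()
  return batch.length;
}

track('Order Completed', { orderId: 'o_123', total: 42.5 });
track('Signed Up', { plan: 'team' });
flush((batch) => { /* POST batch to the collection endpoint */ });
```

The caller’s hot path touches only an in-memory array; the network happens later, off the critical path, which is exactly the property the paragraph above describes.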
And while engineers loved the simplicity of our Data Collection API, the real reason they love Segment is that integrating with that API is the last analytics, marketing, sales, or support integration they ever have to do. That value proposition is what lies between our “API-driven” inbound and outbound value chain links. The operative link in Segment’s Connections Product is the act of multiplexing, translating, and routing the data customers send us to wherever those customers want.
What exploded underneath our feet when we released analytics.js was the realization that the larger the organization, the more likely it is that the person who needs to access and analyze data is different from the person who can instrument their applications to collect that data. By adopting Segment, companies decoupled customer data instrumentation from analysis and automation, disentangling “what data do we need?” from “how are we going to use it?”
In effect, Segment became the “backbone network router” in charge of packet-switching customer data inside a company’s data network.
Becoming Customer Data Infrastructure
We got this far without thinking API-first when it came to our control plane, even with all our high-minded prognostications about the end of traditional value chains! So why make the shift now?
The reason to make such a change, as ever, is strong customer pull.
Since introducing our data router, Segment has evolved substantially. Today, the original Segment Data Collection API you know and love is the inbound link in the customer data infrastructure request/response lifecycle.
With each big new product release this year, be it our GDPR functionality, Protocols, or Personas, we’ve heard emphatically from customers that they want to “drive” these features programmatically, and we’ve shipped key APIs with each to deliver on those needs.
All the while, we’ve also noticed more than a few customers — and even partners looking to develop deeper, workflow-based integrations with Segment — poking around under the hood of the private control plane APIs that drive these products.
What’s clear is that while our original, “entry-level” job to be done (analytics instrumentation) may have been a “send-it-and-forget-it” API interaction, companies have come to rely on their customer data in the critical path of delivering value through their applications, products, and experiences. Data collection has moved from fueling “secondary” links to being a first-order priority.
In fact, this thesis (and the accompanying customer pull) has driven Segment’s product portfolio expansion to help companies put clean, consented, synthesized customer data in the critical path of their customer experiences.
And this is where we bring it all together. Because it’s not just consuming the data that fits the mold for an API-first model. As our customers build and adopt applications that fit into a broader network, and they bring once-“supporting” value chain links into their critical path, they want to program the infrastructure that enables that as well.
With the APIs, our customers have built Segment change management into their SDLC workflow. They run GDPR audits of data flow through their workspace with a button click. They’re keeping their privacy policies and consent management tools up-to-date in real-time with the latest tools they are using.
It’s incredibly humbling to have customers who push the boundaries of your product and are sufficiently invested to want to integrate it more deeply and more safely into their workflows. We’re proud to be enabling that by opening up our Config API, which we welcome you to explore here.