Monoliths, Microservices, and Containers. Oh my!

This blog covers the history of monoliths, which gave way to microservices and, ultimately, monoliths in containers.

Mar 6, 2023

By Ben Link


I’ve been told that one of the hottest new trends in computing is “monoliths in containers”. As a curmudgeonly old developer, I’ve watched this conversation swing wildly back and forth over the past ten to fifteen years. Kids these days - their newfangled toys are really just repackaged old toys!

On a more serious note, it’s important to look back at how we as an industry have come to this conclusion - being a student of history is always valuable, even outside of technology! Examining the events leading up to this year’s trend will give us an idea of the technological trajectory we’re on, and maybe help us avoid some potential pitfalls. So let’s talk about how monoliths, microservices, and containerization have interacted over the years, and see what we can learn from our past.

In the beginning were the Monoliths

Naturally, the industry started with monolithic architecture. Why was it natural to start there, you ask? Well… because computers were huge and expensive, so we built our applications and ran them all on the one big computer we could barely afford. It was so expensive, in fact, that we had to timeshare that computer among all sorts of applications - sometimes even among all sorts of companies - to make it worth the investment.

Since we didn’t have the luxury of dedicating computing hardware to various business functions, it was necessary to integrate all the various applications in that single environment. We built massive application systems and scaled our teams to match. Hundreds of people contributing code to a single system rolled out loads of new features, but we started to notice some problems.

First, our deployments got really complicated. Releasing new code was extremely risky, and entire teams whose only goal was to “manage the next release” sprang into existence, adding bureaucracy and complexity in hopes of minimizing the risk incurred by changing the application. We decided to mitigate the risk by reducing the number of deployments. The “monthly” or “quarterly” or even “semiannual” release event became part of the Information Systems team’s lexicon, and those events turned into wildly complicated rituals requiring all hands on deck, often for protracted periods and at uncomfortable times of day, all to minimize the effect on the business’s uptime.

Those complicated deployments made debugging a mess. Strange, unexpected (...dare I say “weird”?) interactions arose between code fragments that were developed in isolation and then joined together in “The Great Monolith” - and they often weren’t caught until the code reached production, where everything finally ran together under load and nothing was isolated any longer.

Testing processes became more brittle and less reliable at precisely the moment their importance was increasing exponentially. Companies with large data concerns struggled here in particular: their data sets had become so convoluted that the only way to provide any sort of test data in lower environments was to copy the databases down from production. That practice came back to bite us later, when those test environments became a major security risk - they were often easy places for the bad guys to find this data.

Companies on this low-frequency release cadence also found that not only were their deployments delayed, the speed of development itself slowed down. Valuable engineering time went to finding and squashing the bugs that had escaped into production, often monopolizing time that management had budgeted for new feature development. The operational needs took priority, of course - we can’t leave the system down! But that caused completion dates to slip.

Managers revolted against the slipping dates and cordoned off their engineering talent, insisting that the Operations teams figure out the production problems on their own. The Development teams’ primary focus became “ship more things,” while the Operations teams struggled to keep the systems alive even as more new features were tossed over the fence to production. The word “silo” became a common refrain when discussing the relationship of one team to another. Operations management became an exercise in blocking the release of new features in order to stop burning out the on-call technicians and hopefully let them rest once in a while.

Thank you, Mr. Moore…

While the conflict between development and operations was festering, hardware was getting faster and less expensive… and not in tiny increments, but dramatically so! 

The price shift caused our organizations to experience the technological equivalent of “suburban sprawl”. More servers that weren’t part of the original monolith began to spring up around us - ancillary systems that weren’t part of the core business goal… at least at first! These servers might be experiments in a new channel (such as the company’s first website, back in the early days of the internet) or they might perform some specialized function that wasn’t conducive to the use of the monolith’s hardware. And though they didn’t start off as critical business systems, they rapidly became so. Can you imagine a world where you could just leave your company’s website completely offline overnight, saying “we’ll fix it in the morning”? 

Operations was distinctly unhappy about this sprawl of newly-mission-critical systems - it meant cultivating a wider range of expertise to match the development teams experimenting with all these new technologies. It also meant more systems whose errors could keep Ops teams up at night.

Even as systems sprawled horizontally, they grew vertically as well. We didn’t change our old design pattern, and as these new server-based systems matured and became critical to the business, our distributed systems turned into little (and steadily growing) monoliths of their own. And, as you might expect, all the problems of managing The Monolith came along for the ride as we worked to manage our new collection of monoliths.

Since we’d only ever worked one way before, we solved the new problem by making it look just like the ones we’d already solved: we subjected these new development teams to the same old rules the monolith teams had to follow. After all, a code change is a code change, and everything should be treated equally!

Unfortunately, we’d left a key variable out of the solution: in a world where development teams are incentivized to constantly release new features and Operations teams are incentivized to constantly prevent those releases, the person left stuck in the middle is the customer.

Our delivery times were slow, and the time required to resolve defects was long. We argued, “Well, this is the way that’s worked for us since the first days of the monolith! It’s the nature of the software development business, and customers will just have to wait.” But clearly, they weren’t going to wait. Something had to change.

A revolution begins… and Microservices are born

Around 2009, the DevOps movement started with a simple request - can engineers and operations techs communicate better and help their companies win? 

This new premise led to some major discoveries - once the two organizations were in cooperation rather than diametric opposition, a great deal of the delay in the deployment process was avoidable! 

Further, new technologies arrived on the scene to automate the deployment actions, minimizing the burden on Operations. Suddenly, it was possible to deploy in a fraction of the time that it used to take!

A new buzzword entered the conversation: “microservices”. We were used to the idea of a web service call to share data between our distributed applications, or even to a public API at another company for data-sharing needs. But the code for those services was subject to the old, slow deployment rules… so we tried to maximize the value of each deployment by pushing larger and larger things all at once - a middleware-monolith of services. But there was a confounding variable here: when all your middleware lives in the middleware-monolith, you have to redeploy your ENTIRE middleware structure with every change. We had accidentally tied together ALL the systems that consumed these services with one large bottleneck!

As we examined ways to solve that bottleneck, the theory behind microservices emerged as an extremely attractive alternative. Like little Lego bricks of software, these composable units of work - each with its own specific purpose and loose coupling to the other services it needed - let us build applications in a way that made the monolith easy to break up. Better still, the same principle applied to frontend and backend systems alike. Enterprise developers dreamed of decomposing the monolithic architecture into a mesh of services, each easy to modify and deploy without affecting the parts around it.

Containers gonna contain

One of the most interesting discoveries from the microservices world was how a microservice gets assembled and deployed. Using container systems like Docker, it was possible not only to create a microservice, but to run many identical instances of that same service in the same environment.

As long as sufficient CPU and memory were available, you could run more copies of your microservice. This was a tremendous boon for operations teams - scaling a system during a traffic surge is incredibly hard to manage in a static environment, and you often have to be a bit wasteful, provisioning more CPU and memory than your system “usually” needs just so you can survive the surge.

These container systems, in conjunction with cloud compute platforms like AWS, Google Cloud, and Azure, meant that a business could scale easily and rapidly whenever it needed to - and once the surge had passed, return to normal levels, reducing infrastructure expense.
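
To make that elasticity concrete, here’s a minimal sketch using Docker Compose. The service name (orders) and image (example/orders-service) are hypothetical stand-ins - the point is that every replica is stamped from the same image, so adding or removing copies is a single command.

```yaml
# docker-compose.yml - a hypothetical microservice definition
services:
  orders:
    image: example/orders-service:1.0   # hypothetical image name
    ports:
      - "8080"   # container port only; Docker assigns host ports, so replicas don't collide
```

```sh
# Traffic surge? Run five identical copies of the service.
docker compose up -d --scale orders=5

# Surge over? Drop back down and stop paying for idle capacity.
docker compose up -d --scale orders=2
```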

Microservice ALL THE THINGS

The allure of this paradigm shift to software engineers was irresistible: we wanted to decompose all the monoliths RIGHT NOW, put them ALL into containers, and allow our teams to deliver rapidly. It’s hard to say whether we were genuinely trying to improve the environment, or just experiencing a little technological FOMO, but we nonetheless sought out permission and funding to transform our monoliths into the microservices we saw in the success stories. 

Most companies that explored the new concepts experimented with transformation on their less-critical workloads, and many had early wins. Microservices could make the lives of developers easier since the risk incurred by a change was lower! 

This freed up developers to make changes more rapidly, and everyone was thrilled by the faster speed of business. And so the developers, seeing a glimmer of hope for a more efficient way to build and deploy, were doubly excited: they wanted to apply this everywhere so that they could reap the benefits of faster and safer deployments.

Uh oh, we’ve created a monster

The next challenge came from Finance. Developers sharing this new paradigm with each other spread the idea like wildfire across their organizations. Everyone wanted in on this - ops wanted more pipeline-driven automated deployments with the ability to fail a deployment that didn’t pass inspection, the QA team wanted automated testing and reliable test harnesses in place, and even the security team showed up in the discussion, looking for ways to scan for vulnerabilities at build time.

Just about the only people who weren’t happy were the folks in Finance. Suddenly there was a massive increase in demand for modernization work… and it happened so quickly that the slow-moving annual budget planning processes hadn’t had the opportunity to plan for it! The world was changing faster than projects could be funded to keep up with it all. What’s more, the requests were becoming more and more ambitious. Was it really going to be possible to decompose the monolith entirely? 

Like any good finance team, they stopped to ask an important question: is it really beneficial? Do we NEED to apply this new concept to EVERYTHING?

The cost models were run. The estimates were produced. And the results… were mixed. Yes, there’s tremendous benefit in being able to manage systems differently. And yes, teams that can take advantage of the automation capabilities will definitely be more productive in the long run. The problem is that our monolith is SO BIG… the amount of effort to rewrite our business-critical software in this microservice style is overwhelmingly large. We simply can’t afford it. 

That leaves us in an odd situation - there is a better way, but the path from here to there is just too far for us to travel. Bloggers jumped on this early and started writing headlines like “The Death of Microservice Madness” and “Containerization is at a dead end”.

A compromise: Monoliths in a container

But many good ideas operate on a bit of a pendulum: there’s an old way of doing things, a new way is discovered, and everyone rushes to adopt it. Then the realization of overzealousness sets in, and everyone swings back toward the old way. But what if we didn’t just accept the physics of this? What if we looked for middle ground?

That’s the proposal that we’re now considering in 2023: run your monolith in a container! We’ve never doubted that we needed some modernization, but we’ve also reached the conclusion that we won’t be able to fully decompose our monoliths. And maybe the lesson is that we shouldn’t have to…

The good

Running your monoliths in containers simplifies the process of delivering code to production - you have to build a container image for your monolith to live in, but once that’s done, your Ops team will have a much easier time keeping the lights on.
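
What does that container look like? Here’s a minimal sketch, assuming a Java monolith that builds to a single JAR - the base image is a real, maintained one, but the artifact path and port are hypothetical stand-ins for whatever your build actually produces.

```dockerfile
# Dockerfile - wrapping a (hypothetical) Java monolith in a container image
FROM eclipse-temurin:17-jre                        # maintained JRE base image
COPY build/libs/monolith.jar /app/monolith.jar     # hypothetical build artifact
EXPOSE 8080                                        # the one approved entry point
ENTRYPOINT ["java", "-jar", "/app/monolith.jar"]
```

That EXPOSE line also previews the security benefit discussed below: traffic can only reach the application through the ports you deliberately publish.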

You can start managing things like your software dependency supply chain as code, where it can be reviewed easily - and new tools like GitHub’s Dependabot can even automate some of that toil.
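
As a sketch, a Dependabot configuration for the containerized monolith might look like this - the gradle ecosystem is an assumption, so swap in whatever matches your build tooling:

```yaml
# .github/dependabot.yml - have Dependabot watch the monolith's dependencies
version: 2
updates:
  - package-ecosystem: "gradle"   # assumes a Gradle build; use your ecosystem
    directory: "/"                # where the build file lives
    schedule:
      interval: "weekly"          # open upgrade PRs once a week
```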

Getting your monolith running in a container takes minimal effort, and once it’s there you can also horizontally scale your environment easily, spinning up more containers and shutting them down when demand subsides.

Your developers will have an easier time deploying to test environments while working in containers, reducing environment drift and simplifying the lives of your support teams in test as well as production.

Your security posture may improve, because you can limit access to the whole application to just the approved paths by controlling which ports your container exposes.

The bad

Loading a monolith into a container can be very memory-intensive - you can end up with really large containers to manage. While that won’t necessarily break anything, it will consume large chunks of memory when scaling, so being aware of that and planning for the total memory available is important.
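
One way to plan for that - sketched here with made-up numbers and a hypothetical image name, since the right limits depend entirely on your monolith’s footprint - is to cap each container at launch so a scaling event can’t exhaust the host:

```sh
# Hard-cap each monolith container at 4 GB, with a 2 GB soft reservation,
# so spinning up extra replicas can't starve the rest of the host.
docker run -d --memory=4g --memory-reservation=2g example/monolith:1.0
```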

Your developers still have to build the large monolithic application, and that means that you still run the risk of unforeseen code interactions that come from all those different components living in the same namespace.

Packaging the container is an additional step in your build and deployment process. While the packaged container is much easier to manage operationally, there is extra time and effort spent building and maintaining your container image library.

So what does the future hold?

Most companies (if they were starting over from scratch today) would use tools and tech that are dramatically different from their current stacks. However, very few companies have the means (or even the desire) to scorch the earth and start fresh. There will be some architectural improvements realized by breaking specific functions out of the monolith and rebuilding them as microservices, but it’s highly unlikely that we’ll see large-scale rewrites. 

Meanwhile, moving to a containerized delivery system generates big wins for Operations teams, who gain a more standardized means of managing workloads. Housing large-scale monolithic applications was never the original intent of containers, but they’re more than capable of supporting them - and while doing so adds some work to the packaging process, the benefits definitely outweigh the additional costs.

Further, moving the large monolith into a container is a good first step for developers who are new to the containerization space; as budget becomes available and architectural justifications arise, those developers will gain the wisdom to know which parts of the monolith need to break away and which are better served staying where they are.

Microservices are certainly not “dead”. Neither are monoliths. But the good news is that this isn’t an either/or proposition: we now have two tools in the toolbox, and judicious use of each will likely produce an optimal solution… at least until the next big revolutionary idea comes about!

Test drive Segment CDP today

It’s free to connect your data sources and destinations to the Segment CDP. Use one API to collect analytics data across any platform.

Get started