If you’re a technology business leader and your company builds any kind of software, you’ve likely heard some buzz around microservices from your team.
They might say, “We need to break the monolith,” or “Microservices will let us really leverage the cloud,” or even, “We need to re-write our products using microservices.”
You might be left scratching your head. Why microservices? Is this microservices thing a truly revolutionary concept? Or is it more evolutionary? Will leveraging microservices dramatically cut costs? Time to market? Boost revenue?
The short answer is, “It depends.” Microservices are evolutionary, increasing in popularity over the last few years as the sophistication of cloud-based infrastructure has grown. However, microservices are not a silver bullet, and like many new technologies, they might cost you money if misapplied.
A Brief Explanation of Microservices
At the simplest level, microservices architecture means breaking a software product into separate components, each with a specific responsibility. In this way, the microservices concept is an extension of the service-oriented architecture (SOA) introduced in the late '90s. What's new with microservices is the use of containers and the ability to leverage disparate technologies more easily.
Before I talk about that, let me first talk about what seems to be turning into a four-letter word in software circles: the monolith.
“Death to the Monolith!”
If you’ve walked the halls of your software development department, you may have overheard talk about “the monolith” and its evils. The term typically refers to the traditional way of building software: the product may be broken into layers, but it lives in a single codebase and ships as a single unit. At the highest level, the architecture might look like this:
First let me say, there’s nothing inherently wrong with this idea. Teams have been successfully building software for a very long time using this approach.
However, there are a few drawbacks to the monolith:
When multiple teams work on the product, a lot of coordination is required.
The product must be released as a whole, and every new enhancement must be working at that point in time.
When there is more demand on the product, the computing environment hosting it must be scaled, regardless of whether every part of the product is in high demand.
In contrast to the monolithic approach, the microservices approach takes a different path: breaking up the product into multiple components, each acting as its own mini application. This allows building a network of cooperative services that collectively provide the necessary functionality to make the product work.
As an example, let’s say we were building the next Amazon. We might find that splitting the product into separate components makes a lot of sense.
For example, we might have:
Product Catalog and Search
Shopping Cart and Checkout
Fulfillment and Order Tracking
Each of these areas of the product is focused on one thing and may have different scaling needs based on customer usage. If we build each of them as a separate microservice, it can give us some benefits:
We could have a different development team focused on each, limiting dependencies between teams.
Each team could release their microservice when it’s ready, increasing responsiveness to customer needs.
We can scale the computing environment for each microservice independently, based on the demand of that area of the product.
It would look something like this:
In this example, I’ve shown each microservice having a user interface. This is one of many different possible design approaches. Another variation is to have the UI be its own microservice.
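To make this concrete, here is a minimal sketch of what one of these mini applications could look like, written in Python using only the standard library. The service name, route, and in-memory catalog are illustrative assumptions, not a prescribed design; a real Product Catalog and Search service would sit behind its own database and a proper web framework.

```python
# A minimal sketch of one microservice: a hypothetical Product Catalog
# service exposing a tiny HTTP API. (Illustrative only; real services
# would use a web framework and their own data store.)
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the catalog's own database.
CATALOG = {
    "sku-1": {"name": "Widget", "price": 9.99},
    "sku-2": {"name": "Gadget", "price": 19.99},
}

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Handles requests like GET /products/sku-1
        _, _, sku = self.path.partition("/products/")
        product = CATALOG.get(sku)
        body = json.dumps(product if product else {"error": "not found"})
        self.send_response(200 if product else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Each microservice runs as its own process; Shopping Cart and
    # Fulfillment would be separate services with their own code.
    HTTPServer(("", 8080), CatalogHandler).serve_forever()
```

The point is not the specific code but the shape: a small, self-contained application with one responsibility, which the other services reach over the network.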
To make this technique work and truly take advantage of its power, we need a way to effectively manage all these different microservices. If we have to do a bunch of manual work, or if each computing environment is really expensive, the value of this approach diminishes quickly.
Enter Containers
20 years ago, software typically ran on one type of computing platform, most popularly Windows on the desktop and Windows Server or Linux on the server. Client/server architectures were popular, where much of the heavy computing was performed on each user’s computer, with centralized storage and additional processing on the back end. This back-end code was typically hosted on a server, usually located in a server room somewhere in the building.
Today, that back-end code increasingly runs in the cloud instead, and a specific type of cloud service leverages containers. Backed by an extremely efficient technology (typically Docker), a container can be thought of as a super-lightweight virtual server, optimized to the point where it can be started and stopped very quickly and at a low cost per container. This allows scaling up and down based on demand, in a more flexible and effective way than with the cloud technologies that came before.
For example, if one microservice is being heavily utilized, multiple containers for that microservice can be “spun up.” Using my example above, if the Product Catalog and Search and Shopping Cart and Checkout microservices are under a high load, they might be scaled to look something like this:
In this example, Product Catalog and Search is replicated across three containers to triple the horsepower for that service, and the Shopping Cart and Checkout microservice has double the capacity since it’s replicated across two containers. This approach saves costs by scaling just the portions of the application that are under heavy load, with less computing power than otherwise would be required for a monolithic product.
To avoid a pile of manual work deploying microservices and spinning containers up and down, an orchestrator is typically used.
The most popular orchestration technology is Kubernetes, an open-source project available on all the major cloud platforms. Kubernetes handles the mundane work of deploying microservices, monitoring the usage of Docker containers, and automatically launching or shutting down containers in response to user demand.
Kubernetes can even auto-heal a microservice that might have become unresponsive, by restarting its container.
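To give a flavor of what this looks like in practice, here is a sketch of a Kubernetes Deployment manifest for a single microservice. The names and container image are hypothetical; the key line is `replicas: 3`, which asks Kubernetes to keep three containers of this service running, as in the scaled-out Product Catalog and Search example above.

```yaml
# A sketch of a Kubernetes Deployment for one microservice.
# Names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-catalog
spec:
  replicas: 3                  # three containers of this one microservice
  selector:
    matchLabels:
      app: product-catalog
  template:
    metadata:
      labels:
        app: product-catalog
    spec:
      containers:
        - name: product-catalog
          image: example.com/product-catalog:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

In practice this is often paired with a HorizontalPodAutoscaler, which adjusts the replica count automatically in response to load, and Kubernetes replaces any container that crashes or becomes unresponsive.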
Leveraging Disparate Technologies
Historically, a development team on a specific product would standardize on one server-side development technology: .NET, Java, Node.js, and so on. In one sense this kept things simple; the team didn’t have to become expert in more than one stack. In other ways it was limiting: every problem had to be solved with the same tool.
Since each microservice is its own mini application, there is no requirement that every microservice be built with the same technology. Because each solves a different type of problem, one technology may fit better than another. And today, with the breadth of services offered by the major cloud vendors, something may already exist that provides the functionality for one or more of your microservices.
As an example, one part of an application might need to analyze uploaded images and categorize them automatically by performing digital image processing and extracting data. That might be a great target for a microservice. We could write that technology ourselves, using the same development tools as the rest of the product. However, it would be dramatically simpler if we created a microservice that leveraged the Computer Vision API on Azure or Amazon Rekognition on AWS.
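As a sketch of what that microservice’s code might look like: the `detect_labels` call below uses the real Amazon Rekognition API via boto3, but the category rules and function names are hypothetical illustrations. Note how little computer-vision code we have to write ourselves.

```python
# A hedged sketch of an image-categorization microservice that delegates
# the hard computer-vision work to a cloud service. The boto3 call is
# real Amazon Rekognition API; the category rules are illustrative.

def detect_labels(image_bytes):
    """Ask Amazon Rekognition what it sees in the image."""
    import boto3  # AWS SDK; requires AWS credentials in the environment
    client = boto3.client("rekognition")
    response = client.detect_labels(Image={"Bytes": image_bytes}, MaxLabels=10)
    return [label["Name"] for label in response["Labels"]]

# Hypothetical mapping from detected labels to our catalog's categories.
CATEGORY_RULES = {
    "Shoe": "footwear",
    "Sneaker": "footwear",
    "Laptop": "electronics",
    "Phone": "electronics",
}

def categorize(labels):
    """Map raw vision labels onto product categories; the only logic we own."""
    categories = {CATEGORY_RULES[l] for l in labels if l in CATEGORY_RULES}
    return sorted(categories) or ["uncategorized"]
```

The cloud service does the digital image processing; our microservice is reduced to a thin layer of business rules around it.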
When to Avoid Microservices
As exciting as microservices are, they don’t come without a cost. Each time a new microservice is added to the product, that means an extra communication pathway between it and other microservices. Since each is a mini application responsible for one part of the overall application, to accomplish the big-picture goal microservices need to talk to one another to get things done.
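A quick back-of-the-envelope calculation shows how fast those pathways can multiply: if any of n services may need to talk to any other, the number of potential connections grows roughly quadratically.

```python
# Potential service-to-service pathways among n microservices,
# assuming any pair might need to communicate: n * (n - 1) / 2.
def potential_pathways(n_services: int) -> int:
    return n_services * (n_services - 1) // 2

for n in (3, 5, 10):
    print(n, "services ->", potential_pathways(n), "potential pathways")
# 3 services -> 3, 5 services -> 10, 10 services -> 45
```

Not every pair will actually talk in a real system, but each pathway that does exist is something to build, secure, monitor, and debug.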
Each microservice typically manages its own data, which means information moves from the centralized model typical of a monolith to a distributed model. Data consistency is no longer immediate, and that adds complexity.
I’ll leave you with a few words of wisdom based on our years of experience building microservice-based applications for our clients:
If it ain’t broke, don’t fix it. If you have a product that’s working well, has great performance, and is making customers happy, think carefully about whether you want to re-write it from scratch using a microservices approach. This is usually not a great idea. If there are areas of the application that aren’t working great, consider peeling those parts off into microservices, but not the whole thing.
Start simple. Some clients we’ve worked with asked us to create multiple microservices right from the start, even though just one development team would be working on the product. Instead of this approach, which increases complexity at the beginning of development, I suggest starting with a monolith. Split out microservices only when a) it would be beneficial to have another team own one major aspect of the product, b) it’s obvious a different technology would be useful, or c) one area of the application requires dramatically different scaling than the rest.
As the famous quote says, “With great power comes great responsibility.” Remember that with each microservice you add, complexity is increasing. There needs to be a solid business reason to increase complexity, so think carefully. It’s possible your first foray into microservices might have just two or three microservices. Don’t go crazy out of the gate. It’s almost always easier to add complexity than to take it away.
Microservices are an exciting development in the world of software engineering and have great promise to help you create better applications that delight your users. I wish you the best on your journey!
If you are still wondering, “why microservices?”, get in touch with us directly by filling out our Contact Us form. Our team would be more than happy to talk through it with you.