For some, the ideal picture of a modern application is a collection of microservices that stand alone. This perfect design isolates each service with a unique set of messages and operations.
Each service has a discrete code base, an independent release schedule, and no overlapping dependencies.
As far as I know, this platonic ideal of a system is rare, if it exists at all. It might seem ideal from an architectural perspective, but clients might not feel that way.
There’s no guarantee that an application made up of independently developed services will share a cohesive API.
Regardless of where you stand on microservices vs. SOA, services should share a common grammar, and communication between microservices is not automatically a design flaw. The fact is, most systems need to share data to some degree.
For example, in an online store, billing and authentication services need user profile data. The order entry and portfolio services in an online trading system both need market data.
Without some degree of sharing, you end up duplicating data and effort. This creates a risk of race conditions and data consistency issues.
In this post, I’ll talk about microservices communications and how letting your services talk to each other can lead to a more robust architecture. I’ll cover some of the critical advantages of creating a shared communications protocol and shared data sources.
Then, I’ll discuss the most common patterns for exchanging data between microservices, and how to make sure that letting your services talk to each other doesn’t lead to unforeseen problems.
What Is Microservices Communication?
Microservices communication means letting services talk to each other directly, rather than restricting them to exchanges with clients and external data sources.
This communication can happen over many different mechanisms, but you can group them into a few broad categories:
- Messaging: services exchange lightweight data structures, often via a message broker that manages sessions and queues.
- Shared data stores: services may not communicate directly, but draw on a common source of information.
- RESTful APIs: services exchange data over HTTP, similar to the way they communicate with clients.
Microservices Communications Benefits
What are the advantages of enabling communication between your microservices? Let’s go over a few before we delve into different mechanisms in detail.
Enterprise systems tend to use a great deal of common data. For example, in the online store mentioned above, the billing and authentication systems need to share information about clients.
But that’s only the beginning.
User profile information might also extend to order entry, website preferences, and email marketing. Inventory data is needed by at least a few different systems, too.
Each service needs a common way to represent this shared data.
Users need a unique identifier, and if one system relies on unique email addresses while another uses first name and last name, problems will eventually arise. Shared communications make it possible to specify a single source-of-truth for shared information.
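To make the identifier problem concrete, here is a minimal sketch of a shared user representation. The type and field names are hypothetical, not from any particular system; the point is that every service keys users off one stable ID rather than a mutable attribute like an email address.

```python
from dataclasses import dataclass
from uuid import UUID, uuid4

# Hypothetical shared representation of a user profile. Every service
# refers to users by the stable UUID, never by email or name, which
# can change or collide across systems.
@dataclass(frozen=True)
class UserProfile:
    user_id: UUID      # the single shared identity key
    email: str         # mutable contact detail, not an identity key
    first_name: str
    last_name: str

profile = UserProfile(user_id=uuid4(), email="ada@example.com",
                      first_name="Ada", last_name="Lovelace")
```

If the authentication service keys on email while billing keys on name, reconciling the two later is painful; agreeing on one identifier up front avoids that.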
Code Reuse and Architectural Efficiency
Shared anything usually means shared code. So it makes sense that creating mechanisms for sharing data would lead to sharing code, too.
This starts with the standardized representation of data discussed in the previous section. A common representation of shared data often means a common codebase.
Of course, one of the primary advantages of microservices is that teams can work independently, with their own tools and in a different language from other teams. So, direct code reuse might not be possible.
But that doesn’t mean shared communication mechanisms are off the table; it just means that sharing needs to move “up the stack,” into messages that could be encoded as JSON or XML.
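Here is a small sketch of what “moving up the stack” looks like in practice. The event name and fields are illustrative; what matters is that only the JSON schema is shared, so a Java producer and a Python consumer can interoperate without sharing any code.

```python
import json

# Hypothetical JSON message shared between services written in
# different languages. Only the schema is shared, not the code.
order_event = {
    "type": "order.created",
    "order_id": "A-1001",
    "user_id": "42",
    "total_cents": 1999,
}

wire = json.dumps(order_event)   # what actually crosses the network
received = json.loads(wire)      # any language can parse this back
```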
This isn’t a compromise or even a disadvantage. A shared source of data is still a valuable tool. It simplifies the architectural design by limiting access to data to a single source.
At this point, you might be asking:
- How do you share data without turning your microservices into a distributed monolith?
- What’s an effective and safe way to implement microservice communication?
Let’s take a look at a few different mechanisms.
First, we’ll go over different sharing scenarios. Depending on how you use the data, you can share it via events, feeds, or request/response mechanisms. We’ll take a look at the implications of each scenario.
Then, we’ll cover several different mechanisms for microservice communication, along with an overview of how to use them.
Before deciding how to share data, it’s essential to identify the information you’ll share and how each service will use it. What data do your services need to share? How often do you update that shared data? Is one service the primary provider of the data? Does the nature of the data suggest the creation of a dedicated facility to share it?
You need to ensure that sharing data does not result in tightly-coupled services. It’s one thing to have microservices that communicate with each other. It’s another to build a distributed monolithic service.
Microservices are often developed by different teams, and the teams need to communicate if the services are going to share data.
A system like domain-driven design can often help with defining boundaries that make sense at a business level.
There are many different ways to slice up and categorize interactions between services. Let’s break them down into three broad categories.
Data Feeds
Sometimes the shared data is a stream of updated or new records. For example:
- A stream of prices and transactions in capital markets.
- A feed of orders for an e-commerce business.
- A stream of inventory changes.
Services most often consume data feeds as a “real-time” stream of records. A stream requires clients to cache the data that they are interested in, which can be expensive.
However, a significant advantage of feeds is that the client and server are very loosely coupled. They only need to share data types and message formats.
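The two properties above can be seen in a minimal feed-consumer sketch (the feed contents and symbol names are invented for illustration): the client maintains its own cache of the records it cares about, and the only thing it shares with the producer is the record format.

```python
# Minimal sketch of a feed consumer: the client caches only the
# symbols it is interested in. This shows both the cost (a local
# cache) and the loose coupling (only the record format is shared).
interesting = {"AAPL", "MSFT"}
cache = {}

feed = [  # stand-in for a real-time price stream
    {"symbol": "AAPL", "price": 187.25},
    {"symbol": "TSLA", "price": 242.10},
    {"symbol": "MSFT", "price": 411.05},
    {"symbol": "AAPL", "price": 187.40},
]

for record in feed:
    if record["symbol"] in interesting:
        cache[record["symbol"]] = record["price"]  # keep latest only
```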
Request-Response
An alternative to a stream is request-response messaging.
When a service needs external data, it requests it from the data provider. Requesting data only at the moment it’s needed eliminates, or at least shrinks, the client-side cache. The trade-off is added latency on transactions that need shared information, though the savings from a smaller cache often outweigh that latency.
Depending on how you implement it, request-response can create a tight coupling between data clients and servers. In addition to sharing data formats, they may need to communicate directly with each other.
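The trade-off reads clearly in a sketch. The function names and data here are hypothetical stand-ins for a real RPC or HTTP call: the consumer holds no cache, but every lookup pays a round trip to the provider.

```python
# Sketch of request-response sharing: the consumer asks the provider
# at the moment it needs the data. No client-side cache, but each
# lookup costs a round trip. All names are illustrative.
PROFILE_STORE = {"42": {"email": "ada@example.com"}}  # provider's data

def get_profile(user_id: str) -> dict:
    """Stand-in for an RPC or HTTP call to the profile service."""
    profile = PROFILE_STORE.get(user_id)
    if profile is None:
        raise KeyError(f"unknown user {user_id}")
    return profile

def bill_user(user_id: str) -> str:
    # The billing service fetches profile data on demand -- no cache.
    return f"invoice sent to {get_profile(user_id)['email']}"
```

Note the coupling: `bill_user` must know how to reach `get_profile`, which is exactly the direct dependency the text warns about.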
Domain Events
Domain events are similar to a data feed, but with additional application-specific semantics.
They represent a change in application state. Some services process the events, depending on the context of the event and the role of the receiver.
For example, a sale in an e-commerce application consists of several steps, each of which may result in an event. So, you can break a sale down into steps:
- Add the item to a cart.
- Input a coupon code.
- Select a shipper.
- Submit a credit card transaction to a payment processor.
Services publish domain events to an event log. Clients retrieve the events via request/response semantics.
So, services consume them both as a data feed and via request-response messages.
Domain events deserve special note since they reflect careful planning and design of how data is shared between services.
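A toy in-memory event log makes the hybrid nature visible: producers append events (feed-like), and consumers poll from an offset they track themselves (request-response-like). The event names and helper functions are invented for illustration.

```python
# Minimal in-memory event log. Producers append (feed-like);
# consumers poll from an offset (request-response-like).
event_log: list[dict] = []

def publish(event_type: str, **payload) -> None:
    event_log.append({"type": event_type, **payload})

def read_from(offset: int) -> list[dict]:
    """A consumer asks for everything it has not yet seen."""
    return event_log[offset:]

# The steps of a sale, each producing a domain event:
publish("cart.item_added", sku="SKU-1")
publish("cart.coupon_applied", code="SAVE10")
publish("order.paid", amount_cents=1799)

new_events = read_from(1)  # a consumer that already saw the first event
```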
Mapping out how the services need to share data makes selecting the sharing mechanism easier. Now that we have an idea of how to share data, let’s take a look at three of the most common paradigms.
Messaging
Messaging is a common choice for sharing data between services.
Applications exchange messages, typically via a message broker. The broker routes messages to interested parties using topics and queues.
Some common systems are RabbitMQ, Kafka, and ActiveMQ. Most systems offer both publish-subscribe and point-to-point message semantics. So, services can use them to implement all three of the scenarios we covered above.
A messaging system solves several design problems at once.
Services don’t need to know where the others are located, which simplifies the use of containers and cloud instances. Since data providers and clients connect to a message broker and exchange data over topics, you don’t need service discovery or orchestration.
Most messaging APIs also offer asynchronous interfaces, which many application developers prefer.
The messaging system also acts as a natural demarcation point. Applications connect to it and need only share messaging formats.
Publish-subscribe messaging is an efficient choice for data feed sharing.
Publishers broadcast feed updates once, and the messaging system delivers the data to all interested parties. Topics are an effective way to segment data, making it easy for clients to limit the updates they receive to the data they need.
Messaging systems also support point-to-point messaging and guaranteed message delivery. So you can use messaging to implement request-response semantics and ensure that critical messages are always delivered.
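A toy in-memory broker sketches the topic-based publish-subscribe pattern described above: the publisher sends each update once, and the broker fans it out to every subscriber of that topic. Real brokers like RabbitMQ, Kafka, and ActiveMQ add persistence, queues, and delivery guarantees on top of this; the class and topic names here are illustrative only.

```python
from collections import defaultdict

# Toy in-memory broker demonstrating topic-based publish-subscribe.
class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Publish once; the broker fans out to all interested parties.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
billing_seen, marketing_seen = [], []
broker.subscribe("user.updated", billing_seen.append)
broker.subscribe("user.updated", marketing_seen.append)
broker.publish("user.updated", {"user_id": "42"})
broker.publish("inventory.changed", {"sku": "SKU-1"})  # no subscribers
```

Note how topics segment the data: neither subscriber receives the inventory update, because neither asked for it.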
Sharing Data Stores
Another way to share data is to share a data store. Services can share a relational database, NoSQL store, or another data storage service. One or more services publish the data to the database, and other services consume it when required.
Most databases and data stores provide data via request/response mechanisms.
A client application connects to the store and queries for the data it needs when it needs it. However, many stores also provide asynchronous mechanisms such as triggers or publish-subscribe interfaces.
But applications that share a data store tend to be more tightly coupled than when they use messaging, since they need to share more than just message formats.
Sharing data stores is a valid strategy, but only if the design is driven by the business context rather than technical convenience.
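A small SQLite sketch shows the pattern, and the coupling. One “service” writes inventory rows and another queries them on demand; the table name and schema are invented for illustration, and both sides must agree on that schema, which is exactly the extra coupling discussed above.

```python
import sqlite3

# Sketch of sharing via a data store. Both services must agree on
# the schema -- more coupling than a message format alone.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER)")

# Producer service publishes data to the shared store...
db.execute("INSERT INTO inventory VALUES ('SKU-1', 7)")
db.commit()

# ...and a consumer service queries it when required.
qty = db.execute(
    "SELECT qty FROM inventory WHERE sku = ?", ("SKU-1",)
).fetchone()[0]
```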
RESTful APIs
Many architects and developers bristle at the idea of using REST, or any HTTP-based mechanism, for microservices communication.
Web services have several disadvantages, but there are circumstances where they make sense.
RESTful services are easy to create and maintain.
In circumstances where both external clients and internal services need to access the same information, “stacking” APIs makes sense. Reusing interfaces avoids duplication.
Even if you don’t make the service available to external clients, you can reuse the data model, which means higher efficiency.
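To illustrate the reuse argument, here is a minimal RESTful endpoint built with only the Python standard library. The path layout and response fields are hypothetical; the point is that the same handler could serve external clients and internal services alike.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical profile endpoint: GET /users/<id> returns JSON.
PROFILES = {"42": {"email": "ada@example.com"}}

class ProfileHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_id = self.path.rsplit("/", 1)[-1]
        profile = PROFILES.get(user_id)
        if profile is None:
            self.send_error(404)
            return
        body = json.dumps(profile).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

# Serve on an ephemeral port, then act as an internal client.
server = HTTPServer(("127.0.0.1", 0), ProfileHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
with urlopen(f"http://127.0.0.1:{server.server_port}/users/42") as resp:
    fetched = json.load(resp)
server.shutdown()
```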
But REST is not without risk.
While creating a RESTful service is easy, implementing a robust one can be difficult. Any service that sits at the center of a system needs to be reliable and resilient.
Implementing a fault-tolerant REST service takes more effort than it does with messaging.
And since RESTful services are synchronous, they are a poor fit for sharing data when low latency is essential.
Load Balancing and Resiliency
All these mechanisms add an element of risk to your architecture.
If your microservices need to communicate with each other, then they rely on each other, too. You need to ensure that the services are prepared to handle the new load, and can recover from outages.
Load balancing is a robust mechanism for distributing load over your services. Fortunately, there are many ways to do this:
- Messaging systems support patterns like round-robin request distribution.
- RESTful services can sit behind load balancers.
- Systems like Kubernetes have load balancing built-in.
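The round-robin pattern mentioned above can be sketched in a few lines. The instance names are placeholders; real balancers, and brokers using competing consumers, add health checks and retries on top of this rotation.

```python
from itertools import cycle

# Toy round-robin dispatcher: requests rotate evenly across
# service instances. Instance names are illustrative.
instances = ["replica-a", "replica-b", "replica-c"]
rotation = cycle(instances)

# Six incoming requests land on the three replicas in turn.
assigned = [next(rotation) for _ in range(6)]
```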
Similarly, your communications mechanism should include a way of recovering from failure.
It might be a clustering mechanism, like the ones commonly found with databases. Or, it could be a mechanism for removing a failed service from a load-balancing set.
Regardless of which mechanism you choose, be sure that adding communication between your services doesn’t create a new point of failure.
Planning Is Key for Microservices Communication
Hopefully, you now have a better understanding of how services share data.
Developers and architects have a wealth of options for microservices communication. The challenge is to ensure that services remain sufficiently isolated without becoming tightly coupled.
Careful planning and analysis of how data is shared is a critical first step.