How to Divide & Conquer API Management
Author: Kostas Papanikolaou
If you are a person whose everyday life revolves around programming, you probably already know that there is no definitive answer to questions such as which type of architecture to use or what kind of API management tools to implement. The world of programming is vast, complex, and detailed. It requires specialists of several different types to transform a vision into an idea, and that idea into digital reality. APIs are an integral part of that process, and the way they are managed is equally vital, since it defines the overall experience of someone who sends requests to APIs and expects answers.
API management can be handled in various ways, all of which share the same purpose and aim at the same result: receiving requests and returning results to those who sent them, making sure this is done in the best and quickest way possible. API management tools sit between a client and a collection of backend services. Just as with monolithic versus microservices architectures, there is no single "clean" answer in the debate: API Gateway, Message Queue, or Service Mesh? First, because each one has its pros and cons, and second, because each API is built on different principles, for different purposes, and therefore requires a different approach. Of course, some APIs do not use intermediaries for management at all, and requests are handled directly by the API itself.
API Gateways
An API Gateway is usually a scalable, web-facing server. It receives requests from both internal services and the public internet and forwards them to the best-suited microservice or monolithic instance. API Gateways come with several features, which include but are not limited to:
- Load balancing
- API versioning
- Health checks
- SSL termination
- Data transformation
- Authentication & Authorization requests
They intercept all incoming requests and send them through the API management system, which handles several functions. As with all API management tools, what an API Gateway does varies depending on its implementation. API Gateways decouple client interfaces from the backend implementation. When a client makes a request, the API Gateway breaks it into multiple requests, routes them to the correct places, aggregates the results into a single response, and keeps track of everything along the way.
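That fan-out-and-aggregate behaviour can be sketched in a few lines of Python. This is a minimal illustration, not a real gateway: the service names (`users`, `orders`) and the shape of the requests and responses are all hypothetical.

```python
# Hypothetical backend services. A real gateway would call these over
# the network; here they are plain functions for illustration.
def fetch_users(request):
    return {"users": ["alice", "bob"]}

def fetch_orders(request):
    return {"orders": [101, 102]}

# Routing table: which backend handles which part of a client request.
BACKENDS = {
    "users": fetch_users,
    "orders": fetch_orders,
}

def gateway_handle(request, wanted):
    """Split one client request into one sub-request per backend,
    route each to the correct service, and merge the partial results
    into a single response."""
    response = {}
    for name in wanted:
        backend = BACKENDS[name]           # route to the correct service
        response.update(backend(request))  # aggregate the partial result
    return response

print(gateway_handle({"client": "web"}, ["users", "orders"]))
```

The client sees one request and one response; the fact that two backends were consulted stays hidden behind the gateway, which is exactly the decoupling described above.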
Benefits of API Gateways
Above, we mentioned some of the features that come with an API Gateway, which typically offers a rich feature set. API Gateways are also low in complexity and easy for web veterans to understand and use. They provide a substantial layer of defense against the public internet, and they offload repetitive tasks such as user authentication or data validation. Moreover, API Gateways protect APIs from malicious attacks such as SQL injections, DoS attacks, and more.
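The authentication offload mentioned above can be sketched as follows. This is a toy illustration under made-up assumptions: the token table, the response shape, and the `orders_backend` service are all hypothetical, and a real gateway would verify signed tokens rather than look them up in a dict.

```python
# Hypothetical token store; a real gateway would validate signed tokens.
VALID_TOKENS = {"secret-token": "alice"}

def authenticate(request):
    """Return the user for a valid token, or None."""
    return VALID_TOKENS.get(request.get("token"))

def gateway(request, backend):
    """Reject unauthenticated traffic at the edge, so the backend
    only ever sees requests from known users."""
    user = authenticate(request)
    if user is None:
        return {"status": 401, "body": "unauthorized"}
    return {"status": 200, "body": backend(user)}

def orders_backend(user):
    # The backend can trust `user`: authentication already happened.
    return f"orders for {user}"

print(gateway({"token": "secret-token"}, orders_backend))
print(gateway({"token": "wrong"}, orders_backend))
```

Because the check happens once at the gateway, none of the backends need to reimplement it, which is what "offloading repetitive tasks" buys you.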
Downsides of API Gateways
When compared to other forms of API management, API Gateways have the downside of being centralized. They can be deployed in a horizontally scalable fashion, but they require a single point at which new APIs are registered or configuration is changed. From an organizational perspective, they are usually handled and maintained by a dedicated team. In addition, configuring an application and its API through an API Gateway requires a considerable amount of orchestration, which adds a level of difficulty for the developers responsible for that configuration. Finally, the sheer number of scenarios an API Gateway has to handle may impact the speed and reliability of an app.
Message Queues/Message Brokers
Message Queues, also known as Message Brokers, are intermediary computer program modules that help with the translation of a message from the formal messaging protocol of the sender to the formal messaging protocol of the receiver. They are elements in telecommunication or computer networks where applications communicate by exchanging formally-defined messages. Message Queues allow the establishment of complex communication patterns by decoupling the sender and receiver.
To achieve that, Message Brokers use several different concepts, which include but are not limited to:
- Topic-based routing
- Publish-subscribe messaging
- Buffered task queues
Decoupling is what allows two or more systems to transact or communicate without being directly connected, or coupled. In our case, the sender and the receiver are decoupled thanks to Message Queues and do not interact with each other. A decoupled system allows changes to be made to any one system without affecting any other system.
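The publish-subscribe pattern listed above shows this decoupling well. Below is a minimal in-process sketch using Python's standard `queue` module; the topic name and message contents are hypothetical, and a real broker (RabbitMQ, Kafka, etc.) would of course run as a separate networked service.

```python
import queue

# One buffered queue per subscriber of the hypothetical "orders" topic.
topics = {"orders": [queue.Queue(), queue.Queue()]}

def publish(topic, message):
    """The sender only knows the topic name. It never learns who the
    receivers are, how many there are, or whether they are even running."""
    for q in topics[topic]:
        q.put(message)  # buffered delivery; the sender returns immediately

publish("orders", {"order_id": 1})

# Each subscriber drains its own queue independently, at its own pace.
for i, q in enumerate(topics["orders"]):
    print(f"subscriber {i} received {q.get()}")
```

Notice that adding a third subscriber would require no change on the sender's side at all; that is the decoupling Message Queues provide.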
Why use Message Queues
The ability of Message Queues to decouple is by far the greatest benefit one will enjoy should they decide to adopt Message Queues as their API management tool. Decoupling not only ensures that changing one system will not affect any other system; it also makes concepts such as health checks, routing, endpoint discovery, and load balancing redundant. When auto-orchestration and scaling decisions are based on the message count in each queue, decoupling shows its true strength, leading to very efficient systems.
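A queue-depth-driven scaling decision can be this simple. The thresholds and worker counts below are made-up illustrative values, not recommendations:

```python
def desired_workers(queue_length, per_worker=10, min_workers=1, max_workers=20):
    """One worker per `per_worker` pending messages, clamped to bounds.
    The only input is the queue depth: no health checks or endpoint
    discovery are needed to decide how much capacity to run."""
    needed = -(-queue_length // per_worker)  # ceiling division
    return max(min_workers, min(needed, max_workers))

print(desired_workers(0))    # 1  (idle, keep the minimum)
print(desired_workers(45))   # 5
print(desired_workers(500))  # 20 (capped at the maximum)
```

Because the queue buffers the backlog, the scaler can lag behind demand without dropping requests; it only has to catch up.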
Do they “get the message across”?
We’ve pointed out time and time again that there is no such thing as “the perfect solution,” and Message Queues have their own… thorns. For example, Message Queues do not excel at request/response communication. They are not made for that purpose, and additionally, due to their buffered nature, they may add significant latency to a system. Like API Gateways, they are centralized and horizontally scalable, and finally, they can become costly to run at scale.
Service Meshes
Unlike API Gateways and Message Queues, Service Meshes are decentralized. They are self-organizing networks that sit between microservice instances and can handle:
- Endpoint discovery
- Health checks
In essence, Service Meshes attach a small agent called “sidecar” to each instance. The sidecar mediates traffic and handles instance registration, metric collection, and upkeep. Even though they are conceptually decentralized, the majority of Service Meshes come with one or more central elements to collect data or provide admin interfaces.
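The sidecar idea can be sketched as a tiny proxy object. This is purely conceptual: the `inventory_service` function and the metrics shape are hypothetical, and a real sidecar (such as Istio's Envoy proxy) is a separate process intercepting network traffic, not an in-process wrapper.

```python
class Sidecar:
    """Mediates all traffic for one service instance, collecting
    metrics before forwarding each request to the local instance."""

    def __init__(self, service):
        self.service = service    # the local instance this sidecar fronts
        self.requests_seen = 0    # metric collection

    def handle(self, request):
        self.requests_seen += 1          # record traffic
        return self.service(request)     # proxy to the local instance

# A hypothetical microservice instance.
def inventory_service(request):
    return {"stock": 7}

sidecar = Sidecar(inventory_service)
print(sidecar.handle({"sku": "A1"}))
print("requests seen:", sidecar.requests_seen)
```

Because every instance carries its own sidecar, the mesh has no single data-plane bottleneck; the central elements mentioned above only collect the metrics the sidecars produce.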
“Shapeshifters” of programming
By far the best aspect of Service Meshes is their dynamic nature, which allows them to easily shapeshift and accommodate new functionalities and endpoints. In addition, their decentralized nature makes them great for working on microservices within fairly isolated teams. Moreover, Service Meshes give DevOps teams visibility into the microservices, TLS can be applied instantly, deployment can be controlled thanks to granular traffic control, and finally, Service Meshes detect failures and route around them using circuit breakers.
Young and complex
Service Meshes are the latest development in the API and microservices management world. They are, therefore, rather young and still under… review. They are also quite complex and involve a lot of moving parts. To understand this better, consider the example of Istio, a popular Service Mesh. For Istio to be fully utilized, a separate traffic manager must be deployed, along with a telemetry gatherer, a certificate manager, and a sidecar process for every single node.
As has become clear while analyzing each popular method of API management, there is no one definitive answer on which to choose. To be exact, this is not really a matter of choice, since more than one of the options above can be implemented at the same time. For example, a Service Mesh can be deployed to handle inter-service communication, an API Gateway can serve the front-facing APIs, and a Message Queue can be used for task scheduling.