Monoliths vs Microservices
by Emily Hier
Challenger bank Monzo (formerly Mondo) recently announced a Very Important Decision: their engineering team has, rather unusually for an early-stage start-up, built the bank’s backend system as a collection of distributed microservices.
Microservices are lightweight, semi-autonomous, self-sustaining pieces of logic. They operate in parallel and collaborate through network-based message passing. Microservices are actually quite an old idea - but it’s an approach that’s being used more and more in today’s enterprise architecture.
Larger enterprises normally do the bog-standard thing of building each application on a single technology platform - a monolith. Monoliths are easy to build, but harder to manage over time.
But that’s not to say they don’t work - in fact, blogger Martin Fowler argued that “successful microservice projects almost always start out as a monolith that got too big and was broken up.”
By splitting a centralised system into microservices, companies like Monzo have a choice about how each service in their applications is built and run. Developers are able to work on different areas of an application without interrupting whatever is going on elsewhere, which makes for an ongoing, day-to-day relationship with the technology and its users.
The main principles of microservices are:
- Loose-coupling - there should be limited dependencies between microservices, so changes made to one service shouldn’t have an impact on other services, nor should there be chatty communication between services.
- Smart end-points - keep the smarts (i.e. the logic) in the end-points whilst keeping the middleware dumb. Services should be built on minimal assumptions and constraints about the environment in which they operate (a minimal sketch of such an endpoint follows this list).
- High cohesion - related logic should live within the same microservice, and unrelated logic should not. Small, cohesive services are much cheaper to replace with a better implementation, or even to delete altogether.
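To make these principles concrete, here is a minimal sketch of a "smart endpoint, dumb pipe" service in Python. All names are hypothetical (this is not Monzo's or SPARKL's code): the balance logic lives entirely inside the service, the service owns its own data, and the only integration point is plain HTTP.

```python
# Minimal sketch of a "smart endpoint, dumb pipe" microservice (hypothetical names).
# All the balance logic lives inside the service; other services reach it only over
# a plain HTTP GET - no shared database, no clever middleware in between.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The service owns its own data; nothing outside this process touches it directly.
BALANCES = {"alice": 4200, "bob": 150}

class BalanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path: /balances/<user> - this path is the whole contract.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "balances" and parts[1] in BALANCES:
            body = json.dumps({"user": parts[1], "balance": BALANCES[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Other services (cards, payments, reporting) talk to this one only via HTTP,
    # so it can be redeployed, rewritten or deleted without a coordinated release.
    HTTPServer(("localhost", 8080), BalanceHandler).serve_forever()
```

Because callers depend only on the HTTP contract, the service stays loosely coupled and highly cohesive in the sense described above.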
At SPARKL, we call microservices “black boxes”. They do something, but it doesn’t really matter how.
The difficulty comes when you combine them, and that’s why the unique SPARKL Clear Box® approach is the answer.
It’s important to understand how the various pieces of your system behave as it grows, whether it’s a monolith or not. That’s a typical problem for any bank, which can have thousands of applications running over tens of thousands of systems.
Banks are obliged to report on every aspect of their operations, yet today’s technology stack simply doesn’t provide the detailed, comprehensive log data required to satisfy the demands of regulators and shareholders.
As a result, a bank’s architecture can be fragmented and inconsistent. Data about the same customer or product is often split between systems that aren’t able to talk to each other. This is the black box swamp, and very few are able to control it.
With SPARKL, you can. It’s a complete solution that manages the behaviour of distributed machines, including legacy systems, and it’s ready to be implemented in the enterprise immediately:
- Supports fine-grained (per-user) provisioning, load balancing and auto-scaling of microservices, as well as the enforcement of SLAs
- Supports typical DevOps and continuous delivery patterns, including circuit breakers, blue/green deployments, A/B testing and canary releases (a minimal circuit-breaker sketch follows this list)
- Completely scalable and able to implement a distributed intelligence pattern where a SPARKL router is baked into every microservice
- Natively supports Erlang, Python and script/browser-based nanoservices
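To illustrate one of the patterns named above, here is a minimal, illustrative circuit breaker in Python - a sketch of the general pattern, not SPARKL’s implementation. After a few consecutive failures the breaker "opens" and fails fast, then lets a single trial call through once a cooldown has passed.

```python
# A minimal, illustrative circuit breaker (the general pattern, not any product's API).
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures  # consecutive failures tolerated before opening
        self.reset_after = reset_after    # seconds to wait before allowing a trial call
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed - allow one trial call through ("half-open").
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # open the circuit
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

A caller would wrap each remote request, e.g. `breaker.call(fetch_balance, "alice")` where `fetch_balance` is a hypothetical HTTP client function, so a failing downstream service degrades gracefully instead of tying up the caller.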