
Getting your first legacy monolithic application up and running at your cloud provider of choice is the beginning of a multi-step journey. You’re on the way to taking advantage of the latest advancements in computer science to increase your competitive standing and better serve your customers and employees. Once the monolithic application has been successfully deployed into the cloud, it’s time to think about breaking it up into microservices to maximize the advantages that can be had. This process will take many iterations, and there will be much organizational learning along the way! So let’s get started.

Microservices let us take advantage of the proven technique of writing our applications as a collection of modules, each focused on a single purpose. Productivity goes up and logical errors go down when a small team of programmers creates and maintains simple services that rely on additional services provided by their peers. With microservices, it becomes possible to use multiple programming languages that play to the skill sets of the programmers and empower them with different styles of expression, each of which has its advantages for certain classes of problems. Deployment becomes easier, as an individual service can be upgraded without having to wait for all the other services – provided, of course, that its external APIs do not remove behaviors that other services depend on.

However, the move to microservices introduces new challenges in areas including service discovery, performance, health monitoring, security and trouble-shooting – to name a few. From the very beginning you’ll need to decide on a way for services to advertise that they exist, and for clients of those services to find them. Will microservices be automatically discovered, or will they each have to register with a directory service when they come up?
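
To make that decision concrete, here is a minimal sketch of the explicit-registration approach, in which a service announces its name and address to a registry when it starts up. The registry URL, the /register endpoint and the payload shape are all hypothetical, chosen only to illustrate the pattern rather than any particular product’s API.

```go
// A minimal sketch of explicit service registration against a hypothetical
// HTTP-based registry. The URL, endpoint and payload are illustrative
// assumptions, not a specific registry's API.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

type registration struct {
	Name    string `json:"name"`    // logical service name clients look up
	Address string `json:"address"` // where this instance can be reached
}

func register(registryURL string, r registration) error {
	body, err := json.Marshal(r)
	if err != nil {
		return err
	}
	// Announce this instance; a real registry would also expect periodic
	// heartbeats so stale instances can be removed from the directory.
	resp, err := http.Post(registryURL+"/register", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	if err := register("http://registry.internal:8500", registration{
		Name:    "orders",
		Address: "10.0.3.12:8080",
	}); err != nil {
		log.Fatal(err)
	}
}
```

The alternative, automatic discovery, pushes this bookkeeping into the platform (for example, DNS-based discovery in an orchestrator) instead of into each service.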

Since microservices each exist in a separate, isolated address space, and potentially on separate hosts or even in separate geographies, a call-return interaction is likely to take much longer than it did when one procedure called another in the same address space on the same host. Because these remote calls are not free, you need to be acutely aware of how often various services will interact. Is it once an hour, or 100 times per second? Similarly, databases need to be strategically placed, and may need buffering or caching in front of them so that access to them does not become an unacceptable bottleneck.
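
As a rough illustration of that trade-off, the sketch below wraps an expensive remote lookup in a small time-bounded cache so that a chatty caller does not pay the network round trip on every request. The fetch function, key type and TTL are placeholder assumptions; a production cache would also bound its size, think about staleness, and use finer-grained locking.

```go
// A minimal caching sketch: remember recent answers from a remote service so
// frequent callers do not generate a network round trip every time.
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	value   string
	fetched time.Time
}

type cachedClient struct {
	mu    sync.Mutex
	ttl   time.Duration
	cache map[string]entry
	fetch func(key string) (string, error) // the expensive remote call
}

func (c *cachedClient) Get(key string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()

	// Serve from the cache while the entry is still fresh.
	if e, ok := c.cache[key]; ok && time.Since(e.fetched) < c.ttl {
		return e.value, nil
	}

	// Otherwise go over the network once and remember the answer.
	v, err := c.fetch(key)
	if err != nil {
		return "", err
	}
	c.cache[key] = entry{value: v, fetched: time.Now()}
	return v, nil
}

func main() {
	client := &cachedClient{
		ttl:   30 * time.Second,
		cache: make(map[string]entry),
		fetch: func(key string) (string, error) {
			// Placeholder for the real remote call, e.g. an HTTP request
			// to another microservice.
			return "value for " + key, nil
		},
	}
	v, _ := client.Get("user-42") // first call pays the round trip
	v, _ = client.Get("user-42")  // second call is served locally
	fmt.Println(v)
}
```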

Clearly, monitoring and troubleshooting a distributed microservices-based application can be difficult, and practically impossible unless planned for in advance. Most monitoring and troubleshooting tools are built with monolithic applications in mind. It’s one thing to pick some parameters and a calling sequence off a call stack using a modern debugger, and quite another to gather tracing information from the HTTP requests traveling between distributed microservices, then map and correlate them across time to reconstruct the picture of what is taking place.
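
A common building block for that correlation work is a request ID that travels with every hop, so log lines from different services can be stitched back together. The sketch below shows the idea as simple HTTP middleware; the X-Request-ID header and the ID format are conventions chosen for illustration, and real deployments usually adopt a distributed tracing system rather than rolling their own.

```go
// A minimal sketch of correlation-ID propagation: reuse the caller's ID when
// one is present, otherwise mint a new one, and attach it to every log line.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
)

func withRequestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-ID")
		if id == "" {
			buf := make([]byte, 8)
			rand.Read(buf)
			id = hex.EncodeToString(buf)
		}
		// Log with the ID and echo it back so it also appears in responses.
		log.Printf("request_id=%s method=%s path=%s", id, r.Method, r.URL.Path)
		w.Header().Set("X-Request-ID", id)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withRequestID(mux)))
}
```

Each service in the chain reuses the incoming ID on its own outbound requests, so a single user request can be followed end to end across process and host boundaries.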

And then, lastly but most notably, there is security. The model used for monolithic applications simply will not work for microservices-based applications. In the past, monolithic applications were secured based on IP address ranges, using a technique often referred to as perimeter security. Whether implemented as firewall rules for applications hosted on-premises or as security zones for those deployed in clouds, the problem is that IP-based rules are not fine-grained enough to protect individual microservices, and they cannot keep up with dynamic environments where services are spun up, shut down and moved frequently.

Each individual microservice needs to be protected, yet we don’t want to slow down the developers, so we shouldn’t require changes to source code. A better approach is to identify individual microservices by their labels and enforce security policies tied to each microservice, rather than trying to use IP address ranges as the basis of security.
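
To make “labels instead of IP ranges” concrete, here is a toy sketch of label-based policy evaluation: it decides whether one workload may talk to another purely from the labels they carry, never from their addresses. The policy structure and labels are invented for illustration; real systems express this declaratively and enforce it outside the application, which is precisely why no source code changes are required.

```go
// A toy sketch of label-based policy evaluation: traffic is allowed when the
// source and destination workloads carry matching labels, regardless of
// where they happen to be running.
package main

import "fmt"

type policy struct {
	sourceLabels map[string]string
	destLabels   map[string]string
}

// matches reports whether a workload's labels satisfy a selector.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func allowed(policies []policy, src, dst map[string]string) bool {
	for _, p := range policies {
		if matches(p.sourceLabels, src) && matches(p.destLabels, dst) {
			return true
		}
	}
	return false
}

func main() {
	policies := []policy{
		{
			sourceLabels: map[string]string{"app": "frontend"},
			destLabels:   map[string]string{"app": "orders"},
		},
	}
	frontend := map[string]string{"app": "frontend", "env": "prod"}
	orders := map[string]string{"app": "orders", "env": "prod"}
	fmt.Println(allowed(policies, frontend, orders)) // true, and no IP addresses are involved
}
```

Because the decision is based on labels, the same policy keeps working when a service is rescheduled onto a different host and picks up a new IP address.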

For all their challenges, microservices are very compelling, delivering significant benefits for organizations that decompose their monoliths in an orderly fashion to take advantage of the advancements they provide.

Did you miss Part 1 of this blog? Find it here.
