
Today, we’re continuing our journey to the cloud. Our monolithic 3-tier application has been deployed onto a public cloud provider such as AWS, Azure or Google Cloud. We have already begun the process of decomposing the monolith into distributed microservices, which may run on-premises or in one or more clouds.

Hybrid or Multi Cloud Security Zones

Securing microservices in hybrid or multi-cloud environments is challenging because firewall rules may need to be created to allow two microservices running in different IP address ranges to interact. Should the orchestrator move a microservice into another IP address range (to load balance, or for high availability), communication paths to other microservices may be cut off because the newly required firewall rules do not yet exist. Securing microservices with firewall rules requires constant network gymnastics and cannot keep up with a dynamic runtime environment at scale. At the heart of the problem is the mental model used for security, which is too unsophisticated to make the task intuitive.

Cryptographic Identities

If each of the microservices were to be cryptographically identified, security policies could be associated directly with microservices – following them wherever they run. A user-land agent on each physical or virtual host would become the enforcer of the security policies by standing in front of the IP stack. Because security would no longer be tied to IP address ranges, but instead tied to the identity of the application component, it would always be current; would protect individual microservices rather than dozens or hundreds of them in a group; and would be easy to administer across sites and clouds utilizing different IP address ranges.
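To make the idea concrete, here is a minimal sketch of identity-based enforcement. The service names, key material, and `PolicyEnforcer` class are all hypothetical; a real agent would derive identities from certificates and sit in front of the IP stack, but the essential point survives: the policy decision references identities, never IP addresses.

```python
# Illustrative sketch only: identities and policies are hypothetical.
import hashlib

def service_identity(signing_key: bytes) -> str:
    # Derive a stable identity from a service's key material,
    # independent of where the service happens to be running.
    return hashlib.sha256(signing_key).hexdigest()

class PolicyEnforcer:
    """Stands in for the user-land agent on each host."""
    def __init__(self):
        # Allowed flows keyed by (source identity, destination identity).
        self.allowed = set()

    def allow(self, src_id: str, dst_id: str) -> None:
        self.allowed.add((src_id, dst_id))

    def is_permitted(self, src_id: str, dst_id: str) -> bool:
        # No IP addresses involved: the decision follows the workload.
        return (src_id, dst_id) in self.allowed

frontend = service_identity(b"frontend-signing-key")
billing = service_identity(b"billing-signing-key")

enforcer = PolicyEnforcer()
enforcer.allow(frontend, billing)
```

Because the identity is derived from the workload itself rather than its location, moving either service to another host, subnet, or cloud leaves the policy untouched.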

Dynamic Runtime Environment

Microservices may be running in any number of different IP address ranges and may remain at a known address for only a second. When dozens, hundreds, or thousands of microservices are running, keeping track of their current addresses and programming firewalls with the correct sets of rules becomes impossible, even with microsegmentation tools – because creating many smaller IP address ranges does not improve scalability or reduce the burden on operations personnel. In contrast, running microservices, their dependencies, and the flows between them can be discovered automatically, and the security policies regulating those flows can be generated automatically. Once the security policies are approved, security is complete in its coverage and will always remain current, even in a dynamic multi-cloud environment.
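The discover-then-approve loop above can be sketched in a few lines. The service names and flow records are invented for illustration; the point is that candidate policies are derived from observed traffic and only take effect after an operator approves them.

```python
# Hypothetical sketch: derive candidate security policies from observed flows.
# Flow records here are (source service, destination service) pairs.
observed_flows = [
    ("web", "api"),
    ("api", "db"),
    ("web", "api"),  # repeated observations collapse into one policy
]

def propose_policies(flows):
    # Each distinct observed flow becomes one candidate "allow" policy.
    candidates = {(src, dst, "allow") for src, dst in flows}
    return sorted(candidates)

proposed = propose_policies(observed_flows)
# An operator reviews `proposed` and approves it before enforcement begins.
```

Regenerating the proposal as the environment changes is what keeps coverage current: new flows surface as new candidates instead of silently failing against stale firewall rules.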

Disparate Security Models

In Amazon’s Introduction to AWS Security, the company outlines how to use built-in firewalls to control network access. An encrypted TLS service is available through AWS-specific APIs, but using it causes vendor lock-in and requires changes to source code.

Not surprisingly, Microsoft has a different set of security services and guidelines for Azure, where it advises customers to assume that a breach will happen. Azure networking provides the infrastructure necessary to securely connect VMs to one another and to connect on-premises data centers with Azure VMs. These concepts differ from those of AWS.

Google Cloud Platform has yet another set of concepts, services and guidelines, described in its security documentation, including data encryption and secure networking via GCP firewall rules.

Visibility and Control

Getting a clear view of your application security posture and performing troubleshooting across cloud providers is impeded by the fact that each infrastructure provider implements security differently. Ideally, telemetry data could be collected from each physical or virtual host on which microservices are running and captured in audit trails. Anomalous events – such as an unknown service asking for access to a microservice – could be recorded, an alarm raised, and the access attempt automatically denied.
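A default-deny enforcement point with an audit trail might look like the following sketch. The service names, the `audit_log` structure, and the flow table are all hypothetical; the behavior to note is that an unknown caller is denied and the attempt is recorded, which is exactly the event an alarm would be raised on.

```python
# Sketch: default-deny enforcement with an audit trail (names hypothetical).
audit_log = []

known_identities = {"web", "api", "db"}
allowed_flows = {("web", "api"), ("api", "db")}

def handle_request(src: str, dst: str) -> bool:
    # Deny anything not explicitly known and allowed, and record it.
    if src not in known_identities or (src, dst) not in allowed_flows:
        audit_log.append({"event": "denied", "src": src, "dst": dst})
        return False  # an alert could be raised here as well
    audit_log.append({"event": "allowed", "src": src, "dst": dst})
    return True
```

Because every host runs the same agent, the resulting audit trail has one consistent shape regardless of which cloud each event originated in.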

One Consistent Model

As we have seen, the best strategy for securing an application across a hybrid or multi-cloud deployment is to elevate the model: moving from location-based security to a model that uniquely identifies each microservice and enforces security policies through an agent placed on each physical or virtual host. As new microservices are created, they are protected consistently based on the labels applied to them by CI/CD pipeline tools or the runtime environment. With this strategy, the journey to the hybrid or multi-cloud becomes significantly easier and more secure.
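Label-driven policy can be sketched as a simple selector match. The label keys and values below are invented for illustration, but the design point is real: the same policy matches a workload wherever it lands, because the match is on labels the pipeline applied, not on addresses the infrastructure assigned.

```python
# Sketch: labels applied by the CI/CD pipeline drive policy, not IPs.
# All label keys/values here are hypothetical.
def matches(selector: dict, labels: dict) -> bool:
    # A workload matches when it carries every label the selector requires.
    return all(labels.get(k) == v for k, v in selector.items())

policy = {
    "from": {"app": "frontend", "env": "prod"},
    "to": {"app": "billing", "env": "prod"},
}

# The same workloads, deployed into two different clouds:
frontend_labels = {"app": "frontend", "env": "prod", "cloud": "aws"}
billing_labels = {"app": "billing", "env": "prod", "cloud": "gcp"}
```

Note that the `cloud` label plays no part in the decision: redeploying `billing` from GCP to Azure changes nothing about which flows are permitted.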

Read part one and part two of this blog series.
