
As a response to the cloud, the networking industry reinvented itself once more and borrowed concepts from VPNs by moving isolation to tunnels, creating micro-segmentation. GRE, VXLAN, and Geneve were invented to create virtual VLAN-type structures that could span multiple racks within or across data centers without the routing and management complexities of L2 technologies. SDN controllers, protocols (VTEP, OVSDB, etc.), and software gateways became necessary as the requirements of segmentation solutions grew. The term “micro-segmentation” was coined to describe the ever-larger number of segments within a larger network.

If we take a step back, we can quickly see that all of these segmentation solutions are based on the same concept. They start with the fundamental assumption that applications, services, or end-points are identified by their IP addresses and potentially port numbers. Then, they introduce one level of indirection by mapping sets of end-points to an identifier (VLAN, VXLAN ID, MPLS label, etc.) and make policy decisions based on that identifier. Finally, they use a control plane to disseminate state about these associations and the corresponding policy rules. In virtualization and cloud environments, the mapping of end-points to identifiers is programmed through orchestration tools, and secondary control planes optimize how this state is distributed for each virtual network.
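
To make the pattern concrete, here is a minimal sketch in Python, using made-up addresses and identifiers: endpoints map to a segment identifier, and policy is expressed between identifiers rather than between concrete addresses.

```python
# Minimal sketch of the indirection every segmentation scheme relies on.
# All addresses and identifiers below are illustrative.

# endpoint (ip, port) -> segment identifier (VLAN, VXLAN ID, MPLS label, ...)
endpoint_to_segment = {
    ("10.0.1.5", 443): "VNI-2001",
    ("10.0.2.9", 8080): "VNI-2002",
}

# policy is written against identifiers, decoupled from concrete addresses
allowed_segment_pairs = {("VNI-2001", "VNI-2002")}

def is_allowed(src, dst):
    """Return True if the segments of the src and dst endpoints may talk."""
    src_seg = endpoint_to_segment.get(src)
    dst_seg = endpoint_to_segment.get(dst)
    return (src_seg, dst_seg) in allowed_segment_pairs

print(is_allowed(("10.0.1.5", 443), ("10.0.2.9", 8080)))  # True
```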

The adoption of containers, microservices, and serverless architectures once more challenges the assumptions behind these solutions. Applications are disaggregated into smaller and often ephemeral components that are launched in response to events. Segmentation needs to account for the dynamic nature of these applications; troubleshooting tunnels and control planes is becoming increasingly complex. The right tools are still evolving, and fast convergence of the network is an ever-larger problem.

A new set of solutions (still in their infancy) challenges the benefits of network virtualization by removing the need for network-level segmentation. These solutions assume a simple, flat network, where every process or workload is still uniquely identified by its IP address and port numbers. They attempt to solve the segmentation problem by distributing ACLs to every host. The concept is quite simple: if there are two domains of workloads (call them A and B), the policy problem can be solved by installing, on every server that hosts workload A, an ACL rule that allows communication between A and B. Assuming a control and management plane that can identify workloads and distribute state, segmentation problems are pretty much “solved.”
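
A minimal sketch of that idea, with hypothetical workload names and addresses: each host that runs an A container derives, from a shared workload-to-IP mapping, the local allow rules it needs toward B.

```python
# Hypothetical sketch of host-level ACL distribution; names and IPs are illustrative.

containers = {
    "A": ["10.0.0.1", "10.0.0.2"],   # IPs of workload-A containers
    "B": ["10.0.1.1", "10.0.1.2"],   # IPs of workload-B containers
}

policy = [("A", "B")]  # workload A may talk to workload B

def host_acls(local_workload):
    """ACL rules a host running `local_workload` containers would install."""
    rules = []
    for src, dst in policy:
        if src == local_workload:
            for dst_ip in containers[dst]:
                rules.append(f"allow tcp from local {src} containers to {dst_ip}")
    return rules

print(host_acls("A"))  # two rules in this tiny example; 20 per host in the text's scenario
```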

But flattening the network comes with its own penalties. Consider a simple scenario where workloads A and B are composed of 20 containers each, placed on 40 different hosts. A flat topology requires creating and maintaining 400 ACL rules: each server that hosts a container of type A must carry an ACL allowing connectivity to every container of type B (20 rules per server, across 20 servers). Now, if we consider a data center with 1,000 workloads, each with 20 containers, we quickly arrive at a management nightmare of hundreds of thousands of ACLs.
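
The arithmetic is easy to check. The short sketch below reproduces the 400-rule figure and, under the assumption of one allowed peer workload per workload (a number the scenario does not fix), shows how the total climbs into the hundreds of thousands.

```python
# Back-of-the-envelope check of the ACL counts above (illustrative only).

containers_per_workload = 20
hosts_with_A = 20   # one A container per host in the example

# Each host with an A container needs one rule per B container:
print(hosts_with_A * containers_per_workload)        # 400

# Scale to 1,000 workloads; assume each workload may reach just one peer workload:
workloads = 1000
allowed_pairs = workloads                             # assumed, for illustration
print(allowed_pairs * containers_per_workload ** 2)   # 400,000 rules
```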


There are two key issues at this juncture:

First, there is a question of convergence. Even if we succeed in distributing ACLs every time a workload is activated, when a new container is added to a target set, the control plane must update the ACL rules on every server that hosts related containers. In application environments with ephemeral microservices or event-based mechanisms (see AWS Lambda), where a process (container) is instantiated only to complete a specific task and terminates afterwards, the control plane has to sustain an activation rate of several thousand ACL updates per second.
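
A rough illustration of that fan-out, with assumed numbers for peer hosts and container churn (neither is specified above):

```python
# Assumed figures, for illustration only.
hosts_running_peers = 20        # hosts that carry containers of the peer workload
container_churn_per_sec = 200   # containers started or stopped per second

# Every churn event forces an ACL update on every peer host:
print(hosts_running_peers * container_churn_per_sec)  # 4,000 ACL updates per second
```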

Second, on the performance front, introducing thousands of ACLs on every host has its own challenges, since ACL lookup algorithms are essentially point-location problems in a multi-dimensional space, with significant per-packet complexity. Current Linux systems with ipsets give the illusion of simplicity through a space-time trade-off, but they still consume several CPU cycles for every packet.
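
The sketch below (illustrative Python, not the kernel's actual data structures) contrasts a naive per-packet rule scan with an ipset-style hash lookup, which is the space-time trade-off mentioned above: memory is spent up front so that each packet costs a constant-time membership test.

```python
import ipaddress

# Naive per-packet check: walk every rule, so cost grows with the rule count.
rule_networks = [ipaddress.ip_network(f"10.{i}.0.0/16") for i in range(200)]

def linear_match(dst):
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in rule_networks)

# ipset-style: pre-build a hash set of match keys, paying memory for O(1) lookups.
allowed = {f"10.{i}.1.1" for i in range(200)}

def set_match(dst):
    return dst in allowed

print(linear_match("10.42.1.1"), set_match("10.42.1.1"))  # True True
```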

You can find out more by reading our recently published Cloud Security Gaps Whitepaper here, or register for our webinar on Cloud Security for Financial Institutions here.


