It has been common wisdom that good security depends on a solid wall. That wisdom is reflected in impressive medieval city walls, the Great Wall of China, and even in Dragonstone from Game of Thrones. These walls offer a sense of “security” and protection. So, it is natural for the “wall” concept to creep into the computer world. Since 1988, when the first packet filter was conceived, we have collectively spent centuries designing the “next generation firewall”: YABW (yet a better wall).
Given the effort, it is worth asking if YABW has really helped us or if it is just “security theater” and a way to “intimidate the enemy.”
Following the principles of the old walls, a firewall separates two trust domains: the scary world outside and the serene environment inside. The identity of end-points is often implicitly tied to IP addresses and port numbers. Firewalls then use this identity to decide who can enter the castle. This assumption is risky because it makes a judgement call based on appearance.
Since the first introduction of packet filters, where the mapping was static, we have seen a series of increasingly sophisticated technologies whose main goal is to make stronger associations between service semantics or end-points and IP addresses.
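To make the original assumption concrete, here is a toy sketch of a 1988-style static packet filter (the rule table and addresses are purely illustrative). Note that “identity” is nothing more than a matching address and port: any host that looks right is let in.

```python
import ipaddress

# Illustrative static rule table: (protocol, src_cidr, dst_cidr, dst_port, action).
# First match wins; None means "any port".
RULES = [
    ("tcp", "0.0.0.0/0", "10.0.0.5/32", 443, "allow"),  # anyone may reach the web server
    ("tcp", "0.0.0.0/0", "0.0.0.0/0", None, "deny"),    # everything else is blocked
]

def decide(proto, src_ip, dst_ip, dst_port):
    """Return the action of the first matching rule (default deny)."""
    for r_proto, r_src, r_dst, r_port, action in RULES:
        if (proto == r_proto
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(r_src)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(r_dst)
                and (r_port is None or dst_port == r_port)):
            return action
    return "deny"

# Appearance is everything: the filter cannot tell a legitimate client
# from an attacker who presents the same 5-tuple.
print(decide("tcp", "198.51.100.7", "10.0.0.5", 443))  # allow
print(decide("tcp", "198.51.100.7", "10.0.0.5", 22))   # deny
```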
Stateful firewalls were invented because static associations were not enough; they introduced a series of mechanisms to predict the state of the end-points and make more intelligent decisions. This led to complex algorithms that use packet sequences to replicate end-point state.
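The state-replication idea can be sketched as a toy connection tracker. This is a drastically simplified, assumed model of what conntrack-style firewalls do, with a three-state TCP machine and a pre-normalized 5-tuple key; real implementations track many more states, timeouts, and both traffic directions.

```python
SYN, ACK, FIN = "SYN", "ACK", "FIN"

class ConnTracker:
    """Toy stateful filter: infer end-point state from observed TCP flags."""

    def __init__(self):
        self.table = {}  # normalized 5-tuple -> predicted connection state

    def observe(self, key, flags):
        state = self.table.get(key)
        if state is None and SYN in flags and ACK not in flags:
            self.table[key] = "SYN_SENT"      # new connection attempt
            return "allow"
        if state == "SYN_SENT" and SYN in flags and ACK in flags:
            self.table[key] = "ESTABLISHED"   # handshake completing
            return "allow"
        if state == "ESTABLISHED":
            if FIN in flags:
                del self.table[key]           # connection tearing down
            return "allow"
        return "deny"                         # packet with no predicted state

fw = ConnTracker()
conn = ("tcp", "10.0.0.5", "203.0.113.9", 51515, 443)  # normalized 5-tuple
print(fw.observe(conn, {SYN}))        # allow: outbound attempt
print(fw.observe(conn, {SYN, ACK}))   # allow: expected reply
print(fw.observe(conn, {ACK}))        # allow: established flow
print(fw.observe(("tcp", "198.51.100.7", "10.0.0.5", 4444, 80), {ACK}))  # deny: unsolicited
```

The firewall never sees the end-points' actual state; it guesses it from the packet sequence, which is exactly the fragile prediction game the text describes.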
But applications evolved and this was not enough. Application Layer Gateways (ALGs) tried to understand relationships between connections, such as the control and data channels in FTP or the control and voice channels in SIP. Application-level firewalls started looking deeper into packets to guess the nature of end-points and differentiate between them. Along the way, these middleboxes broke basic principles of end-to-end security and encryption by decrypting TLS traffic as a man-in-the-middle, injecting insecurity into the security solution.
Despite their complexity, these technologies have not proven to be sufficient. Attackers find new ways to bypass this guessing game as applications are becoming increasingly complex and difficult to predict.
The transition from physical servers to virtualization challenged IT infrastructures by increasing the number of end-points by an order of magnitude and reducing their lifetime from years to hours. The security industry responded by increasing the sophistication of detection mechanisms for these associations. Centralized firewalls became distributed, and complex protocols were developed to disseminate IP address and port number state. Welcome to the era of micro-segmentation!
The container/Docker, micro-services, and cloud-native revolution, with its ephemeral workloads, further challenges the assumptions behind YABW. The number of end-points (Docker containers) increases again by orders of magnitude while their lifetime shrinks to seconds. The distributed nature of micro-services creates a large number of new interactions and dependencies. Security teams are scrambling. Micro-segmentation simply fails to scale to meet these demands. Intelligent engineers will most likely produce YABW in the form of new firewalls and state dissemination systems that evolve the same old mechanisms with new algorithms and increased complexity, all with the goal of rapidly detecting the basic association of addresses to a service.
Apart from the technology issues, though, there is another “wall” between domains. This wall is the human or organizational trust barrier. Can security teams trust application teams to provide enough information to make the security process better? Or are security teams playing “It” in a cybersecurity game of “Marco Polo”? And is this guessing game in firewalls and appliances really the best way to solve these trust issues? Can application teams trust their security counterparts not to break operations? We have all seen variations of “well, it works in my environment, but it fails in production because Security did XYZ.”
It is time to take a step back and revisit the assumptions that led us to this unproductive complexity, remembering that complexity is security’s number one enemy. Fundamentally, we have a trust problem at both the technology and organizational levels. On the technology front, trust cannot be established by dealing with problems in the network and trying to “guess” the application’s context. At the organizational level, the DevOps movement has taught us that trust has to be built throughout the organization through communication. It is time to destroy the silos between security and application teams.
Our fundamental principle is that security is a problem of trust. We need to introduce trust-centric security in the way we build, deliver, and deploy software. We need to build trust in the way complex software interacts with its components and data. We need to avoid the fallacies of the past, which introduce complexity and then spend years on “next generation YABW appliances” to fix the newly introduced problems.
We believe that King Agesilaus actually got it right: Teams and applications are the real security walls.