Simple by design; Automating per-namespace isolation with Aporeto and OpenShift

By: Jeff Kelly 08.16.2019
Zero Trust Container Networking


As organizations move closer to cloud-native application implementations, a Zero Trust network security architecture (rather than a traditional castle-wall / perimeter security architecture) is critical to ensuring the appropriate level of security at the pace and granularity at which developers are releasing applications. This is the first in a series of blogs inspired by some of the application design patterns in the BC Government’s OpenShift environment, and by how Aporeto enables these secure communication patterns within an existing environment.

A zero trust network security policy can describe network communication at a very fine level of detail, which also drastically increases the quantity of policy information generated. Managing and maintaining that level of detail can quickly become overwhelming. Aporeto’s grouping and labeling of objects, as well as its ability to leverage OpenShift labels as grouping identifiers, allows policies that encompass a fine level of detail and remain human readable.

With the increasing scope and speed of change in the DevOps lifecycle, simplifying the policy requirements of a zero trust network and bringing automation to the control plane becomes even more necessary. Codifying Aporeto’s access policies enables existing CI/CD tools and processes to serve as an integral component of secure application delivery, and pushes development teams another step further along the path of integrating security into the application codebase.

Automating OpenShift Namespace Isolation with Aporeto

The previous OpenShift multi-tenant SDN plugin was simple and provided decent network namespace isolation that wasn’t available in other Kubernetes flavors. This plugin was one key element in building a multi-tenant container platform and was very attractive to customers. As this plugin is deprecated and NetworkPolicy takes its place (as the standard across many Kubernetes flavors), controlling network security can become very complex. Aporeto helps reduce the operational overhead and, combined with automation, can create fully automated, sensible defaults for enterprise container workloads in a codified manner.

Automated configuration at the namespace level would be used to apply any organization-wide default policies (that may not already be inherited). Individual project teams can then maintain their own specialized access policies that are specific to their needs. Including all of this configuration as code, possibly using a GitOps approach, enables an open security review process, enforces automated configuration within continuous deployment pipelines, and promotes a collaborative secure development relationship between development and SecOps teams.

Aporeto Base Policies

By definition, our zero trust network enforcement with Aporeto will deny all traffic everywhere. As a starting point, a few base policies are in place and flow down to each namespace as inherited configurations. Aporeto supports a hierarchical method for layering policies, which is very helpful for delegating “base” policies to security and operations teams while enabling more granular control to be delegated to development teams. The base policies in our example include internal access requirements for the platform to function and a policy allowing egress internet access.

The Sensible Default: Permit-All Within Namespace

Rolling out a sensible default policy for all project namespaces reduces the ramp-up time for projects, and would ideally be enough for those that do not require more advanced communication policies. The assumed namespace requirements are:

  • Each pod/deployment in a namespace will require outgoing internet access
  • All applications in a given namespace are closely related and can communicate with each other.

This policy replicates the experience provided with the legacy OpenShift multi-tenant SDN plugin.
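For comparison, expressing roughly the same default with a standard Kubernetes NetworkPolicy would look like the following (a comparison sketch only; it is not part of the Aporeto configuration):

```yaml
# Allow ingress to every pod in the namespace from every other pod in
# the same namespace; combined with a default-deny posture, this
# approximates the legacy multi-tenant SDN behavior within a namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
```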

Environment Setup

This environment will consist of a couple of sample application pods running in a single namespace/project within OpenShift. It includes 2 separate pods/deployments:

  • Front End Application Name
    • app-1
  • Database Application Name
    • db-1

A successful deployment will use curl to validate connectivity to the primary service ports with the following expectations:

  • app-1 can connect to db-1 and egress internet addresses
  • db-1 can connect to app-1 and egress internet addresses

OpenShift Application Deployment

In this step, create a new namespace/project and deploy the sample application pods into the namespace/project:

oc new-project alpha
oc new-app postgresql-ephemeral --name="db-1" -p DATABASE_SERVICE_NAME="db-1"
oc new-app nginx-example --name="app-1" -p NAME="app-1"


Validate Default Cluster Access

Validate all components can communicate externally but not to each other:

oc rsh $(oc get pods | grep Running | grep app-1 | awk '{print $1}')
sh-4.2$ curl http://db-1:5432
...
sh-4.2$ curl www.google.ca
<!doctype html>..

oc rsh $(oc get pods | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl http://app-1:8080
...
sh-4.2$ curl www.google.ca
<!doctype html>..

From the above output, we can see that each pod cannot communicate with any other pod. The pods do, however, have egress internet access.
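As an aside, the pod-name lookup used in these rsh commands can be sanity-checked without a cluster by feeding it sample oc get pods output (the pod names below are fabricated):

```shell
#!/bin/sh
# Extract the first Running pod matching "app-1" from sample
# `oc get pods`-style output. The sample lines are fabricated.
sample="app-1-1-abcde   1/1   Running     0   5m
db-1-1-fghij    1/1   Running     0   5m
app-1-1-build   0/1   Completed   0   6m"
pod=$(printf '%s\n' "$sample" | grep Running | grep app-1 | awk '{print $1}')
echo "$pod"
```

Build pods show up as Completed, which is why the pipeline filters on Running before matching the application name.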

Aporeto Intra-Namespace Allow

Apply the following policy to the namespace to allow traffic between all pods in a single namespace. Automation through a cluster operator and/or project creation automation could be used to ensure that the policy is applied to the appropriate project:

apoctl api import --file manifests/networkpolicy_allow_intra_namespace.yml \
      -n /bcgov-devex/lab/alpha

Contents of manifests/networkpolicy_allow_intra_namespace.yml:

APIVersion: 1
label: networkpolicy_allow_intra_namespace
data:
  networkaccesspolicies:
  - name: networkpolicy_allow_intra_namespace
    action: "Allow"
    propagate: true
    subject:
    - - "$namespace={{ .Aporeto.Namespace }}"
    object:
    - - "$namespace={{ .Aporeto.Namespace }}"


Validate the Changed Policy

Confirm that access between app-1 and db-1 is now successful, and that the other access remains the same. Note that “curl: (52) Empty reply from server” actually indicates success here: the TCP connection to db-1 was established, and the error appears only because PostgreSQL does not speak HTTP.

oc rsh $(oc get pods | grep Running | grep app-1 | awk '{print $1}')
sh-4.2$ curl http://db-1:5432
curl: (52) Empty reply from server
sh-4.2$ curl www.google.ca
<!doctype html>..
oc rsh $(oc get pods | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl http://app-1:8080
<!doctype html>..
sh-4.2$ curl www.google.ca
<!doctype html>..


View through Aporeto dashboards:

Extending Communications Across Namespaces

A recurring request is to allow access to internal services hosted in another namespace. The following example will permit access between 2 namespaces while still respecting a delegated control model for each namespace.

The communication requirements are:

  • Each pod/deployment in a namespace will require outgoing internet access
  • All applications in a given namespace are closely related and can communicate with each other.
  • All applications in namespace alpha can communicate with all applications in namespace bravo.

Environment Setup

This environment will consist of sample application pods running in one namespace/project within OpenShift, as well as another sample application pod running in a second namespace/project within the same OpenShift cluster. It includes 3 separate pods/deployments:

  • Namespace 1 – alpha
    • Front End Application Name
      • app-1
    • Database Application Name
      • db-1
  • Namespace 2 – bravo
    • Database Application Name
      • db-1

A successful deployment will use curl to validate connectivity to the primary service ports with the following expectations:

  • app-1.alpha can connect to db-1.alpha, db-1.bravo, and egress internet addresses
  • db-1.alpha can connect to app-1.alpha, db-1.bravo, and egress internet addresses
  • db-1.bravo can connect to app-1.alpha, db-1.alpha, and egress internet addresses

OpenShift Application Deployment

We will re-use the namespace and applications from the previous example and add a second namespace with the same sensible default policy. We will also need to join the pod-network for the namespaces/projects, since this cluster runs the OpenShift multi-tenant SDN.

oc new-project bravo
oc adm pod-network join-projects --to=alpha bravo

oc new-app postgresql-ephemeral --name="db-1" -p DATABASE_SERVICE_NAME="db-1"

apoctl api import --file manifests/networkpolicy_allow_intra_namespace.yml \
      -n /bcgov-devex/lab/bravo


Validate Default Cluster Access

Validate all components can communicate externally and to each other, but not to components in the other namespace:

oc -n alpha rsh $(oc get pods -n alpha | grep Running | grep app-1 | awk '{print $1}')
sh-4.2$ curl www.google.ca
<!doctype html>..
sh-4.2$ curl http://db-1.alpha:5432
curl: (52) Empty reply from server
sh-4.2$ curl http://db-1.bravo:5432
...

oc -n alpha rsh $(oc get pods -n alpha | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl www.google.ca
<!doctype html>..
sh-4.2$ curl http://app-1.alpha:8080
<!doctype html>..
sh-4.2$ curl http://db-1.bravo:5432
...

oc -n bravo rsh $(oc get pods -n bravo | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl www.google.ca
<!doctype html>..
sh-4.2$ curl http://app-1.alpha:8080
...
sh-4.2$ curl http://db-1.alpha:5432
...

From the above output, we can see that each pod can communicate with any other pod in the same namespace, but cannot communicate with any other pod in the other namespace.

Aporeto Cross-Namespace Allow

Apply the following policies to both namespaces to allow traffic from all pods in one namespace to pods in the other namespace:

apoctl api import --file manifests/networkpolicy_allow_ns_alpha_to_bravo.yml \
      -n /bcgov-devex/lab/alpha
apoctl api import --file manifests/networkpolicy_allow_ns_bravo_to_alpha.yml \
      -n /bcgov-devex/lab/alpha

apoctl api import --file manifests/networkpolicy_allow_ns_alpha_to_bravo.yml \
      -n /bcgov-devex/lab/bravo
apoctl api import --file manifests/networkpolicy_allow_ns_bravo_to_alpha.yml \
      -n /bcgov-devex/lab/bravo

Contents of manifests/networkpolicy_allow_ns_alpha_to_bravo.yml:

APIVersion: 1
label: networkpolicy_allow_ns_alpha_to_bravo
data:
  networkaccesspolicies:
  - name: networkpolicy_allow_ns_alpha_to_bravo
    action: "Allow"
    propagate: true
    subject:
    - - "$namespace=/bcgov-devex/lab/alpha"
    object:
    - - "$namespace=/bcgov-devex/lab/bravo"


Contents of manifests/networkpolicy_allow_ns_bravo_to_alpha.yml:

APIVersion: 1
label: networkpolicy_allow_ns_bravo_to_alpha
data:
  networkaccesspolicies:
  - name: networkpolicy_allow_ns_bravo_to_alpha
    action: "Allow"
    propagate: true
    subject:
    - - "$namespace=/bcgov-devex/lab/bravo"
    object:
    - - "$namespace=/bcgov-devex/lab/alpha"


Validate the Changed Policy

Confirm that applications can now connect between namespaces:

oc -n alpha rsh $(oc get pods -n alpha | grep Running | grep app-1 | awk '{print $1}')
sh-4.2$ curl www.google.ca
<!doctype html>..
sh-4.2$ curl http://db-1.alpha:5432
curl: (52) Empty reply from server
sh-4.2$ curl http://db-1.bravo:5432
curl: (52) Empty reply from server

oc -n alpha rsh $(oc get pods -n alpha | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl www.google.ca
<!doctype html>..
sh-4.2$ curl http://app-1.alpha:8080
<!doctype html>..
sh-4.2$ curl http://db-1.bravo:5432
curl: (52) Empty reply from server

oc -n bravo rsh $(oc get pods -n bravo | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl www.google.ca
<!doctype html>..
sh-4.2$ curl http://app-1.alpha:8080
<!doctype html>..
sh-4.2$ curl http://db-1.alpha:5432
curl: (52) Empty reply from server


View through the Aporeto dashboards:

Verify Defaults with New Application Deployments

Add another application deployment to one of the namespaces/projects and use the same validation method to confirm that the new application can communicate with an application in the other namespace/project.

While this use case could have been implemented by a pair of rules in the Aporeto parent namespace, we can leverage OpenShift namespace/project automation to keep the rules close to the objects that they are affecting, as well as enable delegation to the namespace admin teams using automation.
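For reference, the parent-namespace alternative mentioned above could be a single manifest imported once into /bcgov-devex/lab; the repeated clauses under subject and object act as an OR, following the clause format of the manifests above. This is an untested illustration of that approach:

```yaml
APIVersion: 1
label: networkpolicy_allow_alpha_bravo
data:
  networkaccesspolicies:
  - name: networkpolicy_allow_alpha_bravo
    action: "Allow"
    propagate: true
    subject:
    - - "$namespace=/bcgov-devex/lab/alpha"
    - - "$namespace=/bcgov-devex/lab/bravo"
    object:
    - - "$namespace=/bcgov-devex/lab/alpha"
    - - "$namespace=/bcgov-devex/lab/bravo"
```

The trade-off is centralization: a single rule is easier to audit, but it moves control away from the namespace admin teams that the per-namespace automation deliberately empowers.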

Granular Access Control Within a Namespace

For this example we will not have the “sensible-default” policy applied to our namespace and will focus on layering more detailed constraints into an existing OpenShift namespace. Our first assumption is that the cluster policy allows cluster services to communicate where required for scheduling and other services. Each namespace will have no additional access policies (resulting in zero permitted communication). We will implement an access policy that adds egress internet access for all applications (pods). Communication between applications/pods will require an explicit configuration policy.

Environment Setup

This environment will consist of a few sample application pods running in a single namespace/project within OpenShift. It includes 3 separate pods/deployments:

  • Front End Application Names
    • app-1
    • app-2
  • Database Application Name
    • db-1

A successful deployment will use curl to validate connectivity to the primary service ports with the following expectations:

  • app-1 can only connect to db-1 and egress internet addresses
  • app-2 can only connect to egress internet addresses
  • db-1 can only connect to app-1 and egress internet addresses
  • new app (app-3) is only able to connect to egress internet addresses by default

OpenShift Application Deployment

In this step, create a new namespace/project and deploy the sample application pods into the namespace/project:

oc new-project devops-platform-security
oc new-app postgresql-ephemeral --name="db-1" -p DATABASE_SERVICE_NAME="db-1"
oc new-app nginx-example --name="app-1" -p NAME="app-1"
oc new-app nginx-example --name="app-2" -p NAME="app-2"


Validate the Default Policy

Validate all components can communicate externally but not to each other:

oc rsh $(oc get pods | grep Running | grep app-1 | awk '{print $1}')
sh-4.2$ curl http://db-1:5432
...
sh-4.2$ curl http://app-2:8080
...
sh-4.2$ curl www.google.ca
<!doctype html>..

oc rsh $(oc get pods | grep Running | grep app-2 | awk '{print $1}')
sh-4.2$ curl http://db-1:5432
...
sh-4.2$ curl http://app-1:8080
...
sh-4.2$ curl www.google.ca
<!doctype html>..

oc rsh $(oc get pods | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl http://app-2:8080
...
sh-4.2$ curl http://app-1:8080
...
sh-4.2$ curl www.google.ca
<!doctype html>..

From the above output, we can see that each pod cannot communicate with any other pod. The pods do, however, have egress internet access.

Aporeto APP-1 to DB-1 Allow

Apply the following policy to the namespace to allow traffic from app-1 to db-1:

apoctl api import --file manifests/networkpolicy_allow_app-1_to_db-1.yml \
      -n /bcgov-devex/lab/devops-platform-security


Contents of manifests/networkpolicy_allow_app-1_to_db-1.yml:

APIVersion: 1
label: allow_app-1_to_db-1
data:
  networkaccesspolicies:
  - name: allow_app-1_to_db-1
    action: "Allow"
    propagate: true
    subject:
    - - "app=app-1"
    object:
    - - "app=db-1"


Validate the Changed Policy

Confirm that access between app-1 and db-1 is now successful, and the other access remains the same.

oc rsh $(oc get pods | grep Running | grep app-1 | awk '{print $1}')
sh-4.2$ curl http://db-1:5432
curl: (52) Empty reply from server
sh-4.2$ curl http://app-2:8080
...
sh-4.2$ curl www.google.ca
<!doctype html>..

oc rsh $(oc get pods | grep Running | grep app-2 | awk '{print $1}')
sh-4.2$ curl http://db-1:5432
...
sh-4.2$ curl http://app-1:8080
...
sh-4.2$ curl www.google.ca
<!doctype html>..

oc rsh $(oc get pods | grep Running | grep db-1 | awk '{print $1}')
sh-4.2$ curl http://app-2:8080
...
sh-4.2$ curl http://app-1:8080
<!doctype html>..
sh-4.2$ curl www.google.ca
<!doctype html>..


View through the Aporeto dashboards:

Verify Defaults with New Application Deployments

Add another application deployment and use the same validation method to confirm that the new application pod cannot communicate with the other application pods, but can still access the internet egress.

Automation for Project/Development Teams

The above steps illustrate a straightforward communication policy that can be implemented through codified policy written by the application teams, vetted and approved by their security officers, and implemented through automation. In this use case, we show how project teams can have access to and control over the network communication policies within their project space, and how they can integrate the security policies directly into their application development process. Our next step would be to use automation to apply the codified object to their namespace scope within the Aporeto control plane.

What’s Next?

Our next use case in this series will look at layering Aporeto’s Zero Trust network solution on top of an existing enterprise network to add additional security layers without replacing the existing infrastructure or processes.


Written by Jeff Kelly, Arctiq 
