Cloud Security for Financial Institutions

[Start of recorded material 00:00:00]

Amir: Good morning, everyone. This is Amir Sharif, co-founder of Aporeto. This morning I want to talk about how you can get better application visibility and security with Aporeto, and therefore easier regulatory compliance. For the scope of this presentation I'll focus on PCI DSS compliance and SWIFT CSP compliance. I'll talk about how Aporeto provides better application visibility and stronger security at the same time. Because we store our data in a time series database, we create an audit trail for compliance purposes that can be used for PCI and SWIFT. And at the same time I'll show you how we can make your operations simpler with Aporeto.

So to start with, let's take a couple of use cases. The first one is Exact Transactions, a payment processing company servicing some of the top banks in North America. What they needed was a scalable transaction system that was PCI DSS compliant, precisely because they're dealing with credit cards. What you see on the right-hand side of the screen is a setup where a credit card gets swiped in a multi-tenant reader. That reader then interfaces with a web front end backed by Kubernetes, in this case running in the public cloud. They were running three different availability zones for redundancy and backup.

What Aporeto helped Exact with was, first, reducing the compliance scope through workload isolation. Despite everything being on a flat network per availability zone, Aporeto was able to isolate workloads and segment communications so that only the right instances communicated with each other. And even though there was a multi-tenant reader, none of the tenants could read each other's data. We did this through end-to-end authentication, authorization and encryption of the traffic. So every time somebody came into the system their transactions were encrypted, and policy governance made sure that the right instances were talking to each other.

Moreover, they had uniform API access control, meaning that regardless of where the API access happened, Aporeto, through its end-to-end authentication and authorization, made sure that the right APIs and the right scopes were handled for each transaction. And because we also do image scanning and can monitor application behavior at run time, Aporeto also protected against malicious application discovery and any kind of probing. In fact, as soon as Exact Transactions went live, an attack started coming in within seven seconds. But that's an attack that Aporeto was able to prevent, and the system is up and running today.

Second, I want to talk about SWIFT CSP. SWIFT CSP has 16 high-level requirements that I've tried to distill into three different segments. The first is securing the environment: restricting access to the internet so that only the right instances can communicate with it, and reducing the attack surface and vulnerabilities by making sure that scanners are in place and that an infected image has restricted access. The second is knowing who is accessing the system and limiting their access. This is done through identity management and managing privileges, so that the right users interact with the right systems, the right systems interact with each other based on their credentials, and those credentials cannot be compromised.

As I will explain in the webinar today, Aporeto provides a full PKI infrastructure and automatically rotates keys and certificates, thereby relieving operators of a significant, error-prone burden and preventing developers from hard-coding credentials and leaking them inadvertently.

Third, there's the detect-and-respond portion of the SWIFT CSP requirements, which is, first, detecting anomalous behavior and, second, once that anomalous behavior is found, planning for incident response and sharing information widely. We do this through monitoring application behavior and alerting, providing the right alerts to the SIEM system or whatever alerting system the customer requires on the back end.

So looking at our solutions: first, we provide application identity, so it's not just the user identity; it's also the application identity through a cryptographic signature. Second, vulnerability scanning, making sure that we know what's running in the images, and if the images are known to be bad, isolating them or precluding them from doing any damage based on existing policy. Third, segmenting workloads, making sure that the right instances communicate with each other. And fourth, a full PKI infrastructure, having the cryptographic capability to authenticate end resources based on a root of trust that can be provided by the customer or by Aporeto. All of those get rolled up in our product. We monitor the application, and whenever there's anomalous behavior we provide an alert so you can take the right remedial action.

So let's take a step back and see why we are in our current security predicament. And here we have to go back in history a little bit and talk about the open internet design. ARPANET, the grandfather of the internet as we know it today, was designed so that every node would be discoverable. That was a design criterion, so that if the U.S. were attacked by an adversary and two thirds of the country were wiped out, the remaining third could still communicate through this mechanism.

And as we matured in the late '80s and early '90s, we started using the internet for commerce, and secrets became important. So we started injecting things like firewall rules. In this topology it's fairly simple because the topology is small, but as the topology gets larger, finding where we have to exert those firewall rules or segmentation rules becomes increasingly difficult. In fact, it's an n-squared operation, and that n-squared operation has to happen every time a new node comes up or any time there's a change in the network topology.

So looking at the level of computation that we have to run: with six nodes, as I have up there, it's fairly simple. You only have to do about 15 pairwise calculations. But as you scale up from six nodes to, eventually, 6,000, those calculations become roughly 18 million. And 6,000 instances is entirely possible in an environment; in fact, we have seen that in some of our customer environments. Running 18 million calculations per change becomes error-prone and problematic. The way that Aporeto approaches security is through a whitelist approach, meaning that you know what the application ought to do and how it ought to communicate.

Or, if we don't know that, we can discover it through our learning mode. Once you know what the communication patterns are, that becomes the basis of your policy, and you can reduce the complexity and provide better predictability. Because Aporeto approaches security through end-to-end authentication and authorization, done transparently to the application and on the fly, we're now in linear space. So as you scale up, the relative complexity is quite a bit smaller. At six nodes you're basically computing six identity calculations versus roughly 15 pairwise calculations in the other model. And at 6,000 we're still at 6,000 computations versus roughly 18 million in the old model. This linear approach provides a security model that is both simpler and stronger, precisely because of the end-to-end authentication and authorization that Aporeto does.
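
To make the scaling argument concrete, here is a minimal, purely illustrative Python sketch (not Aporeto code) that compares the number of pairwise firewall-style rules against the number of identity-based computations as the fleet grows; n(n-1)/2 is simply the count of unordered node pairs.

```python
# Illustrative sketch only: pairwise network-rule counts vs. linear,
# identity-based policy counts as the number of nodes grows.

def pairwise_rules(n: int) -> int:
    """Unordered node pairs to consider: roughly n-squared growth."""
    return n * (n - 1) // 2

def identity_computations(n: int) -> int:
    """One identity computation per node in an identity-based model."""
    return n

for nodes in (6, 60, 600, 6000):
    print(f"{nodes:>5} nodes: "
          f"{pairwise_rules(nodes):>10,} pairwise rules vs "
          f"{identity_computations(nodes):>5,} identity computations")
# 6,000 nodes -> 17,997,000 pairwise rules (about 18 million) vs 6,000 computations
```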

So what is Aporeto, and how does it work? The five key features are that, first, we distribute the policy to where the applications are. We have access control. We encrypt traffic transparently to the application. We generate an attribute fingerprint per application, so we know exactly what that application is, and if there's any kind of modification we pick it up because the fingerprint changes. And additionally we do threat and vulnerability analysis. So if there's an image scanner that you have, or you want to use Aporeto's image scanner, we factor that into the identity of the application. And when a vulnerability pops up we can take the right action with policy that's distributed among your workloads.

So looking at it as a whole, first we generate a contextual ID that is built automatically. That contextual ID comes from wherever the workload is running, who started it and what is being run. For example, Joe started an application that has libraries A, B and C on Amazon West: that would be an example of the various attributes. We also get metadata from the CI/CD pipeline, so if there's an orchestration layer, or developers are creating the containers with tags, or there are any kind of tags in the orchestration layer, we consume those, and that becomes part of the application identity data.

Finally, and a very important component, is what threat analysis exists within the containers that you're running. As we get CVE data, that CVE data also gets included in the application identity and becomes part of the basis by which policy can be enforced. That policy is then distributed, and it's decoupled from the infrastructure. By that I mean I never mentioned IP addresses as part of the identity. An IP address is a location, and it's only a proxy for identity. And in a containerized world where applications come and go, IP addresses change frequently, and they become very hard to maintain.
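
As a rough illustration of what such a contextual, infrastructure-decoupled identity could look like, here is a hypothetical sketch in Python; the field names and matching helper are assumptions for illustration only, not Aporeto's actual schema.

```python
# Hypothetical sketch of a contextual workload identity as key-value
# attributes. Field names are illustrative, not Aporeto's real format.
workload_identity = {
    # Who started it and what is running (runtime context)
    "user": "joe",
    "image": "payments-frontend:1.4.2",
    "libraries": ["libA", "libB", "libC"],
    # Where it runs (cloud metadata), deliberately not an IP address
    "cloud": "aws",
    "region": "us-west-2",
    # Orchestrator / CI-CD metadata consumed as tags
    "k8s.namespace": "prod",
    "k8s.app": "frontend",
    "env": "prod",
    # Vulnerability scan results folded into the identity
    "cves": ["CVE-2017-5638"],          # example CVE for illustration
    "cve.max_severity": "high",
}

def matches(identity: dict, selector: dict) -> bool:
    """True if every key-value pair in the policy selector is present in the identity."""
    return all(identity.get(k) == v for k, v in selector.items())

print(matches(workload_identity, {"env": "prod", "cloud": "aws"}))  # True
```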

That's why it's very important to take an identity-based approach instead of an IP-based approach. One of the benefits of that approach is that you can now abstract away the infrastructure and work across environments. So you could have a workload that's running in GCP, AWS and Azure simultaneously. Its security policy is going to be consistent across all environments, and you don't have to worry about the networking shenanigans you would otherwise have to deal with, including setting up VPNs, firewalls, access control lists and so forth. It is truly decoupled from the infrastructure.

And when we have that decoupled security enforcement, aligned with the contextual ID, we orchestrate it using a whitelist model and monitor the application behavior. So if the application typically opens port 80 for communication, and all of a sudden it starts opening port 673, for example, we detect that change and block that communication, at the same time throwing an alert. And again, it all comes down to authenticating and authorizing all communications and transparently encrypting traffic without application modification.
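
The port 80 versus port 673 example boils down to comparing observed behavior against a learned baseline. Here is a minimal sketch of that whitelist logic, purely illustrative and not Aporeto's implementation; the baseline table and alert function are assumptions.

```python
# Minimal sketch of whitelist-style behavior monitoring: if a workload uses
# a port outside its learned baseline, block the flow and raise an alert.
BASELINE_PORTS = {"frontend": {80, 443}}   # learned during the discovery phase

def alert(message: str) -> None:
    # In practice this would go to a SIEM; here we just print.
    print("ALERT:", message)

def check_connection(workload: str, dest_port: int) -> str:
    allowed = BASELINE_PORTS.get(workload, set())
    if dest_port in allowed:
        return "allow"
    # Deviation from the learned baseline: block and alert.
    alert(f"{workload} attempted unexpected port {dest_port}")
    return "block"

print(check_connection("frontend", 80))   # allow
print(check_connection("frontend", 673))  # block, plus an alert
```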

So let's look at the solutions that Aporeto provides. Because we monitor all the communications that are going on, we have full visibility into what the application is doing. We can visualize it and provide that visualization in real time, storing it, as I mentioned, in the time series database. So you basically have a record-and-replay capability for the application. If you want to know what the application was doing yesterday at 11:32 A.M., or the day before at 2:47 P.M., you can easily do that.

Basically, what the application was doing at the time of an alert becomes known, and you can go back in time, visualize it and do forensic analysis on what went wrong. Furthermore, you can use the same data to show that you were compliant at any given time, because we track application interactions over a long period. And with that comes automated flow and telemetry logging. We can show you what protocols were used, which ports were open, what API calls were made, what CRUD operations those API calls performed, and who made them. All of that becomes part of your security toolset to make sure that your application is coherent, that it's up to compliance, and that you have full visibility and control over it.
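
Conceptually, record and replay is just a time-windowed query over stored flow records. The sketch below shows the idea in Python; the record format and the query helper are hypothetical, stand-ins for whatever the time series store actually holds.

```python
# Illustrative sketch of "record and replay": query flow records stored with
# timestamps for what an application was doing at a given moment.
from datetime import datetime

flow_log = [
    {"time": datetime(2018, 6, 4, 11, 32), "src": "frontend", "dst": "payments-api",
     "protocol": "TCP", "port": 443, "api_call": "POST /charge", "user": "joe",
     "action": "allow"},
    {"time": datetime(2018, 6, 4, 11, 33), "src": "frontend", "dst": "unknown-host",
     "protocol": "TCP", "port": 673, "api_call": None, "user": None,
     "action": "block"},
]

def flows_between(log, start, end):
    """Return every recorded flow in the [start, end) window, for replay or audit."""
    return [rec for rec in log if start <= rec["time"] < end]

window = flows_between(flow_log, datetime(2018, 6, 4, 11, 30), datetime(2018, 6, 4, 11, 35))
for rec in window:
    print(rec["time"], rec["src"], "->", rec["dst"], rec["action"])
```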

Second, as I mentioned, is encryption. We do service-to-service encryption. By service I mean it could be a microservice, a full VM or a container. What we do, transparently to those workloads and services, is mutual TLS. So the developers don't have to compile in any SSL libraries. They don't have to worry about key management systems, secrets and so forth. Aporeto handles that transparently to the application and does the key rotation and secrets management for you.
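
For reference, this is roughly what mutual TLS looks like when an application has to do it itself, shown with Python's standard ssl module; in the model described above the Enforcer handles this transparently, so application code never contains anything like it. The certificate file paths are placeholders.

```python
# Minimal sketch of mutual TLS with Python's standard library. This is the
# burden that transparent service-to-service encryption removes from the app.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="service.crt", keyfile="service.key")  # this workload's identity cert (placeholder paths)
ctx.load_verify_locations(cafile="root_of_trust.pem")               # customer-provided root of trust (placeholder)
ctx.verify_mode = ssl.CERT_REQUIRED   # require and verify the peer's certificate: mutual TLS

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # handshake fails unless the peer presents a trusted cert
        peer_cert = conn.getpeercert()       # authenticated peer identity
        print("authenticated peer:", peer_cert.get("subject"))
        conn.close()
```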

The third important pillar that we have is vulnerability analysis. Because containers are immutable, we scan the container image to know what vulnerabilities are included in it, and we advertise those for you. What that enables you to do is declare certain policies. For example, if there's a particular vulnerability that you want to block, and you know its CVE code, you can say, "Whenever I see this particular CVE, take the following action," such as blocking traffic to the internet. Or you can say, "Whenever I have a vulnerability that's rated high, simply throw an alert. Let me know about it so I can take immediate remedial action."
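
A sketch of what such CVE-driven policy declarations could look like is below; the rule structure, field names and actions are illustrative assumptions, not Aporeto's policy syntax.

```python
# Illustrative sketch of CVE-driven policy: map scan findings to actions
# such as blocking internet egress or raising an alert.
RULES = [
    {"match": {"cve_id": "CVE-2017-5638"}, "action": "block-internet"},  # block a specific CVE
    {"match": {"severity": "high"},        "action": "alert"},           # alert on any high-rated finding
]

def evaluate(findings):
    """Return the list of actions triggered by a workload's vulnerability findings."""
    actions = []
    for finding in findings:
        for rule in RULES:
            if all(finding.get(k) == v for k, v in rule["match"].items()):
                actions.append(rule["action"])
    return actions

print(evaluate([{"cve_id": "CVE-2017-5638", "severity": "critical"}]))  # ['block-internet']
print(evaluate([{"cve_id": "CVE-2020-0001", "severity": "high"}]))      # ['alert']
```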

But our vulnerability analysis goes further. We're actually monitoring the application at run time and seeing how it interacts with the underlying system by monitoring the system calls. That allows us to do intrusion detection and file integrity monitoring. If the application behavior changes from the baseline that we have, we can flag that for you. And if it starts accessing certain files, like the system password file, for example, that becomes an alert, and we can provide that status so you can take immediate remedial action. And although I mentioned this before, it's also important to call out independently: the service identity. We generate an automated service identity whenever the container comes up.

So as soon as it comes up, we generate a cryptographic signature for it and make sure that the identity is unique and doesn't get replicated. In fact, if somebody comes in with an identical container with an identical set of tags, because we have a multi-attribute system and we talk to the orchestration layer, we can identify that the container that came up is a masquerading container and not part of the application, and therefore we can block it.

A very important part of this whole system is certificate rotation and revocation, done automatically per your policy. So if you want to rotate a certificate, let's say, every five minutes or every few hours, that can be done automatically without human intervention, which makes the system a lot more reliable and less error-prone.
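
In outline, policy-driven rotation is just a loop that mints a fresh key pair and short-lived credential on a fixed interval. The sketch below, assuming the third-party Python cryptography package and a stubbed issuing function, shows the shape of that loop; it is not Aporeto's implementation.

```python
# Minimal sketch of automatic credential rotation on a policy-defined interval.
# issue_certificate() is a stub standing in for a real CA / PKI backend.
import time
from cryptography.hazmat.primitives.asymmetric import ec

ROTATION_INTERVAL_SECONDS = 300  # e.g. "rotate every five minutes" per policy

def issue_certificate(public_key):
    """Stub: a CA rooted in the customer's root of trust would sign this key."""
    now = time.time()
    return {"public_key": public_key, "issued_at": now,
            "expires_at": now + ROTATION_INTERVAL_SECONDS}

def install(private_key, cert):
    """Stub: hand the fresh credential to the data path."""
    print("installed certificate valid until", cert["expires_at"])

def rotation_loop():
    # Runs forever: no human intervention is needed to keep credentials fresh.
    while True:
        private_key = ec.generate_private_key(ec.SECP256R1())   # fresh key pair
        cert = issue_certificate(private_key.public_key())      # fresh short-lived cert
        install(private_key, cert)
        time.sleep(ROTATION_INTERVAL_SECONDS)
```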

So let me talk about how Aporeto works in operations. First, we have a distributed security enforcement layer: wherever your workload runs, Aporeto runs with it. That's part of a deeper conversation, but for now, take my word that there is an Enforcer living close to every workload that you're running. Whenever an orchestrator fires up a new workload, the Enforcer detects it. It talks to the orchestration layer, sees a new process or process tree coming up, and says, "Okay, this particular set of processes, or that particular workload, is associated with the application that I'm protecting." At that point the workload is under the protection of the Enforcer, and the Enforcer goes and collects metadata about that particular workload that has come up.

It goes to the orchestration engine and into the operating system, and it collects as much metadata as it can. For example: give me the Docker tags that are associated with it, give me the Kubernetes tags, or, if you're running Mesos, give me the Mesos tags or Chef tags and so forth. From the operating system it picks up the set of libraries that are associated with the workload, and it picks up the hash that's associated with it, so it knows what its identity is. In fact, if somebody changes the binary we can detect that; we then know there's a drift from the baseline, and we can basically stop it.

So those are two sets of attributes that come from the orchestration engine and the operating system. We also have the optional capability of picking up cloud identity document information. For example, where is the workload running inside AWS? Is it running in the Oregon datacenter? Is it running in the Virginia datacenter? All of that becomes part of the identity. The other optional set of identity data comes from image scanners, which we ship by default as part of our platform, or we can integrate with your existing systems. So that's where the CVEs come in. All that metadata then gets incorporated as a set of key-value pairs that uniquely describe the particular workload that just came up.

And all this is done in milliseconds, so latency is basically negligible. That set of key-value pairs is then cryptographically signed with a certificate that the Enforcer has. That certificate is derived from the root of trust that you assign for Aporeto. And as I mentioned before, those certificates and secrets can get rotated frequently, making it harder for anyone trying to break the cryptography.
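
To illustrate the signing step, here is a small sketch that serializes a set of key-value attributes and signs them so a peer can verify them. It assumes the third-party Python cryptography package and a locally generated key; the attribute names and serialization are illustrative, not Aporeto's actual wire format.

```python
# Illustrative sketch: serialize a workload's key-value attributes and sign
# them so peers can verify the identity claim.
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Illustrative key; in the model described above it would derive from the root of trust.
enforcer_key = ec.generate_private_key(ec.SECP256R1())

attributes = {"env": "prod", "app": "payroll", "image_hash": "sha256:placeholder", "cloud": "aws"}
payload = json.dumps(attributes, sort_keys=True).encode()            # canonical serialization
signature = enforcer_key.sign(payload, ec.ECDSA(hashes.SHA256()))    # signed identity claim

# A peer holding the corresponding public key can verify the claim;
# verify() raises InvalidSignature if the attributes were tampered with.
enforcer_key.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA256()))
print("identity attributes verified")
```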

So now we have a cryptographically signed identity from the Enforcer. The question you would have is, "How do I use this identity to create policy?" The way Aporeto approaches policy is quite simple. We use a subject, verb, object construct. Our verbs are quite simple: things like allow, read, encrypt, block, log and so forth, a set of verbs that are useful and known to you. The scopes of the subject and object are defined by the set of key-value pairs that we generated from the workloads. Those are the contexts that I mentioned before. So if you want a very permissive policy, such that everything within your production environment can talk to itself, your policy statement would look like this: "env=prod connect env=prod." Your subject and object are the same, the verb is "connect," and anything with a production tag can now connect to itself.

But let's say that you want to get more restrictive and have a narrower policy. At that point you would say, "My payroll application in production can have access to data that's labeled as confidential in the production environment, so long as that connection is encrypted." You can create policies as narrow or as wide as you want using this simple subject-verb-object construct. So it's a very simple, human-readable policy language. And in fact, Aporeto automatically generates these policies for you based on its observation of the application and allows you to modify them or construct your own. It's a really simple language, really easy to operationalize.
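
Here is a small sketch of how the subject-verb-object construct can be represented and evaluated against the key-value identities described earlier. The layout and helper functions are illustrative assumptions, not Aporeto's actual policy syntax.

```python
# Sketch of the subject-verb-object construct. Subjects and objects are
# key-value scopes over workload identities; the syntax here is illustrative.
POLICIES = [
    # Narrower rule listed first in this simple sketch: payroll in production
    # may reach confidential production data, provided the connection is encrypted.
    {"subject": {"env": "prod", "app": "payroll"}, "verb": "encrypt",
     "object": {"env": "prod", "data": "confidential"}},
    # "env=prod connect env=prod": anything tagged production may talk to production.
    {"subject": {"env": "prod"}, "verb": "connect", "object": {"env": "prod"}},
]

def scope_matches(identity, selector):
    """True if every key-value pair in the selector is present in the identity."""
    return all(identity.get(k) == v for k, v in selector.items())

def decide(src_identity, dst_identity):
    """Whitelist model: return the first matching verb, or block when nothing matches."""
    for rule in POLICIES:
        if scope_matches(src_identity, rule["subject"]) and scope_matches(dst_identity, rule["object"]):
            return rule["verb"]
    return "block"

print(decide({"env": "prod", "app": "payroll"}, {"env": "prod", "data": "confidential"}))  # encrypt
print(decide({"env": "prod", "app": "web"}, {"env": "prod"}))                              # connect
print(decide({"env": "dev"}, {"env": "prod"}))                                             # block
```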

Earlier I talked about how we can simplify your operations as well. Going back to the Exact Transactions case, this is an architectural diagram that comes from them. What you see up top is what they had before Aporeto was in the picture. In this case I'm only showing two of the three availability zones that they have. Within each availability zone they had two different VPCs that they had interlaced together with an SDN solution, a network overlay that connected those two VPCs. And to segment traffic between those two VPCs, they had east-west firewalls, access control lists and some network address translation rules.

To manage all the secrets they were using a key management system, and they were replicating the same setup in every availability zone. To manage traffic between the availability zones for database replication they were using a VPN tunnel, with gateways on both sides and north-south firewalls on either side of those gateways. So it's a fairly standard setup, but again, it's complex. And in this case, because they were using containers orchestrated by Kubernetes, IP addresses were changing all the time, and maintaining that state and uniformity was quite difficult.

So what Aporeto did was effectively wrap every single component in a force field. That allowed them to go to a flat network per availability zone. The two VPCs collapsed into one, everything went to a flat L3 network, and Aporeto then provided the governance of how those applications could communicate with each other. If you look at this diagram, you notice that not only do the VPCs go away, but the east-west firewalls as well as the access control lists, NAT rules and the key management systems all disappear. That's not only a reduction in operational complexity but also a reduction in the number of licenses that they had to maintain, so it's also a cost savings.

Now, between the availability zones they used to have a VPN tunnel. But because Aporeto does transparent end-to-end encryption, that VPN tunnel was no longer needed. They could basically send the already-encrypted traffic over what appeared to be a clear-text channel. If anybody was snooping in the middle, they would have just seen garbled data. Aporeto, because of its distributed PKI infrastructure, encrypts and decrypts the traffic end-to-end and makes sure the applications can communicate with each other, while anybody listening in just sees encrypted traffic and cannot make any sense of it. So that's an example of the clarity and the reduction in complexity that we provide, while giving you full application visibility that allows you to ensure you're compliant with security policy at any given time, whether it's PCI or SWIFT.

So now what I'm showing you here is how that is manifested in the UI. This is a snapshot of what you would see. What you see at the top are the various attributes that the application workloads have. In this case we have a front-end unit that's a Kubernetes workload, and the metadata that's associated with it is displayed here. In the background you see the connectivity patterns: the red lines denote communications that were initiated but blocked, and the green lines denote communications that were allowed.

So you get full application visibility, and you can consume Aporeto on premises, meaning you can install the service in your datacenter and run it yourself as software, or consume it as a service on Console.aporeto.com. So there are two easy models: maintain the software yourself for whatever reasons you might have, or have Aporeto maintain that backend for you, manage the upgrades and basically roll out the service for you. That's analogous to, let's say, running Salesforce: it's security as a service in this instance.

So how do you get started? Despite the robust capabilities of Aporeto, it's deceptively easy to get up and going. First, you basically pick an app. Pick an app, visualize it and see what the connectivity patterns are. Aporeto creates a full application map that's both real-time and historical. That application map allows you to simulate security. Aporeto will automatically generate security policies for you, allowing you to read them, modify them and simulate them, without having any impact on the application itself.

So in that mode, Aporeto would basically say that this would be a violation of policy and it would block this communication, but it doesn't actually block the application at run time. That allows you to tweak your policies to make sure that you have exactly what you need. Once you have full confidence in the system and the policies that you have, you can fully operationalize them and put them into action. The application at that point is secure. And once you've done this with one application, you basically rinse and repeat and go to the next application.
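
The difference between simulation and enforcement comes down to what happens on a policy violation. The following sketch is a simplified illustration of that distinction, with a hypothetical mode flag and logging helper.

```python
# Illustrative sketch of simulate-then-enforce: in simulation mode a policy
# violation is only reported ("would block"); in enforce mode it is blocked.
MODE = "simulate"   # flip to "enforce" once you trust the generated policies

def log(message):
    print("[policy-sketch]", message)

def handle_violation(flow):
    if MODE == "simulate":
        log(f"would block {flow} (policy violation, no runtime impact)")
        return "allow"   # traffic still flows while you tune the policies
    log(f"blocked {flow}")
    return "block"

print(handle_violation("frontend -> db:5432"))
```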

In an existing environment, installing the Aporeto Enforcer is quite easy. At the shell it's basically two commands: you do a curl to download the Enforcer onto your OS image, you do a sudo yum install, and you're up and going. That's all it takes. In a full Kubernetes environment, for instance, we deploy as a container, so whenever the application comes up, Aporeto can come up automatically. Or, if you have a golden image, the Aporeto Enforcer can be baked into the golden image and you don't have to do the two-command installation at all. As soon as that AMI wakes up, it basically dials into the service, and you have full protection. So it's quite simple, quite easy to operationalize, and the policies are human readable and understandable, based on the subject-verb-object construct that I showed earlier.

So looking at the Aporeto benefits, it basically comes down to four pillars. One is reduced network complexity and cost, as I showed in the Exact use case. The second is saving developers time, so they don't have to worry about key management systems, encryption and so forth, and keeping their full choice of development framework: zero change to what the developers do, while giving you full visibility and full control of the application from a security perspective. The third is uniform security across all environments, whether you're running in your own private datacenter, on GCP or AWS, or in a hybrid or multi-cloud environment.

The security policy becomes uniform because of our distributed approach to enforcement and policy. And the fourth is putting everything inside a time series database and providing alerting, so you get compliance and scope reduction: knowing when something has gone wrong, and, if somebody audits you, being able to show that you were in compliance at a particular time. That ties back into your PCI requirements or SWIFT requirements from a financial perspective.

So that's a very high-level view. I would encourage you to sign up for our demo; to do that you can go to Aporeto.com/demo, and one of us will get back to you in short order and show you a full demo of the capability. Getting up and going on a POC is quite easy. In fact, if you want to run your proof of concept on AWS, because of the agreement that we have with Amazon, it's at no charge to you, completely free. You can test it out, and we are there to support you every step of the way. So with that, let me wrap up the webinar. I appreciate your attention this morning.

And while we're at it, let me see if there are any questions that have come in. There is one question that has come up. Basically, the question is, "Where does Aporeto fit within the operational environment for the customer?" If I can show you the architectural diagram, here is a good view of where Aporeto fits. At the lower level what we have is any infrastructure. One of the statements I made was that Aporeto abstracts away the infrastructure. It can run on an ESX farm. It can run on AWS, Azure or GCP. Aporeto is operational on nearly all of the logos that you see down there. So the infrastructure itself is transparent to us; it's abstracted away, and you can run on one or many of these types of environments. At the middle tier you see the Aporeto Enforcer. Basically, Aporeto runs in two different modes depending on the setup that you have.

If you have a VM environment, Aporeto runs as a user-space process; that's what you see here on the right-hand side. If you're running containers or a Kubernetes environment, for instance, Aporeto runs as a Docker container on the Docker engine. So there are two different operational modes, and the orchestration layer on top can be anything you want. Those Enforcers dial back to the Aporeto service and provide the telemetry and visibility into the application, as well as the security controls that I mentioned. So when a user signs in and starts interacting with one of your services, we absorb that user identity as well, depending on what you use for your sign-on services. You could use Active Directory, LDAP or Okta; it doesn't matter to us. But that's where we get the user identity and understand the scopes that are associated with that particular user.

And when the user starts interacting with the system, we maintain the scopes as the user interacts with the API layer and the system, and the system then percolates the commands out to other services. So the user scopes are maintained end-to-end as the application interacts on behalf of the user. On the backend we have the ability to tie into vulnerability scanners. Again, Aporeto does ship with its own vulnerability scanner, but if you have your own scanner, whether it's Qualys or Tenable, we can easily plug into those, consume the information they provide and bring it into the application ID. Our alerting is done through a number of means, but typically the requirement is that we integrate with a SIEM system. That could be Splunk, IBM QRadar or a third party of your choice, as long as it supports RESTful APIs.
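
As a rough picture of what forwarding an alert to a SIEM over a RESTful API involves, here is a stdlib Python sketch. The endpoint URL, token and payload fields are placeholders; real integrations such as Splunk or QRadar each define their own API shapes.

```python
# Sketch of forwarding an alert to a SIEM over a RESTful API.
# Endpoint, credential and payload fields are placeholders for illustration.
import json
import urllib.request

def send_alert(event: dict) -> int:
    req = urllib.request.Request(
        "https://siem.example.com/api/events",            # placeholder SIEM endpoint
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},       # placeholder credential
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

status = send_alert({"severity": "high", "workload": "frontend",
                     "message": "unexpected connection attempt on port 673"})
print("SIEM accepted alert, HTTP", status)
```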

And we do integrate with CI/CD pipelines, whether it's Jenkins or GitHub or another tool. We integrate with those, and we learn the application behavior as it goes from development to operations. In fact, if you have a full CI/CD pipeline and you deploy Aporeto at development time, the security policies get promoted from development to production, and you're basically up and running as soon as the application is promoted, with a full set of security rules associated with that application. And we've taken great care to make sure that we are cloud API compliant, so you don't have to learn a whole new set of APIs to interact with our system.

So let me see if there are any other questions coming in. There is a second question that basically asks how we make the cloud environment ready. In fact, the question is, "Can you outline how Aporeto helps companies migrate to a cloud-native environment?" For that, let me show you the following slide. In a traditional datacenter, everything is something you manage: it's a physical infrastructure that you manage, and then you overlay the network on top of it with an L1 through L7 infrastructure that you have to maintain. Security is then deployed on top of that network using firewalls, software-defined networking and ACLs, all tied to the networking layer. And the applications traditionally are rigid and structured.

So they're the classic n-tier applications. What we see as we move forward on the journey to the cloud, and this is where some of our customers are, is that the infrastructure is now virtual. This is an AWS infrastructure, for instance, where you don't have to worry about the physical infrastructure. The network is still L3 through L7, and it is tied directly to the infrastructure; that's part of the rigidity that comes with it, but it is easier than it is in traditional networks. We are seeing a DevOps transition at the application layer, where traditional applications are running alongside containerized workloads. But the security layer is still pretty much what it used to be: tied to the network, in this case L3 through L7, and still using network constructs like firewalls, SDNs, ACLs and so forth.

As we look forward to the cloud-native stage of the journey, the infrastructure is completely abstracted and virtually managed, as it is in AWS. But so is the network: you really don't care what the network looks like. And that puts the onus on the security layer to also be abstracted away from the network. This is why at Aporeto we have an identity-based, zero-trust security solution. That's been basically the whole thrust of this conversation: how do we generate application identity, how do we marry it with user identity, and how do we segment traffic and provide visibility for an application suite that's stateless, dynamic, completely scalable and able to run on any infrastructure? So if you look at this model, what Aporeto is doing is helping you move from a cloud phase-one scenario to a multi-cloud, cloud-native solution with an abstracted security layer that's independent of the network and works with your legacy applications as well as your more cloud-native, stateless, dynamic, scalable applications.

And let me see if there are any other questions. No, it looks like I've got them all. So with that I thank you for your time, and I look forward to seeing you on Aporeto.com/demo. Thank you, and have a good day.

[End of recorded material 00:39:51]