Transcript

Thelen: Hello everyone. Welcome to the webinar “Unleash the Power of Identity-based Cloud Security with Aporeto.” My name is Thelen Blum and I will be your host for today’s webinar. I’m the Director of Product Marketing at Aporeto and I’m here with Anand Ghody, Senior Technical Marketing Engineer at Aporeto. We will be presenting today. A couple of housekeeping notes before we get started. This webinar is being recorded and will be made available on BrightTALK and our website afterwards. All attendees are in listen-only mode. We will close with Q&A at the end of the session. If you would like to ask a question, please type it in the question window of BrightTALK and Anand will address as many questions as possible, time permitting.

 

We have also uploaded four reference handouts. Feel free to download this content at any time during the webinar. Additionally, we want to make sure that you can hear us so if someone would go ahead and, in the questions window, just type that you can hear us, we would appreciate that. I would like to present a little background on Anand before we get started.

 

Anand Ghody is Senior Technical Marketing Engineer at Aporeto with 20 years of experience in information technology. Prior to joining Aporeto, Anand worked for several years as a solutions architect at Riverbed, with a focus on networking and application performance within large, complex enterprise environments. A technology enthusiast, Anand has over the past few years shifted into the cloud security and microsegmentation space at Aporeto, helping organizations understand the benefits of an identity-based security model to reduce security complexity, improve application risk posture, and maintain robust compliance. Now we will launch into our webinar discussion today.

 

As more organizations migrate to the cloud, many are confronted with the challenge of securing workload environments both on-prem and in the cloud. The increasing use of cloud native technologies like containers, Kubernetes, service mesh, and serverless adds more complexity to effectively securing these very different environments. These shortcomings are amplified by a traditional network security model that relies on IP addresses and lacks both scalability and protection against modern threats. What is needed is a new identity-based approach to cloud security.

 

The Aporeto platform abstracts security away from the IP infrastructure to address application segmentation requirements and improve application risk posture. Aporeto’s identity-based segmentation approach transcends the security constraints of on-prem infrastructure while securing all cloud services, effectively implementing a zero trust security posture across your infrastructure and providing segmentation for modern applications based on cryptographic workload identity rather than IP addresses. Security policies remain persistent no matter where the application resides in your cloud or hybrid environment. Without further ado, I will now pass the ball to Anand. Let’s get started.

 

Anand: Good morning, or good afternoon as it may be. My name’s Anand Ghody and I’m a Technical Marketing Engineer at Aporeto, as Thelen has just said. I’m going to get into our slides and explain the security challenges that are presenting themselves today.

 

So we started with hardware-based servers that we eventually moved over to VMs. And that escalated the speed at which we need to address security. Firewalls were there, but your VMs could be stood up and torn down faster. That problem, of security keeping up with the speed at which infrastructure is being created, has been around for a while, but it was somewhat manageable. As we’ve moved into more cloud native architectures, moved from VMs to containers, moved over into cloud infrastructures, the speed at which we need to address security has grown to the point where keeping up becomes almost impossible. We need to automate, and our traditional method of trying to secure infrastructure has become very problematic. It’s slowing us down in a lot of ways. So we need a new way to address these new methods by which our infrastructure is instantiated.

 

So what we have to do is think about a new method that’s not necessarily based on IP, but on something else. What we’re doing is using identity to do that, and I’m going to explain a little bit more about that. But I just want to point out that our solution is completely backward compatible with the legacy way of doing things, while also being able to address the newer methods people are using to build their applications: containers, microservices, APIs, as well as serverless.

 

In traditional architectures, we’ve had our firewall as our perimeter. And it’s been somewhat porous. We know about all the different methods threat actors have used to get into our infrastructure and move laterally. Think about companies like Equifax, where a compromised web server let threat actors make their way to the more critical applications. Now, we’ve used different methods to segment our applications and try to protect them, but those segments are still not granular enough, and threat actors are still able to get in.

 

So we need to think about things in a different way. As we move to a zero trust model, we have to realize that the internal threat is just as great as the external one. Once someone is in, if there’s nothing preventing them from moving around laterally, you’re going to be vulnerable. You need to think about how you’re dealing with security in a new way. Firewalls can help you, but they’re either too granular or too broad in terms of how you apply them.

 

As we’ve adopted cloud, the definition of what the perimeter is has become fuzzy. It becomes even more difficult to secure our applications in a way that prevents them from being compromised. As we move toward microservices, adopt cloud native architectures, and deal with containers and APIs, our traditional ways of doing segmentation, firewalls and VPNs for example, don’t apply anymore. So again, we have to think about a new way to protect the applications that developers are building today.

 

So we’re not only dealing with that workload access problem, but also with users coming into these environments. Again, it’s not just about workloads, but about internal people as well. Once they get into the VPN, there’s very little controlling where they go. So we’re trying to address this user problem as well. It only gets more complicated as you move into the cloud: you’ve got a much larger, more complex infrastructure, and controlling where those users can go becomes even harder. So we’re addressing these types of issues.

 

So what Aporeto is: we provide a non-IP based method to secure workload communication and application access across any infrastructure using cryptographic identity. Now, IP can be spoofed. You’ve got to deal with NATing. There are a lot of ways in which IPs will fail you, so what we’re doing is a different way of securing this communication for more modern types of applications and application development. And what I mean by cryptographic identity is that we’re taking, basically, key value pairs: from the cloud provider, from a vulnerability scanner, from the OS, from your single sign-on provider and the claims it provides. We’re taking all this information and creating a cryptographically signed JSON web token that we pass between two workloads, or between a user and a workload, to verify who they are.
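
To make the idea concrete, here is a minimal sketch of what "key value pairs signed into a JSON web token" looks like. This is purely illustrative and not Aporeto's actual implementation: it uses the PyJWT library with a shared secret, whereas in practice each workload would sign with its own private key issued by a PKI, and the claim names shown are made up.

```python
# Minimal sketch: collect key-value identity attributes and sign them into a
# JSON Web Token. Claim names are hypothetical; the shared secret is only for
# illustration, a real deployment would use per-workload asymmetric keys.
import time
import jwt  # pip install PyJWT

identity_claims = {
    "cloud:account": "123456789012",      # from the cloud provider metadata
    "cloud:security-group": "web-tier",
    "image:vulnerability-score": "low",   # from a vulnerability scanner
    "os": "ubuntu-20.04",
    "app": "app1",                        # operator-assigned label
    "iat": int(time.time()),
    "exp": int(time.time()) + 60,         # short-lived token
}

# Sign the claims so the receiver can trust them.
token = jwt.encode(identity_claims, "demo-secret", algorithm="HS256")

# The receiving side verifies the signature before trusting any claim.
verified = jwt.decode(token, "demo-secret", algorithms=["HS256"])
print(verified["app"], verified["cloud:security-group"])
```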

 

So rather than IPs, we’re passing along this identity token to validate who we are. We don’t have to rely on IPs anymore. What we are able to do with that identity information is actually create policy based on any of those key value pairs. So now, when I want to create a rule that says two workloads, or a user and a workload, are allowed to talk, I can take any of those key value pairs that are present and create a policy around them. So, again, not based on IPs, but based on these cryptographically signed forms of identity. Any one, or all, of those forms of identity can be used to create a policy to control access to different resources.

 

What this ends up meaning is that we are essentially becoming the Okta of workloads. And what we’re able to do is create very complex, or very simple, policies; it depends on how you want to address your policy model. Here I’m showing a very simple example using a couple of labels. But, again, this can be more complex. You can use an OR function as well when creating our policies. That allows us to do authentication and authorization for all communication between workloads as well as users, independent of the infrastructure they’re actually running on.
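
And to show what "any one or all of those forms of identity" means in practice, here is a toy policy evaluator. The rule format and labels are invented for illustration; Aporeto's real policy engine is more expressive, but the idea of OR-ing rules whose label clauses must all match is the same.

```python
# A toy evaluator for label-based policy. Each rule is a (source clause,
# destination clause) pair; clauses are AND'ed key=value matches, and rules
# are OR'ed together. A sketch of the concept only, not Aporeto's engine.
from typing import Dict, List, Tuple

Claims = Dict[str, str]
Clause = Dict[str, str]
Rule = Tuple[Clause, Clause]

def clause_matches(clause: Clause, claims: Claims) -> bool:
    # Every key=value in the clause must be present in the workload's claims.
    return all(claims.get(k) == v for k, v in clause.items())

def allowed(rules: List[Rule], src: Claims, dst: Claims) -> bool:
    # Communication is allowed if any rule matches both endpoints (OR).
    return any(clause_matches(s, src) and clause_matches(d, dst)
               for s, d in rules)

rules = [
    ({"app": "app1"}, {"app": "app1"}),   # "app1 can talk to app1"
    ({"project": "company-store"}, {"project": "company-store"}),
]

web = {"app": "app1", "cloud:security-group": "web-tier"}
db = {"app": "app1", "cloud:security-group": "db-tier"}
rogue = {"app": "unknown"}

print(allowed(rules, web, db))     # True
print(allowed(rules, web, rogue))  # False
```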

 

So everything gets a cryptographic identity. We’re able to distribute the policy that we created previously and push it down to the workloads. There’s an actual agent sitting on those workloads that’s able to ingest this policy information and interpret, when we see that JSON web token, what the identity of the other user or workload is. And based on the policy that we’ve designed around these labels, we’re then able to determine whether or not the communication is actually authorized to occur.

 

What this allows us to do is create these virtual zones of trust. A virtual zone of trust can be two workloads sitting right next to each other, or they can be across the planet. And it’s not just workloads; a virtual zone of trust can include users as well. So this concept of using identity allows us to find the common values between workloads and users, create policy around them, and create a virtual zone that determines whether or not communication between workloads and users is allowed to go forward.

 

So in this slide, what you’re seeing is just an object, and it has different values. As you can see, these are different forms of identity, and we’re showing how you can create these virtual zones based on these different identity attributes, independent of IP, independent of any infrastructure. So it doesn’t matter whether you’re on prem or in the cloud; it’s just going to be based on your identity information. I’m going to get to, in a little bit, how we’re able to pass this identity information along seamlessly without having to do any extra work.

 

So what these virtual zones actually allow you to do is address several problems in cloud network security. Take east-west traffic segmentation across heterogeneous platforms, whether they’re VMs, containers, serverless functions, or Kubernetes. How are we doing that segmentation today? Typically, we’re using multiple tools to do it. What we’re offering with this identity-based model is a homogeneous way to secure all this traffic, regardless of where it’s running or what it’s running on, without having to use multiple tools to secure that traffic effectively. This identity-based model lets you do things that would take forever with IPs and could not necessarily be done reliably.

 

The other parts are dealing with remote access and privileged access management without having to rewrite applications. So if you have web applications that you want your users to access and you don’t want to rewrite your apps, we can offload authentication for you without your developers having to do anything. And if you have to deal with SSH keys, we can get rid of those and help you with compliance by logging the activity of your users, which SSH keys alone can’t do. What I’m showing you right now in this slide is how we’re actually passing this identity information along. The enforcer you see there is our agent sitting on the host. What it’s doing is intercepting the SYN and SYN-ACK packets and inserting that cryptographically signed JSON web token, passing the identity information along.

 

Now, based on the policy that you’ve configured, the enforcer will determine whether or not that communication is actually authorized. And if it is, it’s going to allow the communication to continue. What we do then is write into the same mechanism that makes iptables stateful, which is the connection tracking table. So every host essentially becomes a stateful firewall as well. Again, this works across any cloud, any host. It doesn’t matter where they’re located. The identity information is just passed along and, based on policy, we’re able to enforce whether or not the communication is authorized. If it’s not, the packet is silently dropped, the communication won’t succeed, and you’ve protected yourself because unauthorized access has been prevented.
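
Conceptually, the enforcement flow looks like the sketch below: the identity and policy check happen once, on the handshake, and the resulting decision is tracked per connection so later packets are not re-inspected. The real enforcer does this in the kernel datapath with iptables and conntrack; this Python version only illustrates the control flow, with a simple stand-in policy check.

```python
# Conceptual sketch only: the identity exchange and policy check happen once,
# on the TCP handshake, and the decision is then tracked per connection so
# later packets are not re-inspected. The real enforcer does this in the
# kernel datapath (iptables/conntrack); this Python just mirrors the flow.
approved_connections = set()   # stands in for the kernel connection-tracking table

def on_handshake(conn_id, peer_claims, local_claims) -> bool:
    """conn_id is a (src_ip, src_port, dst_ip, dst_port, proto) tuple."""
    # Stand-in policy check: both ends must carry the same "app" label.
    if peer_claims.get("app") and peer_claims.get("app") == local_claims.get("app"):
        approved_connections.add(conn_id)
        return True      # SYN-ACK proceeds, connection is now tracked
    return False         # packet is silently dropped

def on_packet(conn_id) -> bool:
    # After the handshake, forwarding only consults the tracked state.
    return conn_id in approved_connections

conn = ("10.0.0.5", 43210, "10.0.1.8", 443, "tcp")
print(on_handshake(conn, {"app": "app1"}, {"app": "app1"}))  # True
print(on_packet(conn))                                       # True
```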

 

So what we’re enabling you to do with our solution is visualize this communication. You can map the components that actually need to communicate, so you can build your policy. You don’t have to enforce it right away. We enable you to see it, figure out what the flows actually are without blocking anything, so you can then simulate some human-readable zero trust policies based on that identity information we’ve been referring to. Once you’ve built that policy, you can go into enforcement. At that point, we’re essentially a whitelist model: unless there’s a policy to allow a communication to occur, we’re going to block it.

 

So what use cases do you have for this identity-powered security solution? Again, you can do networkless microsegmentation, meaning completely independent of any IP information. You’re able to do full Kubernetes security as well. When I talk about Kubernetes security, it’s not just about scanning container images. It’s about being able to protect the APIs, the actual cluster nodes, and the pods running on them, all with one single solution. For privileged access management, what we’re allowing you to do is get just-in-time server access. What I mean by that is an on-demand SSH certificate that has a customized expiration period. That means you don’t have to deal with SSH keys, and at the same time we enable you to identify users based on identity, not just the system account. So two different people can log in as, say, the ubuntu user, and we’re able to differentiate between their identities and log their individual activities as well.
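
As a rough illustration of what a just-in-time SSH certificate is, the sketch below signs a short-lived certificate with plain OpenSSH tooling. Aporeto issues these on demand after authenticating you against your identity provider; here the CA key, user key, identity string, and principal are all hypothetical local values.

```python
# Rough illustration of a short-lived ("just in time") SSH certificate using
# plain OpenSSH tooling. The CA key, user public key, identity, and principal
# below are hypothetical placeholders.
import subprocess

def issue_short_lived_cert(ca_key: str, user_pubkey: str,
                           identity: str, principal: str, ttl: str = "+10m") -> str:
    # ssh-keygen -s signs user_pubkey with the CA key, embedding the key ID
    # (-I), the login principal (-n), and a validity window (-V).
    subprocess.run(
        ["ssh-keygen", "-s", ca_key, "-I", identity,
         "-n", principal, "-V", ttl, user_pubkey],
        check=True,
    )
    cert_path = user_pubkey.replace(".pub", "-cert.pub")
    # Inspect the certificate: key ID, principals, and validity are visible.
    subprocess.run(["ssh-keygen", "-L", "-f", cert_path], check=True)
    return cert_path

# Example (hypothetical paths):
# issue_short_lived_cert("ca_key", "id_ed25519.pub",
#                        identity="alice@example.com", principal="ubuntu")
```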

 

And the third use case we have is around service authentication offloading. When I talk about service authentication offloading, it’s for VPN-less access to your private web apps. You don’t have to deal with VPNs. Our software agent can sit in front of your web application itself and offload the authentication to, say, an identity provider like Okta. It will get the authentication information and the claims back from the provider, and then we’ll be able to grant access to your application. Again, this is without having to rewrite your apps. And the other part is API authorization. So not just layer three and layer four, but also determining whether or not different workloads or users should be able to access your APIs.
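
A minimal sketch of the authentication-offload idea follows: the proxy in front of the web app validates an OIDC ID token from your identity provider and only then lets the request through. The issuer, audience, group claim, and the assumption that you already have the provider's public key are all placeholders; a real proxy would fetch signing keys from the provider's JWKS endpoint and handle the login redirect itself.

```python
# Sketch of authentication offload: validate an OIDC ID token from the
# identity provider before forwarding a request to the private web app.
# ISSUER, AUDIENCE, and the "groups" claim are hypothetical; key retrieval
# from the provider's JWKS endpoint is omitted here.
import jwt  # pip install PyJWT

ISSUER = "https://your-okta-domain/oauth2/default"   # hypothetical
AUDIENCE = "hipster-shop"                             # hypothetical client id

def authorize_request(id_token: str, idp_public_key_pem: str) -> bool:
    try:
        claims = jwt.decode(
            id_token, idp_public_key_pem, algorithms=["RS256"],
            audience=AUDIENCE, issuer=ISSUER,
        )
    except jwt.InvalidTokenError:
        return False
    # Authorization can then key off any claim, e.g. group membership.
    return "store-users" in claims.get("groups", [])
```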

 

I know there are a lot of slides here, but we’re going to get into how this actually works, and we’re almost done with the slide deck. The other thing I just want to point out is that this is a zero trust, end-to-end visibility solution with encryption. One of the things we also allow you to do is, with the click of a button, enable mutual TLS authentication between the workloads as well.

 

So where is some of this being used today, and why? With cloud security, you have this challenge of constantly changing IPs, multiple tools, and having to figure out how to use a firewall model to secure all your communication. It’s not really meant for that. You need different methods as application development changes, moves into the microservices realm, and moves from on-prem to private cloud, hybrid cloud, or multi-cloud environments. It becomes very difficult to figure out how you’re going to secure your application, and to centralize the visualization of what’s talking to what, without being slowed down by trying to work all of that out.

 

You want to grow fast? We can actually enable you to grow fast. The need to go fast is a challenge that many organizations are facing, and they are looking for solutions to help them keep up with the pace of the application developers. So, with Aporeto, we’re giving you a distributed firewall: an identity-based microsegmentation solution for east-west traffic based on workload identity. Again, very simple, human-readable policy to protect your applications at layer three and layer four, as well as layer seven, getting into the API layer. And you can see a few of our customers that actually use this. There’s a three letter agency, I’m not even allowed to know who they are, and Comcast Ventures as well; they are using our solution to handle east-west and cloud network security.

 

Another use case we’ve had is remote access. You’ve got SSH keys, you don’t know who has them, you have to manage them, and you have to figure out how to control user access, even once they’re in. What we’re doing there is allowing you to log the activity of those users and deal with key management as well. And that uncontrollable lateral movement: how do we handle that? With our solution, you can control where the user goes after they’ve SSHed in, and where they go on the next hop. You can carry that identity along and create a policy based on their identity information, independent of the IP address they’re coming from, to determine whether or not they’re able to access a resource. I’m actually going to show you some of this. And again, we do compliance logging for this as well.

 

So this just gives you a better security posture. A few of our customers, Informatica for example, are using this for just-in-time access, meaning you get that on-demand certificate on the fly. It allows you to gain access and log all the user activity. And what we’re doing with the cert is not just issuing a cert; we’re actually integrating with your identity provider and inserting identity information into the certificate as well. I’m going to show you some of this in a second. So we can identify who that individual user is. That’s how, even with a shared account, we’re able to determine what activity each individual user has actually done.

 

The last use case is around challenges with web app remote access. Once someone is in with their VPN, there’s not a lot of control over where they go. Instead of using a VPN for access to your web apps, we can use your identity provider and act as the authentication proxy for your web applications. So without a VPN, we can act as an SSL proxy and secure that communication; if it was just plain HTTP, we can actually make it HTTPS. And then, again, we offload the authentication to the identity provider and act as that identity-aware proxy for your web apps, without you having to rewrite your web applications.

 

And we have customers that are actually using this. We’ve got BART and we’ve got the National Association of Insurance Commissioners as well. This, again, just gives you a stronger and more granular level of security around who can access what within your environment.

 

What all this enables us to do is provide stronger security, simpler operations, and better ROI on your investment. And here we have a slide showing all our value propositions. Some of the customers that have used them have found the solution to be extremely helpful in their day-to-day operations. So what I’m going to do now is jump into a demo and show you a little bit of what we’re actually doing. I think we’ve had enough slides for now. We’re going to swap over and explain what we’re seeing.

 

I’m going to start off pretty simple, but it’s going to get a little more involved as we go along. I want to level set you on what you’re seeing here. What I’m showing you is a traditional three tier app. We’ve got our web tier here at the top, a load balancer that the web tier is talking into, and the load balancer then distributing our connections over to a processing tier and then to a database.

 

Again, this is a traditional app. But what we’re able to do is secure that communication very simply, using a very simple policy that says this label can talk to this label. And if we go into the host, again this is on premise, you can actually see that metadata here. This is actually what’s being passed along as identity. You can see that this workload has this app one label, and the workload it’s communicating with has the same label. That allows us to secure the communication for this entire app using that very, very simple policy, which basically says app one can talk to app one. The second thing we’re able to do is, with the click of a button, enable encryption for this communication as well.

 

I’m going to migrate this web tier up into the cloud, and later I’m going to transform it using Kubernetes for the web workloads as well. And I’m not going to have to change any of the policy. What you’ll see is the web tier will disappear on prem and migrate over to AWS. As I do that, I carry my label and my identity information along, and I don’t have to change anything in my policy as I do this migration.

 

What you will see is that the web tier has actually moved over to AWS, and you’ll see some different lines between the web tier and that load balancer that’s still on prem. We can see the policy is exactly the same if we look at that green line. The green line is an indication that we have a valid and authorized communication. And the red line indicates that that traffic is being rejected. Now you might say, wait, I thought you said the policy carries over. I’ve added a special tag to the workload with the red line that gives it a policy violation. The policy violation then enacts a different policy that says you’re not allowed to communicate with app one anymore.

 

Now, that policy violation could come from, for example, a bad CVE score, some vulnerabilities that were found. We don’t want that workload being accessible within our app. What that allows us to do, with a simple label, is trigger a policy that says: you’re no longer part of this application. Now, if I want to modernize this application, I can do that as well. I can move this web tier into a Kubernetes cluster. What you’re going to see in a second is me spinning up some web workloads in a Kubernetes cluster in AWS. Again, we’re spanning AWS and an on-prem site. And what I’ll be showing you in a second is this web tier, which was VMs, now expanded out into a Kubernetes cluster, with the web tier communicating with the on-prem site.

 

So I’ve begun to move along that journey and transform it to a more cloud native architecture using containers. As I’m doing this, you can see, again, the green lines indicating that my flows are being accepted. There is a little lock icon between them indicating that the communication between these workloads is also encrypted. So, again, it’s mutual TLS authentication. We’re providing the certificate, but we can integrate with your own CA as well. Either way, the traffic is encrypted.

 

Now I’m going to spin up a rogue workload. This is a rogue container in that same EKS cluster, and it’s going to try to reach out to different components, some of the other containers in the EKS cluster. As it does that, you’ll see that its communications are actually being blocked. One thing I want to point out is that the IP addresses of the containers, as traffic leaves the cluster, are the same. When I look at that first container, I can see it’s got a 40.18 IP address. And the second container has that same 40.18 IP address. But based on the identity information, we’re able to differentiate between the different workloads. We’re not dependent on the IP address anymore; the identity information is what’s important. Even with this container sitting in a Kubernetes cluster on the same node as some of these other containers, we’re able to protect our applications based on identity information.

 

I also mentioned that we’re able to do API access control, and I’m going to show how we do that as well: how we’re able to control egress as well as ingress API access. And I’m going to do it in AWS. When my screen shows up, I’ll show you how we control access to an S3 bucket as well as to RDS. And then I have an instance that exposes a bunch of APIs, and I’m going to show you how I’m able to control that API access based on the identity that’s coming in.

 

If you’re seeing the screen right now, on the right-hand side I’ve got a bunch of Docker containers on a single host. So the IPs are all the same when traffic leaves this host, and it would be difficult to use security groups alone to control this access. But based on identity information, I’m able to determine which APIs each one of the Docker containers is able to access. With no access, it’s not going to be able to get to any of the APIs. With the public identity, it’s only going to be able to reach the public APIs. And with the private identity, it’s going to be able to reach both the public and the private APIs. I’m doing the same with the RDS instance as well. So this is an example of how we’re able to do egress API access control in AWS.

 

One of the other things I want to show you is on another instance, which has some APIs being exposed. What I’m showing there is ingress control. I’ve got APIs and I’ve got different entities coming into those APIs. My agent, which is sitting on that host, is able to determine who’s coming in and whether or not they can access my public and my private APIs. So I’ve got an admin test, a public test, and an internal test. For example, the agent with public access, and again this is based on where they’re coming from, is not going to be able to get to my admin or my internal APIs. That’s reserved for my agent with private access, which is going to be able to get to my admin and my public test, but not to my internal one. That’s reserved. And these other entities can’t access any of those APIs at all.
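
To make the ingress example concrete, here is a sketch of identity-aware API authorization: the same kind of verified claims decide which URL paths a caller may reach. The tier names and path prefixes are made up to mirror the public, admin, and internal tests in the demo.

```python
# Sketch of identity-aware (layer 7) API authorization: verified identity
# claims, not source IPs, decide which URL paths a caller may reach.
# Tier names and path prefixes are hypothetical.
API_RULES = {
    "public":  ["/api/public/"],
    "private": ["/api/public/", "/api/admin/"],   # still no /api/internal/
}

def api_allowed(claims: dict, path: str) -> bool:
    tier = claims.get("access-tier", "none")       # unknown tiers match nothing
    return any(path.startswith(prefix) for prefix in API_RULES.get(tier, []))

print(api_allowed({"access-tier": "public"}, "/api/admin/users"))     # False
print(api_allowed({"access-tier": "private"}, "/api/admin/users"))    # True
print(api_allowed({"access-tier": "private"}, "/api/internal/jobs"))  # False
```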

 

So again, this shows that we’re not just there to do segmentation at layer three and layer four, but at the API level as well. As you’re moving into a microservices architecture, you need to be concerned about who’s actually accessing your APIs, and we’re able to give you that level of flexibility and control.

 

One thing I want to point out is that a three tier architecture like that is fairly legacy. As you move into microservices, the architecture gets a lot more complicated. One of the things I want to show is the Hipster Shop app from Google, which illustrates some of this complexity. It’s not just tiers; you have multiple microservices talking to different microservices in different ways, and you need a solution that’s going to be able to handle all of that.

 

That’s what I’m showing you with that slide: what a microservices architecture looks like. Again, this is a free demo app from Google, their Hipster Shop store, and I’ve deployed it in a Kubernetes cluster to show you what this app can do. I can move through this app and exercise its functionality, and show you what that looks like. As I’m clicking through the application, I’m generating flows and connections, visualizing that communication, and then using it to help build policy around what I’m actually doing. And I’m going to show you what some of that policy looks like.

 

Imagine you had to write all of that policy through the UI. It might be a little daunting. You want to be able to go fast; you don’t want to get slowed down by having to use the UI for everything. So we have the ability to export the policy and then use YAML files, for example, to upload it. Or we can take our policy and talk directly to Kubernetes, using Kubernetes network policies: taking a policy that you’ve visualized and designed, and pushing it down to Kubernetes using some custom resource definitions.

 

What you’re able to see over there, if you click on one of the lines, is what that policy is. And it’s a little more involved there. I’m showing the checkout service, it’s a development environment, and the project is the company store. I’m defining all the different components that the checkout service can communicate with. But I want to be able to go fast. This is part of my CI/CD pipeline, I’m making changes, I’m adding components to my microservices, and I don’t want to have to go back into the UI to make those changes. So what we enable you to do is create these policies, design them, for example, in dev, and then generate a YAML file.

 

So what I have here is a script. I’m going to use one of our own tools, called apoctl. It lets you talk to our console, pull down these policies, and convert them into Kubernetes network policy definitions using the custom resource definitions in the Kubernetes cluster itself. Once I’ve pulled this down, I can go and apply the policy and work directly with the Kubernetes cluster’s network policies. I don’t have to rely on the UI. If I need to make a modification, I can just go into the YAML file; it can be part of my pipeline. I don’t have to go into the UI and slow myself down figuring out how I’m going to write this policy. I’ve got the definitions here, they’re easy to understand, and if I want to do advanced things, for example we support encryption, I can do that with these policies as well.
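
Aporeto's own custom resource definitions have their own schema, so as a stand-in for the same workflow, the sketch below applies a label-based allow rule expressed as a vanilla Kubernetes NetworkPolicy straight from a script, the way a CI/CD step would, with no UI involved. The namespace and labels are hypothetical.

```python
# Stand-in for the pipeline workflow described above: take a label-based
# allow rule (here a plain Kubernetes NetworkPolicy rather than an Aporeto
# CRD) and apply it with kubectl from a script. Namespace/labels are made up.
import subprocess

POLICY_YAML = """
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: company-store-allow
  namespace: company-store
spec:
  podSelector:
    matchLabels:
      project: company-store
  ingress:
  - from:
    - podSelector:
        matchLabels:
          project: company-store
"""

# Pipe the manifest into kubectl, exactly as a CI/CD step would.
subprocess.run(["kubectl", "apply", "-f", "-"],
               input=POLICY_YAML.encode(), check=True)
```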

 

Now, among these custom resource definitions there are also ones for the service authentication offloading. They’re pretty extensive. All the functionality we have in our UI, we’ve injected into Kubernetes. There is actually a controller that sits in the Kubernetes cluster; we call it the Aporeto Operator. It watches for events and will pick up network policies that you may have already created in Kubernetes, and we’re able to inject those policies into our solution. So you can see what’s been created, and you now also have this visual way of understanding what’s talking to what. And if something is getting blocked, you’ll be able to see that as well.

 

Now, that app is interesting in itself, but I’m going to show you something more interesting in terms of the service authentication offloading: I’m going to show you how we can apply that to the Hipster Shop. I’ve got another instance of it running, and I’m going to front-end it with our service authentication proxy. Before, it was just plain HTTP, but now I’ve configured our Hipster Shop with our service authentication proxy. So you have to get authenticated by Okta, and once you are, the information that comes back will determine whether or not you’re actually authorized to access the Hipster Shop app. And once I’m in, I can go and do my regular shopping and place my orders as well.

 

The communication for this application is not just in one cluster; I’ve actually broken the Hipster Shop up across multiple clusters. It’s running across GKE as well as an on-prem vanilla Kubernetes cluster that I brought up myself. So across this entire ecosystem of cloud and on prem, I’ve split this app, and again, I’m able to secure it in a very simple way using identity. I created a policy that just says anything with the label project equals company-store is allowed to communicate, and I’ve enabled encryption for this communication as well. So as my pods run across two different Kubernetes clusters, I’m able to secure the communication between them very easily, with a very simple policy that ensures only the authorized pods are able to communicate with each other.

 

Now, the thing I want to point out with the Kubernetes cluster is that I’m securing not just the pods but the actual infrastructure as well, meaning the actual nodes. What you’ll see on the left side are specific Kubernetes services on the node that I’ve exposed, the Kubernetes API server for example. I’ve exposed those specific services on that node and I’m protecting them. Everything else is basically denied, or it’s accepted based on policy.

 

What I’m going to show you next is how we’re able to control access to those endpoints based on user identity, using SSH. I’m going to grab an SSH certificate on demand and add it to my SSH agent, and this is what’s inside that certificate. What you’re going to see in a second is me generating an SSH certificate and then inspecting its contents. Once that pops up, you’ll see that there are actually identity claims in it. I’m using Google OpenID to do my authentication and determine who I am. That identity information can then be used to distinguish me from a different user.

 

At the bottom of the screen I’m using a different identity, my other Google account essentially. So even though I’m coming from the same box, I’m actually two different identities. I’m going to SSH using a single account, a common shared account, but when I do this, two different users pop up on the screen, identifying the individual user context. And if I look inside the user context, I can see all of that in the processing unit.

 

So my email address and my name and everything are now forms of my identity, and I can create policy around them. That’s what I’ve done. For example, if I try to access the Kubernetes nodes, just to list the nodes in the Kubernetes cluster, I can see that the requests are coming from different users, and see whether those connections are allowed or denied. You’ll see two different lines come up. The first line you’ll see is a red line. Again, this is coming from the same host; the only thing different is the user context. And I’m doing this using my identity, which is my actual email address. So the instance on the left is getting denied and has the red line, and the instance on the right has a green line, indicating that I was actually able to access the Kubernetes API server.

 

Now, this is just a sample of how we’re able to do this. It’s not specific to Kubernetes; this can apply to anything our agent is running on. But it’s a sample to show you how we’re able to control user access based on identity as well.

 

With that I’m going to actually end my demo. We’re going to go into Q&A. Thelen?

 

Thelen: Great. Thanks Anand. All right, it looks like we’ve received a few questions from the audience. The first question is: Does the Aporeto platform work in Azure?

 

Anand: Yes. We are completely independent of any infrastructure. It can be Azure, AWS, Google. It could be Bob’s Big Boy cloud service. It doesn’t matter to us; we’re not beholden to any one specific infrastructure. Now, with some of the bigger cloud providers like Azure, AWS, and Google, we’re able to extract a lot more metadata from the cloud provider. So pieces of identity information like security groups become part of your identity that you can then create policy around, your security group membership for instance. So yes, we’ll work across anything, anywhere, and it makes no difference to us.

 

Thelen: Great. Next question. What is the performance impact of the Aporeto enforcer?

 

Anand: It’s very minimal, a few milliseconds. That’s because we’re only involved in that initial part of the three-way handshake, moving the identity information along, and after that we’re out of the picture. Unless we’re acting as an SSL proxy, in which case yes, we have to see every packet and encrypt it, but even then the latency is very minimal. For normal communication, it’s only a few milliseconds, barely even measurable in terms of performance impact. We’re not looking at every single packet. Once we determine the communication is authorized, we hand it off to the kernel’s connection tracking mechanism, which is what makes iptables stateful. We’re just using that same mechanism to ensure each of these hosts is essentially a stateful firewall.

 

Thelen: Great. All right, and our last question. Other solutions use labels for their policies as well. What makes Aporeto’s solution unique?

 

Anand: Just about every other solution you’ll find that tries to do something similar, first of all, isn’t going into the API layer. But the other thing they’re doing is converting those labels into IPs. The advantage for us is that we don’t have to keep state about which IPs are available and up, or do that computation around IP addresses, because we’re completely independent of all that. So when you have to, for example, scale up from one to a thousand nodes, there is no additional policy re-computation that has to occur. I’ve got my policy that’s based on identity; I spin up another thousand instances that have the same form of identity, and there’s no extra work that has to be done.

 

That makes this solution more scalable than anything else out there today. We’ve actually done some scale tests for customers to prove this out: dropping an entire data center of 50,000 nodes, and we were able to come back up and have our policy converge in, I think it was, a couple of minutes. It took the firewall 15 minutes to come back up. So what makes us different, again, is this idea that we don’t have to deal with policy-to-IP computation as different things come up. An instance comes up, it gets its policy; it doesn’t have to wait to figure out what other IPs are available and whether or not they should be allowed to access that instance. Are there any other questions?

 

Thelen: That was the final question. Thank you Anand.

 

Anand: I just want to say we’re giving you zero trust security and end-to-end visibility, with encryption, completely independent of IPs or infrastructure. That allows you to deploy your application how you want, where you want: the freedom to do that, and to go fast the way you’ve always wanted to and haven’t been able to.

 

Thelen: Great. All right everyone, this concludes today’s webinar. Thank you for attending. If you have any further questions on the information presented today, please don’t hesitate to reach out to Aporeto via our website. I have a few final notes. If you enjoyed the webinar and would like to experience the Aporeto platform for yourself, we’re offering a personal demo walkthrough to address your organization’s unique cloud security concerns. Please visit our website at www.aporeto.com and click on the request demo link at the top of our home page. Just register and a representative from Aporeto will contact you directly. If you enjoyed the webinar, we are offering another webinar next month titled “Identity with a game changing model for cloud security, so what is it?” This will be on July 18 at 10:00 a.m. Pacific time, 1:00 p.m. Eastern time. You can sign up for it now on BrightTALK, or go to our website and sign up. If there’s a webinar topic you would like us to discuss, please feel free to contact us through our website and let us know. Additionally, if you’d like to learn more about the identity-powered cloud security solutions Aporeto provides, please visit the resources page on our website. And don’t forget, you have an opportunity now to download the attachments that are available through this webinar. With that, we thank you for your time today and hope you found the content informative. This now concludes the Aporeto webinar. Enjoy the rest of your day.
