...from Rutwic, about demystifying StackRox. Welcome, Rutwic.

Thank you. Good evening, everyone. Thanks for showing up for the late evening talk; I appreciate your time. My name is Rutwic Shiv Sagar. I work at Red Hat as a senior technical support engineer, mainly on solving OpenShift as well as StackRox issues with customers and with engineering teams.

With the recent security threats and attacks, we have seen that container and Kubernetes adoption has increased just as fast, and with that, security has become one of the biggest concerns. So we'll see how StackRox is paving the way for Kubernetes-native security and helping us resolve security issues with ease and automation.

This is the brief agenda for today's talk. In the first few slides we'll discuss the current state of Kubernetes security, what the best practices are, and how a DevSecOps approach benefits your security posture by shifting security left for your developers as well as your security admins. Then we will see how the StackRox ecosystem is helping end users, developers, and security teams overcome security issues with ease. And then we will have a demo at the end.

So first of all, let's understand what zero-trust security is and why we require it. Zero-trust security is basically a framework which requires all users to be continuously authenticated and authorized before they are granted access to your applications and data. If you manage to achieve a zero-trust security model, then I would say we can resolve or minimize the impact at a very early stage of your application lifecycle.

Then, how exactly does zero trust fit into the software supply chain? And what exactly is the software supply chain? It includes everyone and everything that touches your application code across the entire software development lifecycle. It could be your deployment, it could be your final artifact, it could be your CI/CD pipeline. So it is essential that we build the application in such a way that assurance at every stage is taken seriously; that way we can establish trust across the software supply chain. So we can see that securing dependencies, securing code, and securing containers as well as the infrastructure are all part of the software supply chain.

Let me ask you this question: if you are using Kubernetes, or any application in general, have you ever delayed or slowed down an application deployment into production due to container security concerns? Any one of you? All right, I assume so, because that's how we usually go through the application lifecycle.
We deploy the application, then we analyze the application's behavior, and then we detect the vulnerabilities.

In recent trends we have seen some common factors, or common anti-patterns, which cause delays in getting an application deployed to production. Misconfiguration tops the list, followed by vulnerabilities to remediate. For example, we are often able to detect vulnerabilities, but somehow we tend to overlook them, or we cannot assess them accurately, and that leads to the vulnerability staying around: we get to know that a vulnerability exists, but there are no proper ways or tooling to fix that kind of vulnerability. Then we ultimately have security issues at runtime, which can be costly or can affect your entire production environment.

So how can we make sure these kinds of issues are reduced? In today's world we need a DevSecOps approach; just DevOps isn't enough. We need DevSecOps to shift security left from our traditional security practices. DevSecOps helps us define a microservices architecture. It gives us declarative definitions to harden your security parameters, network policies, and deployment configs; I'll show a small sketch of that in a moment. It also makes sure the infrastructure stays immutable, so that at runtime nobody else is allowed to touch the software or your application deployment. At the same time, it is important to know that Kubernetes-native security is increasingly critical, and securing the supply chain is equally essential.

So what are the basic Kubernetes security challenges? We know that containers are numerous and everywhere. If I have to use an analogy: just as we say that everything is a file in Linux, in a similar way everything runs in a container when we talk about Kubernetes. So containers can pose compliance challenges. Every container image is tied to one or another container registry, and sometimes we even forget to add TLS with basic authentication to our image registry, which may expose security threats over the internet if we expose that registry at all. We are also aware that containers by default talk to each other without any network policies, so it is important that we define network policies at an early stage. And this one I think most of you can relate to: when someone shows you Kubernetes, all of the configuration looks pretty easy, but the defaults are usually the less secure ones. So we as admins or developers have to proactively understand what configuration and what risk tolerance our organization or developer environment requires.

In Kubernetes, the application lifecycle mainly spans three phases: the build phase, the deployment phase, and the runtime phase.
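Here is that sketch: a minimal example of the kind of declarative hardening a DevSecOps workflow can bake into a deployment config. All the names and the image here are hypothetical placeholders, not something from the talk's demo:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app                   # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            image: registry.example.com/demo-app:1.0   # placeholder image
            securityContext:
              runAsNonRoot: true                 # refuse to start as root
              allowPrivilegeEscalation: false    # no setuid-style escalation
              readOnlyRootFilesystem: true       # container filesystem stays immutable
              capabilities:
                drop: ["ALL"]                    # drop every Linux capability

Because this lives in version control as a declarative definition, the hardening is reviewed and enforced on every deployment rather than applied by hand.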
So how can we make sure that we secure each and every stage of the application? When we talk about the build phase, it is important that we isolate a vulnerability or security issue at the earliest point, because otherwise it becomes very costly and risky to detect vulnerabilities at runtime. So what can we do? We can use minimal base images, so that we avoid unnecessary package managers or executable programs in our container images. Then we can always use an image scanner to identify known vulnerabilities. Identifying vulnerabilities just once is not enough, though: you need to make sure that whatever security scanner you integrate continuously validates your container images and sends real-time alerts to your development team as well as your security admins. And yes, at the build phase we need to integrate a CI/CD pipeline. That way most of this becomes automated, and you don't have to look through each and every build config to understand where a security issue lies; if a pipeline stage fails, the build is stopped right there and your production won't be affected.

Then the deployment phase. As I mentioned, a default deployment config doesn't come with a network policy. We need to understand which services the deployment is trying to communicate with and which ports are defined in the deployment config, and accordingly we can define our own network policies. We also need to make sure the deployment doesn't allow root-level privileges or unknown user IDs to access your application; you should always be aware of which users are going to access your application. And then we can extend image scanning to the deployment phase: it is important that we do not restrict image scanning to the build phase, but keep doing it at the deployment phase as well.

Then the runtime phase. As I mentioned, we need to extend our scanning to runtime too, so we can easily and quickly understand what issues have appeared and what actions we need to take. It also helps to monitor network traffic to limit unnecessary or insecure communications. And if you find suspicious activity, and you have multiple replicas of your application, you can compare processes across those replicas over time to understand what anomalous activity is happening.
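To make the minimal-base-image point from the build phase concrete, here is one common pattern: build with a full toolchain, ship on a minimal base. This is an illustrative sketch, not from the talk; the Go toolchain and the distroless base are just one possible choice:

    # Build stage: full toolchain, never shipped
    FROM golang:1.21 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    # Final stage: minimal base with no shell and no package manager
    FROM gcr.io/distroless/static
    COPY --from=build /app /app
    USER 1001                      # run as a non-root UID
    ENTRYPOINT ["/app"]

The final image carries only the binary, so there are far fewer packages for a scanner to flag and far fewer tools for an attacker to abuse.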
To overcome all of these challenges, StackRox is helping end users and the community. So why is StackRox open source? Red Hat believes in an open model when it comes to software and developing applications, and we believe that open source software can significantly help developers drive a project with innovation as well as foster collaboration within the community. So StackRox is working towards providing an open source solution which allows end users to decide how they want to protect their Kubernetes clusters.

So let's understand what StackRox has to offer. It enables users to address all the significant security use cases across the entire application lifecycle that we discussed, right from build to deployment to runtime. It also gives you greater visibility through vulnerability management, configuration management, network segmentation, compliance, threat detection, incident response, and risk profiling and tolerance.

StackRox has a policy engine that allows users to run policies out of the box. For example, if I have a policy for severity with a CVSS score greater than or equal to seven, then I can get an alert for any matching CVE and understand which deployments are associated with it. The StackRox API also allows users to integrate image scanning tools, CI/CD tools, container runtimes of their own choice, secret management, and DevOps notification systems, to ease the security flow end to end. And you can run it on any cloud or hybrid cloud, or deploy it on-prem if that is what you choose.

This is the bird's-eye-view architecture. In the blue box you see Central, the central hub, which is exposed over a load balancer for clients to consume the StackRox API; the API is exposed as a REST API. Then we have Sensor, the admission controller, and Collector, which are logically grouped and called a secured cluster: once you configure this set of components on a Kubernetes cluster, you can call it a secured cluster, and you can keep adding as many Kubernetes clusters as you want to Central. Central also has Scanner, which works from the vulnerability feeds that Central fetches; Central basically collects vulnerability feeds from upstream sources as well as the NVD database. Then on each and every node we have the Collector agent, which collects host-level data about containers, the network, and the runtime.

This is the UI. Let's say I have integrated a hundred Kubernetes clusters: how can I manage them, and understand how they are behaving, which components are healthy and which are unhealthy? Here we can have a quick look at how the systems are performing.
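On a live cluster these components show up as ordinary Kubernetes workloads, so you can inspect them with plain kubectl. A small illustration, assuming the default stackrox namespace:

    # Central, Scanner, Sensor and the admission controller run as Deployments:
    kubectl -n stackrox get deployments

    # Collector runs as a DaemonSet, so there is one pod on every node:
    kubectl -n stackrox get daemonsets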
So what problem segments is StackRox going to solve? These are the four problem segments which I found most common between developers and security teams: does my container contain content that compromises the infrastructure; are there any known vulnerabilities; are the runtime and OS layers of my containers up to date; and is my Kubernetes cluster compliant with industry-accepted security benchmarks?

So let's see how StackRox solves these problems. StackRox can identify vulnerabilities in the base image packages that are installed by package managers, in programming-language-specific dependencies, and in programming runtimes and frameworks. It supports the package formats I have mentioned there, and I believe most of you work with the same package formats, and there are supported operating systems like Alpine, Amazon Linux, CentOS, and Red Hat Enterprise Linux.

Managing compliance with security standards is equally important for our organizations. StackRox supports out-of-the-box compliance standards like the CIS benchmarks for Kubernetes and Docker, HIPAA, NIST, and PCI, and you can run scans against these profiles. Central, or StackRox specifically, collects snapshots of your Kubernetes cluster, then aggregates the data and analyzes which checks pass and which checks fail. That helps you evaluate regulatory compliance and harden Docker and the underlying container runtime.

This is the UI where you can see the passing percentage across your clusters, across namespaces, and across nodes, so you get a better idea of where the issue is and which compliance checks are failing, and you can navigate there accordingly. In the right section you will see which controls are failing and what needs to be set. For example, here I have taken the example of a control on configuration files which says that the file permissions should be more restrictive; you can take action accordingly and fix that control.

So what is Collector? Collector is what lets the whole StackRox ecosystem maintain and manage container runtime activity as well as host-level process information. It is an agent that runs on every node, under strict performance limitations, and gathers data via either a kernel module or eBPF probes. It collects, analyzes, and monitors container activity on the cluster nodes; it collects information about runtime and network activity and sends the collected data to Sensor, and Sensor then helps Central display all of that data in the UI.

Okay, we'll go through this quickly. This is the traditional way we used to look at the kernel when we deployed applications. We have user space, where user applications run, and for every resource a user application needs, we need system calls.
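As a quick aside, you can see for yourself how much syscall traffic even a trivial program generates; on any Linux host, strace will summarize it. An illustrative one-liner:

    # Run "ls" and print a count of every system call it made:
    strace -c ls

Even a tiny command typically makes hundreds of calls across dozens of distinct syscalls (openat, read, mmap, write, and so on), which is exactly the boundary a container-security agent wants visibility into.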
So [16:03.200 --> 16:08.400] then user request any data, the kernel copies that information from kernel space to user space. [16:08.400 --> 16:12.800] But due to some limitations, it is not possible for user to access [16:12.800 --> 16:20.960] everything that is into the kernel space, right? And this was not a problem when we talk about [16:20.960 --> 16:26.560] a single Linux source, but with container adoption, we know that the number of processes or container [16:26.560 --> 16:31.360] that may run on a Linux source have increased, the density of container have increased, right? [16:31.360 --> 16:36.880] So resource overhead, managing container issues, container runtime issues has become a great [16:36.880 --> 16:43.200] challenge. So all these required activities require kernel support that we know. So how, [16:43.200 --> 16:48.880] how do we overcome that? We can use EPPF rules. What is that? It is an extended Berkeley packet [16:48.880 --> 16:54.720] filter, right? It is not just a packet filter. It is more than that. It helps us in networking, [16:54.720 --> 17:01.840] tracing, profiling, observability and monitoring and security. I will quickly go ahead because [17:01.840 --> 17:09.040] of time constraint. Then we have network policies. In Kubernetes, we know that by default, [17:09.040 --> 17:14.000] network is, network policies are not there. We need to define our network policies by our own. [17:14.000 --> 17:19.360] But considering a production grade environment, it is really difficult to, you know, write [17:19.360 --> 17:25.120] each and every network policy ML because sometimes we do not understand what source, [17:26.160 --> 17:30.560] from what source the traffic is coming. At large scale, it could be a difficult, right? [17:30.560 --> 17:38.080] So it provides network graph, network segment, segmentation to understand or to modify baselines [17:38.080 --> 17:43.840] so that we can define, okay, if traffic is coming from this source, then this should be blocked [17:43.840 --> 17:47.840] or network policy should be created accordingly with this baseline. [17:51.600 --> 17:59.600] So this provide is, so yeah, we, Cyclox provide a network simulator, network policy simulator [17:59.600 --> 18:04.000] through which you can understand what are the active connection from where the connection is [18:04.000 --> 18:09.600] coming, whether it is allowed by the deployment or whether it is anonymous. Accordingly, you can [18:09.600 --> 18:14.640] define your baseline and restrict the traffic. It will help us to create the network policies at [18:14.640 --> 18:19.680] the runtime. So we can just copy that network policy and configure it in our Kubernetes lecture. [18:22.800 --> 18:27.920] Then we have admission controllers. So it basically helps control, to enforce the security [18:27.920 --> 18:34.640] policies before Kubernetes creates workload. For example, deployment, demo sets, it intercepts [18:34.640 --> 18:40.880] the API request when any program runs or application runs into the pod. So in Cyclox, [18:40.880 --> 18:45.920] we use admission controller with security policies so that any policy gets violated, [18:45.920 --> 18:50.480] then it will immediately prevent the deployment from getting into running straight. [18:50.480 --> 19:00.800] Okay, so I will quickly show a demo where I have given an example of log forces, [19:00.800 --> 19:21.120] forces CV and to understand how it can prevent the deployment. Just let me show it quickly. 
Then we have admission controllers. An admission controller basically helps enforce security policies before Kubernetes creates a workload, for example a Deployment or a DaemonSet: it intercepts the API request before any program runs in a pod. In StackRox we use the admission controller together with security policies, so that if any policy gets violated, it will immediately prevent the deployment from getting into a running state.

Okay, so I will quickly show a demo where I have taken the example of the Log4Shell CVE, to show how StackRox can prevent such a deployment. Just let me show it quickly.

I hope the screen is visible. Yeah. So this is the cluster dashboard, where I can see the images at most risk and which policies are currently being violated. StackRox provides some default policies following best practices for your security posture, and considering the criticality of Log4Shell, a policy for it is included as well.

You can configure policies in two modes, inform and enforce. Currently, if I look at this policy, it is in inform mode only, so I have edited it and made it enforced. Yeah, so it can execute at the build stage and the deploy stage; I marked it inform-and-enforce and enabled it for the deploy stage.

Right, so once the policy is saved, it will show whether any existing deployments are violating this policy or not. Then, for demonstration purposes, I run a vulnerable deployment which has this Log4Shell CVE; this container image has the vulnerable app.

In a parallel terminal I keep a watch running to trace the events at runtime. As soon as I create this deployment, you will see that the pods are getting terminated because of the policy violation: StackRox won't allow the pod to get into a running state. And in the events you will see that StackRox enforcement has been detected and the deployment has been scaled to zero.

Okay, time is up. I have one more demo, a quick demo; if you would like to see it, let me know. A quick demo? Yeah, that would be interesting.

So in this demo I show how we can leverage the DevSecOps approach to shift security left. For that I have used the pipelines operator, deployed on OpenShift; this operator is using the Tekton framework under the hood. Let's see it quickly. It provides standard CI/CD pipeline definitions in a declarative approach: we can define tasks as well as pipelines, which can then be portable across all your Kubernetes infrastructure.

I have defined these tasks, through which the image will be checked and scanned. In the background, each task is calling the StackRox API through roxctl, a CLI in the same spirit as kubectl: it talks to the StackRox API and performs the scan for the image.

These two tasks, image-check and image-scan, are referenced in the pipeline definition, and there is one more secret where I have provided the StackRox API endpoint and the credentials. So we'll create a namespace called pipeline-demo, then I create the secret as well as the pipeline definition. Next we will execute those tasks; we can describe the resources and see that the pipeline has been defined and that these two tasks are there.
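For reference, here is a skeleton of what such a task can look like. The task name, the roxctl image, and the secret name are hypothetical; only the general pattern, a Tekton task step driving roxctl against Central, is what the demo relies on:

    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: image-scan                             # hypothetical name
    spec:
      params:
      - name: image
        type: string
      steps:
      - name: roxctl-image-scan
        image: quay.io/stackrox-io/roxctl:latest   # assumed roxctl image
        env:
        - name: ROX_API_TOKEN                      # API token, read from a secret
          valueFrom:
            secretKeyRef:
              name: stackrox-api                   # hypothetical secret name
              key: token
        script: |
          #!/bin/sh
          # Ask Central to scan the image and print the vulnerability report
          roxctl image scan \
            --endpoint "central.example.com:443" \
            --image "$(params.image)"

An image-check task looks much the same, with "roxctl image check" evaluating the image against the security policies instead of listing CVEs.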
A pipeline run is not initiated yet, so we'll initiate one and pass the container image that we want to scan; for example, here I have provided mysql:8.0. So the pipeline run has been created, and you can check the logs, the real-time logs, through tkn, the client for Tekton, through which you can perform these operations. It also gives you better visibility in case your tasks are failing: for example, here my credentials had expired, so I had to refresh the credentials and then run the pipeline again.

Now we see the pipeline get into a running state, and the tasks pass. Now we can see all the CVEs that are associated with this particular container image: you get each CVE ID and its CVSS score, and you can share those with your security admins accordingly. You can also check policy violations through the image-check task, to understand which policies have been violated and what their ratings are, whether they are rated as low or moderate or risky.
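From the terminal, starting and following such a run can look like this; the pipeline name here is a hypothetical stand-in for the one defined in the demo:

    # Kick off the pipeline, passing the image to scan as a parameter:
    tkn pipeline start rox-pipeline -p image=mysql:8.0 -n pipeline-demo

    # Follow the logs of the most recent run:
    tkn pipelinerun logs --last -f -n pipeline-demo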
That is it. I have put some handy resources here for you to go ahead and get started with the StackRox community project; you can also hop into our Slack channel. And that is it from my side. So, do we have some questions here?

Thanks for the excellent presentation. I have one question regarding the agent you mentioned a lot, which is scanning and detecting the vulnerabilities. You briefly touched upon Central; if I understand correctly, you are pushing that detection of vulnerabilities into Central. Is that right?

Yes. Central fetches the vulnerability feeds from upstream sources, let's say the NVD database. Every five minutes it keeps checking which vulnerabilities are present upstream, and once they are downloaded, the Sensor and Collector bring the relevant data into your respective Kubernetes cluster.

So what if, while the container and pod are running, the agent suddenly checks the vulnerability database and detects that the version running in the pod has some critical vulnerability? What action would it take?

It actually depends on us, on what actions we want the admission controller to perform. We can have the policy in inform mode, so that we learn a policy is violated and can decide whether it really affects my workload or the runtime, and take action accordingly. If we strictly do not want to allow any deployment to run as soon as the policy is violated, we can put it into enforce mode, and we can decide at what stage we want to terminate it, at the build stage or the deploy stage. It is basically based on your policy.

And Central, is it accessible like a closed environment, or is it open, where anyone anywhere can access it? Can containers running in any cloud access it?

It can be configured in online mode as well as in an air-gapped environment. So again, it depends on your use case and your organizational requirements how you want to install it. In offline mode, you can always download the vulnerability feeds and the kernel probe modules to a secure host and then inject them into Central the offline way.

Okay, so that option is also there. Thank you.

Any other questions? Yes?

I just have a question: can you use StackRox as a honeypot? I mean, can you let the intruder, the security incident, actually proceed and get a description of all the things the attacker is doing? Right now you are basically applying a policy and cutting the thing off. Can you instead isolate the container and let it run, just to get forensics out of it and see how things are behaving?

Yeah, other than policies we can always do risk analysis. Sometimes it happens that a vulnerability is flagged as critical, but in terms of my application I might not have that vulnerable code at the runtime stage, right? So I can always mark that vulnerability as a false positive, or I can defer that vulnerability. Does that answer your question, or did you have something else?

Yeah, I mean, sometimes the scenario is that you have the pod in production and something happens to it, and you want to isolate it but still have forensics. You don't want to just cut it; you want to understand the attack.

So in terms of isolation, StackRox gives us rich context from the UI about the layer at which the vulnerability is present. For example, we can inspect each and every Docker layer; it allows us to see in which component the vulnerability exists. So you can always modify the image, build it again, and patch the changes.

Thank you for the question, and thank you for the talk. I think we are out of time. Thank you.