So, last session for today, and we will make sure it's going to be a really, really long one, so that you have to starve and don't get to the drinks. I'm happy to welcome Tom from Red Hat. Have fun, and enlighten us.

Thank you. Should I have fun, or them? ... What? ... Should I have fun, or them? ... Both? Okay.

Hello, everyone. My name is Tom Coufal. As you heard, I work at Red Hat, and I think the last talk was a great segue into my talk. So, if you were here for the previous presentation — who was here? Good. That talk was about standardization, a call for a unified platform, a call for sharing and exchanging ideas, knowledge and findings, and how to get to some kind of an open, unified, sovereign cloud. Well, we've been working on something like that for the past two years or so, in an initiative called Operate First: building an open hybrid cloud platform that is ready for everybody to consume, to use, to look into the operations, into the metrics, into whatever telemetry there is — and to actually do the operations yourself if you want to. So this talk is going to be focused precisely on that: sharing a story, a lesson we learned along the way, and hopefully taking it as an opportunity not just to share it with you, but also to encourage you to learn those lessons for yourselves and experience our pains and our challenges yourself. So, let's dig in.

The talk is called Effective Management of Kubernetes Resources, the GitOps Way — GitOps for cluster admins. First we're going to talk a bit about what a cluster lifecycle is and what the role of cluster operations is in it. Then we're going to experience the chaos that is out there in the world, and then we're going to talk YAML. If you were at the YAML lightning talk, this is going to be a slight variation of that, but more Kubernetes-focused. And then we're going to bring some order to that chaos.

So, we have these three graces of cloud management. We usually provision some resources; we manage those clusters once they are deployed, once they are provisioned; and we then deploy applications on top of them. If we're talking about Kubernetes-based cloud systems, these are the usual three pillars — the three graces — of what we experience. We have tools available for both ends of the spectrum.
For resource provisioning, we have great tools like Ansible, Terraform, Hive, or Cluster API in Kubernetes. This is an established pattern, an established workflow that is widely used across hyperscalers, across people who deploy Kubernetes by themselves, and so on. Good — this is a solved issue, a non-issue.

Then there's application maintenance, application deployment, the application lifecycle. Again, a very well-thought-through aspect, a very well-studied space. We have tools like Kustomize and Helm. We have Argo CD or Flux CD to do continuous deployment of your workloads and to provide you with all the goodies, like rollbacks to the previous non-broken deployment, and taking it even further with other projects like — and now I forgot the name. What were we talking about last? The SRE-driven deployment? No? Okay, let's move on.

What about the middle part, the cluster management itself? If we are managing Kubernetes resources, what are we talking about? If we are managing nodes, if we are managing tenancy, if we are managing networks, what are we actually talking about, and how can we manage it?

We have these four main problems that we want to solve somehow. We found out that nowadays — it wasn't the case two years ago, but it is now — we can solve all of them through Kubernetes-native resources, through YAML, through applying YAML to our clusters. It's done by a few different means, so there are main areas within the Kubernetes API that we can explore to cover these needs. Multi-tenancy we can solve with plain namespaces, cluster roles and whatnot. Cluster upgrades: again, we can install operators, talk to those operators and get the clusters upgraded. For storage management, we can use operators, storage classes and storage providers, and custom resources if we want to deploy our own storage on, for example, bare-metal clusters. Network management we can also do through operators — things like cert-manager, things like NMState — all of that can now happen natively through the Kubernetes API.

That's great. What does that tell us about cluster management? It can all be managed as a Kubernetes application. It's in YAML. Well, YAML is a mess. We know that, and we know it thanks to multiple aspects. YAML can be defined and stored in files no matter how you structure it.
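To make that point concrete, here is a minimal sketch of "cluster management as plain Kubernetes resources" for the tenancy case. The namespace, quota and group names are hypothetical illustrations, not taken from the talk:

```yaml
# Tenancy expressed as ordinary Kubernetes objects: a namespace, a quota
# limiting it, and an RBAC binding granting a team admin access inside it.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-project
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a-project
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
  namespace: team-a-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: team-a
```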
It can be a single file with many different resources, or it can be many different files, each holding a separate resource, and only the asterisk in Bash is the limit for your kubectl apply. You can do whatever you like on the client side. On the other hand, on the server side, the manifest that you apply to the cluster is not the same one you get back from the cluster. It's modified, it's mutated: you have things like status, some operators and controllers also modify the spec, some modify annotations, labels and whatnot. You don't have full control over the definition. You need to know which subset of the keys and values you can actually define as a declarative manifest for your resource — it's not the same as the manifest applied on the cluster.

So how do people store manifests online? If you pull a random project on GitHub that is deployed to Kubernetes, you will find many of these patterns. kubectl doesn't have ordering, so people solve it creatively by numbering their manifests. Some are aware that their application may run in different environments, so they create different files with duplicated content — the same deployment with just a few lines changed here and there. Some combine those approaches. And in some projects — we find this even in some controllers, for their dev setup — there is a single file with all of those resources inlined. This is not a standard, this is not good practice, and if we want to manage an environment which is live, which should be approachable to people, this is not the way we should do it.

So in the application space we have basically two choices for how to organize, how to structure our manifests. One is Helm, which is great if you're deploying applications and you want some templating involved — if you want to quickly change many different places in the same manifest, or in different manifests. You basically create a template, apply some values, and you get the full YAML that we saw earlier. Great. But is it readable? Is it understandable from the YAML itself, without rendering? We don't think so. And we want our cloud manifests to be auditable, approachable, reviewable. So if we want to be able to explore what a change does during a PR review without actually spinning up a cluster and applying that PR, and maybe do some static validation on it — can you do that with this? You would need to render it, you would need to understand it. And if you change a template and you have different values for different environments, how does it affect the template itself?
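The readability concern is easier to see side by side. Below is an illustrative, hypothetical Helm-style template fragment — nothing here comes from the talk's repositories — showing why the file alone doesn't tell a reviewer what actually lands on the cluster:

```yaml
# templates/deployment.yaml -- hypothetical Helm template; every {{ ... }} is an
# indirection, so the reviewable content only exists after rendering
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# What reaches the cluster is only known after rendering the chart for one
# specific environment, e.g. `helm template -f values-prod.yaml .`:
#   replicas: 5
#   image: "registry.example.com/myapp:1.4.2"
```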
So you need to explore all the possibilities, and this is one of the biggest challenges in the Helm space that we are currently facing in application development.

Then we have the other way, the Kubernetes-native configuration management, Kustomize, which is a bit nicer. All of those manifests are fairly easily organized and referenced through a kustomization, and all those kustomizations are organized into bases and overlays. So it's a composition type of configuration: we have a base, which defines the basics, and then we have the overlay, which can patch and mix different resources. The resources in the base are already complete — a complete definition, a complete declaration of my resource. That is reviewable.

So we thought this might be the way, but before that we defined a couple of rules, a couple of directives that we wanted to achieve. If we want to organize our manifests, we don't want to build our own solution. We don't want to build our own CI/CD that would understand our manifest structure; we want to use something that is readily available, with a great community. We want the configuration to be stable: if I change one manifest, it shouldn't break five different clusters. And since the things that never happen usually do happen, if something like that does happen, I can roll back the faulty cluster — just that individual cluster. I don't need to roll back all the clusters that are working fine with that particular configuration. And it's unit-testable, which is also an important thing.

File mapping is also a very interesting topic, because YAML allows you to inline multiple resources into a single file. But we don't want that. We want the file and its name to fully represent the resource, so that before I even open the YAML, I already know what to expect inside. I don't have to guess from a def.yaml, or from a namespace.yaml which also contains, say, an OpenShift Project or whatnot. Each file is readable without processing — it's self-explanatory. I want to be able to open the code tab on my GitHub repository and understand the manifest.

And if I'm defining the same resource on multiple clusters — if I'm applying the same resource on multiple clusters, let's say I have the same user group on two different clusters — I want to apply similar or the same RBAC: the same cluster roles, project, namespace, permissions and whatnot. I don't want this definition to be spelled out — maybe differently, maybe slightly differently, maybe identically — in two different places. I want to share the same definition.
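A minimal base-and-overlay sketch of this composition model; the layout and names are hypothetical, and each `---` section stands for a separate file:

```yaml
# base/kustomization.yaml -- the base holds complete, reviewable resources
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - deployment.yaml
---
# overlays/production/kustomization.yaml -- composes the base and patches only
# what differs for this particular environment or cluster
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-count.yaml   # strategic merge patch
---
# overlays/production/replica-count.yaml -- the patch itself is plain YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
```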
That's a practice we have used in programming for ages, but it is not a well-established pattern for Kubernetes manifests. We want to reuse stuff. And, as I said before, the file name already describes what's inside.

So we came up with this pattern — a pattern that has since been adopted by a couple of organizations that I'll show you later on. We have a base for Kustomize which references every single object that we deploy to any of our clusters that requires elevated permissions. If those resources are standard namespace-scoped things — a deployment, a config map, a secret, whatnot — that is the developers' responsibility: they live in their own self-contained namespace and they can do whatever they want in there. But if we are talking about creating namespaces or cluster roles, we don't want developers to create namespaces on their own, or create limit ranges or resource quotas on their own. We want to set those things for them, because we don't want them to expand and take over the cluster unless we allow it.

So this pattern of api-group/kind/name actually works, because already from a path like base/core/namespaces/sovereign-cloud, or base/fosdem.org/talks/my-talk, I know what the resource is about without actually looking into the file.

Then I have overlays, where each overlay represents a single cluster. Each has a kustomization which mixes and matches whatever resources I want to pull from the base. And if I want to change something from the base, I just patch it, because Kustomize allows us to patch resources with either a strategic merge patch or a JSON patch, so I can do various things with that. This is very helpful if I have, for example, a cluster-admins group and I want different cluster admins on different clusters, but the group itself is already defined in the base.

Well, this is nice, but it doesn't work in all cases; it doesn't solve all the issues. So we had to introduce two additional concepts. One is components, an alpha extension to Kustomize which allows you to reuse the same manifest multiple times. This is important in cases like RBAC, where we have role bindings that we want to apply to multiple namespaces — like granting a user group admin access to a certain namespace — because Kustomize by itself wouldn't allow us to use that resource multiple times. It's a limitation of Kustomize in this particular case that can be overcome through components. And then we came up with bundles, an addition that basically selects related resources from the base which are always applied together.
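Here is one way such a reusable piece of RBAC could be packaged as a Kustomize component. The file layout and group name are made up, and it assumes a kustomization may consist of a component plus a `namespace:` transformer that stamps the RoleBinding into each target namespace — a sketch of the idea, not the projects' actual wiring:

```yaml
# components/team-a-admin/kustomization.yaml -- a reusable chunk of RBAC
# packaged as a Kustomize component (alpha feature)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
  - rolebinding.yaml
---
# components/team-a-admin/rolebinding.yaml -- deliberately no namespace here,
# so the same binding can be reused for several namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: team-a
---
# overlays/my-cluster/team-a-project-rbac/kustomization.yaml -- include the
# component once per namespace; the namespace field targets the binding there
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: team-a-project
components:
  - ../../../components/team-a-admin
```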
So imagine you want to install cert-manager. It's always a namespace, it's always a service account with a cluster role, it's always a subscription — or a cluster issuer for certificates. All of these things come together, so they are referenced as bundles, and we don't clutter the overlays too much. And we also introduced common overlays, which are region-specific and shared by the clusters of a region, because for some regions we have a shared config.

So what does such a single-cluster overlay kustomization look like? We reference the common overlay — we take everything from common, which itself references some things from the base and whatnot. Then we can, for example, deploy our custom resource definition for Prow this way, we can create a namespace for Prow, and we can apply some RBAC for the node labeler. We can install the whole cert-manager bundle as-is, and that ensures cert-manager is deployed and configured properly for this cluster. We can also specify a particular version for that particular OpenShift cluster, to upgrade it or do maintenance on the OCP version. And if we want to, we can patch certain resources — as I mentioned, the cluster-admins group.

So it's a fairly simple pattern, but it's been a two-year journey to get to a state where it actually works across regions, where it actually works across multiple clusters, and where it's efficient at managing multiple clusters through PRs, through GitOps, through single-file YAML-based changes, so that a change doesn't break all the clusters. What I didn't mention on this slide: each cluster has its own separate Argo CD application. They act independently in the CD process — they reference the same code base, but they are independent, so rollback is possible.

So, in conclusion, to evaluate what we did here: we have no duplicity. Manifests are readable; manifests are not confusing. The set of rules is fairly simple, nothing very complex or bulky. The CI/CD is very easy, and we can do static validation, unit tests and integration tests — all of that can be done fairly nicely. What are the downsides? We have boilerplate in the form of kustomizations, components, and very nested directory structures and whatnot. Kustomize is not always very straightforward, so you need to learn the tool before you can use it. What also limits our static schema validation is that manifests in the base can be partial — they are not always complete, because we expect to patch them in the overlays, for example to set a specific channel for an operator subscription and whatnot.

So that's that. We have four organizations currently adopting this scheme and running it.
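Pulling those pieces together, a single-cluster overlay kustomization along the lines described here might look like the following sketch; every path and name is hypothetical:

```yaml
# overlays/my-cluster/kustomization.yaml -- one overlay per cluster, mixing and
# matching entries from the common overlay, the base, and the bundles
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../common                                                  # shared regional config
  - ../../base/apiextensions.k8s.io/customresourcedefinitions/prowjobs.prow.k8s.io
  - ../../base/core/namespaces/prow
  - ../../bundles/cert-manager                                 # related resources applied as one unit
patches:
  - path: cluster-admins.yaml      # add this cluster's admins to the shared Group
  - path: clusterversion.yaml      # pin the desired OpenShift version for this cluster
```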
We have the Operate First Community Cloud, the New England Research Cloud, the Massachusetts Open Cloud and the Open Source Climate alliance, all running on this pattern.

So this is a lesson we learned through collaboration in cloud operations, and I hope we may be able to learn more such lessons in the future by exploring the cloud together. If you want to know more, you can join us at Operate First. You can see our ADRs and how we arrived at these outcomes, and at the last link over here you can actually see the code base that we are running against all of those clusters. Thank you very much.

Thank you for the talk. We use the same pattern, but one of the issues is manifest incompleteness, and we fixed it by adopting an approach where the required attributes are defined in the base with a dummy value and then completed in the kustomization overlay. So you put in, say, a dummy value, and then you know that the manifest is valid YAML because it matches the spec fully, but you also see visually that that particular field will be patched in an overlay. We use that as a solution for the manifest incompleteness and the static validation: we always complete it in the overlay, and then we know what we are going to do there. That's just the solution that we use — I don't know if there is a better way or a better word for it, but that's our approach.

We use the same, but it doesn't work in every case. In some cases the schema is very detailed and requires a complex nested structure. For example, cert-manager requires solvers, and if you define a solver, you can't remove it in a patch, because it's a mapping and strategic merges don't work that way in Kustomize. You would need a JSON patch — a long JSON patch — and it becomes less and less clear at that point.

I think another thing that we do is that we have, for example, a common base like you, and then a non-production base and a production base — for example for the admin groups. So we have a group of admins for non-production, but we don't have the full group of admins in the base, and then we add to it from non-production to production, depending on whether we need one group or the other. That's another approach that we...

Thank you. I think I forgot.

And then the last one.
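A minimal sketch of that dummy-value-plus-overlay-patch idea, combined with the JSON patch mechanism mentioned in the answer; the Subscription below and its channel values are hypothetical examples, not the projects' actual configuration:

```yaml
# base/operators.coreos.com/subscriptions/cert-manager/subscription.yaml
# A required field carries a placeholder so the base still validates against the
# schema, while signalling that it must be completed per cluster.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cert-manager
  namespace: openshift-operators
spec:
  name: cert-manager
  source: community-operators
  sourceNamespace: openshift-marketplace
  channel: PATCHED-IN-OVERLAY   # dummy value, replaced in each overlay
---
# overlays/my-cluster/kustomization.yaml -- completes the placeholder for this cluster
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/operators.coreos.com/subscriptions/cert-manager
patches:
  - target:
      kind: Subscription
      name: cert-manager
    patch: |-
      - op: replace
        path: /spec/channel
        value: stable-v1
```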
In this case, when you have a couple of bundles, maybe it's easy, but when you have a cluster with 12 or 15 bundles it can get a little bloated, having a single Argo CD app managing all the applications of a single cluster. We use the approach of having one app for the cluster deployment with Hive, and then each operator has its own tree, so we have independent applications. When, for example, an operator breaks, it doesn't break the entire Argo CD app of the cluster — it only breaks its own Argo CD app. And when we need to upgrade, we think it's safer, because you are really, really scoped and you cannot break the entire cluster, just a single application.

Yeah, we do the same for operators which have specific deployments and whatnot. If we can deploy an operator through a subscription to the OpenShift operator catalog, OperatorHub, we can do that through a single resource, and then it's not bloated that way. So, yes — same lesson: we faced the same issue and we solved it very similarly. We work at Red Hat, but we arrived at the same approach independently.

In the front.

Good, great — sounds great. We should talk afterwards.

Yeah, hi, really nice talk. Thank you.

Thank you.

We build a lot of internal developer platforms, and we face the same issue where we kind of lose track of the code bases. Do you implement any repo scanning or file-structure scanning that makes sure this layout is enforced across your Kustomize manifests? And, kind of a two-parter: do you just block all use of Helm charts? Because everything has a Helm chart nowadays, and it would be kind of limiting to have to rewrite something in this format if there's an existing Helm chart or an existing kustomization — or is this only for internal YAML? Thank you.

So, we enforce this only for resources that require elevated permissions. If you have a Helm chart that is deploying a custom resource definition, then we tell you this is not a good thing, you shouldn't do that — and the API wouldn't allow you to do it anyway, through our RBAC settings. So we basically tell those people: you need to get that CRD into our repository, check it into our base for resources which require cluster-admin or elevated permissions. Because if we referenced it from somebody else's repository, they could change it in their repository, and we don't want that. And we don't want them applying CRDs themselves, because those are shared on the cluster, and if two people on the same cluster deploy the same Helm chart in different versions with different CRD schemas, they can fight — and we don't want that.
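For the per-cluster (or per-operator) Argo CD application split discussed here, this is a hedged sketch of what one such Application resource could look like; the repository URL, paths and cluster address are placeholders, not the projects' real values:

```yaml
# One Argo CD Application per cluster, pointing at that cluster's overlay in the
# shared repository, so each cluster syncs and rolls back independently.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-scope-my-cluster
  namespace: openshift-gitops
spec:
  project: cluster-management
  source:
    repoURL: https://github.com/example/apps.git
    targetRevision: main
    path: cluster-scope/overlays/my-cluster
  destination:
    server: https://api.my-cluster.example.com:6443
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```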
So that's why we want a single source of truth for all the resources that are cluster-scoped or require elevated permissions. Helm charts are allowed for developer and application workloads in their own namespace, self-contained — or across all of their namespaces if they have more — but not, under our watch, for the elevated permissions.

Thank you.

You said that when you have several clusters, you can limit what the developers or the users of the cluster can deploy — but how do you manage that? For example, we use Prow, so we have a chatops interface and we have ownership, so each environment has a set of owners, but we cannot limit it. A developer can create a kustomization that adds a new namespace, and statically we cannot limit what kinds of resources are going to be created by the developers inside their cluster tree. How do you handle this?

So, if it's deployed from our overlays, we would know about it; and if it's deployed from their own kustomization, their own repository or whatever, they wouldn't have the permissions.

To create specific resources?

Yes.

How do you manage that limitation?

So — maybe I don't understand the question — but if I have a developer who has access to a set of namespaces, they can deploy only to that set of namespaces. And if they onboard onto our Argo CD to manage their application through our Argo CD, they get their own specific Argo CD project, which also restricts the RBAC, so they won't be able to deploy to any cluster — just to the cluster they have access to, and just to the namespaces they have access to.

Okay, so the cluster resources are only managed by the operations team. In our case we have a mixed setup, so developers can create patches and edit parts of the cluster's tree, and we don't know how to enforce that they only create a specific set of resources. We do that through validation — they can review — but we need to approve and manually verify that they are not creating namespaces or operators or cluster roles or something like that.

Yeah, we limit that through a single code base, basically — a single repository. But we also do the Prow chatops thing and the ownership, and that's a great addition.

Any more questions? Okay, then we call it a day. Thank you so much.