I am Marco Mancini, a solutions architect at OpenNebula Systems. OpenNebula Systems is the company behind OpenNebula, the open source software. Today I will talk about how you can easily deploy Kubernetes clusters on hybrid and multi-cloud environments using our open source solution.

Let me introduce OpenNebula. OpenNebula was born around 14 years ago as a solution for private cloud computing, for on-premises deployments. It has evolved over the years, and today we provide a solution that lets you manage different types of workloads, from virtual machines to application containers to Kubernetes clusters, along what we now call the datacenter-cloud-edge continuum. So you can have resources on-premises, on public clouds, on the far edge, and so on. What we would like to do with OpenNebula is give you a simple way to manage different types of workloads along this cloud-edge continuum, so that you minimize the complexity of managing them and reduce resource consumption, because you manage all of these workloads across different kinds of resources from one place.

At the core, OpenNebula uses different virtualization technologies: VMware and KVM for virtual machine workloads, LXC for system containers, and Firecracker for micro-VMs on which you can deploy container-based applications. On top of these technologies we provide advanced features such as multi-tenancy and self-service, so you can provision resources on any of these virtualization technologies. We have a graphical user interface where you can manage all your resources across this continuum, and we have integrated different third-party tools, from Terraform to Ansible to Kubernetes and so on.

Our vision for multi-cloud is to provide an easy way to automatically provision resources across multiple cloud providers. What we have built is a tool called OneProvision. As you can see at the bottom, it has a graphical user interface, but there is also a command line interface, and with it you can create resources on different providers. At the moment we support providers that offer bare-metal servers, such as AWS and Equinix, but we can support other providers as well; we just need to write the drivers that allow us to provision resources on them. Behind the OneProvision tool we use open source tools like Terraform and Ansible.
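A minimal sketch of what that provisioning step looks like from the command line (the template file name is illustrative; ready-made AWS and Equinix provision templates ship with the OneProvision packages):

    # create an edge cluster from a provision template
    # (file name is illustrative)
    oneprovision create aws-london-edge.yml

    # check the provision and the new hosts registered in OpenNebula
    oneprovision list
    onehost list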
With this OneProvision tool we can build what we call edge clusters. An edge cluster, for OpenNebula, is an abstraction that gives you compute, storage, and networking. Once an edge cluster is provisioned, it can be managed from one uniform place: our Sunstone graphical user interface, or the command line interface. So from a single panel you can manage all your clusters across different providers as well as your on-premises resources.

On top of that we have the concept of the marketplace, where you can have appliances; we have also integrated Docker Hub, so you can have Docker images to deploy as well. So you can deploy virtual machines, multi-VM services, containers, and Kubernetes clusters across the different resources that you have provisioned. This is how we manage a multi-cloud environment: with the OneProvision tool, the graphical user interface, and the marketplace.

Let me also introduce how Kubernetes is integrated in OpenNebula. For us, Kubernetes is just a service: we have built an appliance, and I will talk shortly about how we have built it. As I said, you manage Kubernetes with the same tools we use for managing any application, and you can deploy it on different edge clusters, exploiting all the features that we have. Since OpenNebula is a multi-tenant environment, you can deploy Kubernetes clusters for all your tenants; even when they are deployed on the same shared physical resources, they are isolated in a secure way, because they run on top of our virtualization technologies. And you are not locked in to any vendor, because you can deploy your Kubernetes clusters on any cloud, edge, on-premises, or far-edge provider that you would like to integrate into your hybrid infrastructure.

The way we have integrated Kubernetes in OpenNebula is by defining an appliance called OneKE. This is a complete Kubernetes deployment: it is based on RKE2, and we currently use Kubernetes version 1.24. When you deploy this appliance, all the features are included, so you do not have to deal with deploying a storage solution, ingress controllers, or load balancing yourself. These are the technologies we use at the moment; on our roadmap there are some features we would like to add, especially a better integration with some of OpenNebula's own features.
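A minimal sketch of how an appliance like OneKE can be pulled from the marketplace with the CLI (in the demo later this is done through Sunstone; the appliance name, datastore, and exact option syntax here are illustrative):

    # find the Kubernetes appliance in the public OpenNebula marketplace
    onemarketapp list | grep -i oneke

    # export its templates and images into a local datastore
    # (appliance name and datastore flag syntax are illustrative)
    onemarketapp export 'Service OneKE' OneKE -d default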
But at the moment this is the solution we have, based, as I said, on RKE2. These are the components of the OneKE appliance. It is built on OneFlow, a component of OpenNebula that allows you to define multi-VM applications. In a OneFlow service you can have different roles, and for the Kubernetes appliance we have defined several.

First there is the VNF role. This is the load balancer for the control plane, but it also does NAT and routing, because there are two networks within the appliance: a public network, and a private network between the different components. So the VNF also allows the VMs on the private network to communicate with the outside, public side. Then we have the master role, whose job is to run the control plane: the etcd database, the API server, and so on. Then there are the worker nodes, which you use for whatever workloads you want to deploy on your Kubernetes cluster. And finally we have the storage nodes. These are dedicated, so they are not used when you deploy workloads; they serve only your storage needs, and we use Longhorn for persistent volumes within the OneKE service.

As I said, the VNF, this virtual network function, provides a load balancer, and you can have multiple VNFs in a high-availability mode. Keep in mind that OpenNebula offers the abstraction of virtual machine groups. Usually, to make a solution highly available, you want your virtual machines deployed on different hosts, so you can use OpenNebula VM groups with affinity rules and your VMs will be placed, for example, on different hosts. This is valid for any of the roles you have just seen: for any role, you can use VM groups to make the solution highly available. By default OneKE creates just one VM per role, but you can modify and scale the solution so that each role has multiple VMs.

For persistent volumes, as I said, we have the storage nodes where we deploy Longhorn, so you can have replicas of your volumes on different VMs of the storage role. Then, to reach the services you deploy inside your Kubernetes cluster, there is the ingress controller: we deploy an ingress controller based on Traefik, which can be used for the HTTP and HTTPS protocols, and you expose a service by simply defining an ingress resource for it.
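A minimal sketch of such an ingress resource (the service name, port, and path are illustrative, and OneKE's Traefik is assumed to act as the default ingress class):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-ingress
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service   # illustrative service name
                port:
                  number: 80
    EOF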
And then we have also integrated MetalLB for LoadBalancer services, so you can use that for other kinds of protocols that are not HTTP-based.

I would like to move on, because I have about five minutes left. I have prepared a demo to show you how you can use OpenNebula. I will show you how to use OneProvision to provision resources on AWS and Equinix; then we will deploy a Kubernetes cluster on each of the two edge clusters, on these two public cloud providers; and then we will access one of the Kubernetes clusters and deploy an application. Let me switch to the demo.

This is the Sunstone graphical user interface you can see here. If we go to clusters, we have just the default cluster, but there are no hosts, only datastores; at this moment we have just our front-end, without any resources. Now we go to OneProvision. We have already defined two providers, one for Equinix and one for AWS, and once these providers are defined, you can create clusters on them. So we are going to create a cluster, for example, on AWS; in this case we have defined a provider for the AWS London zone, and this will now create an edge cluster on AWS. As I said, we use Terraform and Ansible to create and configure the resources, so that you end up with an edge cluster for OpenNebula. And here I am going to create another cluster, this time on Equinix. Of course there are some parameters: you can define the number of hosts, the number of public IPs you would like to have, and so on. By the way, with OneProvision you can define two types of clusters: an edge cluster, which is the basic one, or a hyperconverged Ceph cluster.

As you can see here, once you use OneProvision, the Sunstone graphical user interface shows the hosts that are being provisioned. It will take around five to ten minutes; it depends on the cloud provider how much time it needs to create the resources. Once the resources have been created, you can see the two clusters here. To use the Kubernetes appliance, the next thing we have to do is define a couple of private networks, one for Equinix and one for the AWS cluster. This is simplified because we create a template; you just instantiate the template, and then you can create the private networks, both for AWS and Equinix.
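A minimal sketch of what such a private network could look like from the CLI (the driver, address range, and cluster reference are illustrative; in the demo this is done from a pre-created template in Sunstone, and depending on the host network setup extra attributes may be needed):

    cat > aws-cluster-private.net <<'EOF'
    NAME   = "aws-cluster-private"
    VN_MAD = "vxlan"                                  # driver is illustrative
    AR     = [ TYPE = "IP4", IP = "10.0.100.1", SIZE = "100" ]
    EOF

    # create the network and attach it to the AWS edge cluster
    onevnet create aws-cluster-private.net
    onecluster addvnet <aws_edge_cluster_id> <vnet_id>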
We need the private network for the internal VMs, the roles like master, storage, and the worker nodes, and we need the public network for the VNF, which is our main endpoint for accessing the Kubernetes clusters.

Now what we are going to do is import the OneKE appliance from the marketplace into our OpenNebula instance. You only have to do this once. When you import the appliance, what gets imported are the VM templates for each role, the template for the service, which is based on OneFlow, and the images related to the different roles. So, to create a new Kubernetes cluster, all we have to do is instantiate the service, selecting the appropriate networks. In this case, you can see I am creating a cluster on AWS: for the public network I select the AWS cluster's public network, for the private network the AWS private one, and then I just have to set a couple of internal IPs, on the internal network, for the virtual IP of the VNF and for the gateway. We can do the same for Equinix, selecting the Equinix public network and the private network we defined; in this case I have used the same addressing for both clusters.

And here you see that we are now deploying the two Kubernetes clusters on the two different edge clusters, on AWS London and on Equinix. As you can see, the first role that is deployed is the VNF. In OneFlow you can define dependencies, so once the VNF is up and running, OneFlow goes on to deploy the other roles: the master, the worker, and the storage node.

To access the Kubernetes clusters, you use the public IP of the VNF. You can use SSH agent forwarding: first connect to the VNF, and then connect to the master using its private IP. Here we can see the nodes. As I said, by default you have one node for each role; clearly this is not for a production environment. If you want a production environment, you need to scale each role. Here I just created an image, and I also prepared a YAML manifest file for exposing the service through the ingress controller, so you can then use the public IP of the VNF to access the service.

Clearly, OpenNebula does not provide any tools for managing the deployment of applications on Kubernetes; we manage the infrastructure and the deployment of the Kubernetes cluster itself. For the applications you can use kubectl, you can use Rancher, you can use other open source tooling, which maybe in the future we will integrate as well.
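A minimal sketch of that access path, and of scaling a role from the CLI (IPs, user, service ID, and cardinality are illustrative, and kubectl is assumed to be already configured for that user on the master):

    # hop through the VNF with agent forwarding, then reach the master
    # over the private network (IPs and user are illustrative)
    ssh -A root@<VNF_PUBLIC_IP>
    ssh root@<MASTER_PRIVATE_IP>
    kubectl get nodes -o wide

    # scale a OneKE role through OneFlow, e.g. grow the workers to 3
    # (service ID and role name are illustrative)
    oneflow scale <service_id> worker 3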
As you can see here, by using the public IP of the VNF, I have access to the NGINX application I deployed. One more thing: you can scale the roles after deployment. In this case I can scale, for example, the worker role; you just enter the number here. We use OneFlow, and OneFlow allows us to scale the cluster role by role. And now you can see that another worker is being deployed.

That was the demo, and I think that's the conclusion. Thank you. Thank you all.