[00:00.000 --> 00:18.000] Welcome to the virtualization room. Simone will talk about OKD virtualization.
[00:18.000 --> 00:20.000] So, hi all.
[00:20.000 --> 00:25.000] Nice to see you here.
[00:25.000 --> 00:29.000] Today we are going to talk about OKD virtualization.
[00:29.000 --> 00:38.000] We are going to have just a quick intro for those who don't really know what OKD virtualization is, just to get a bit of context.
[00:38.000 --> 00:49.000] Then we are going to see how you can use CRC to play with OKD virtualization at home, with a really small footprint.
[00:49.000 --> 00:56.000] Just if you want to try it, or if you want to start developing on OKD virtualization.
[00:56.000 --> 00:59.000] We are going to see a couple of new features.
[00:59.000 --> 01:02.000] I chose these because they are cloud-native.
[01:02.000 --> 01:10.000] I think that they are a bit different from what you are used to seeing in similar kinds of products.
[01:10.000 --> 01:14.000] And then we are going to see the next challenges for us.
[01:14.000 --> 01:16.000] So, let's start from OKD.
[01:16.000 --> 01:18.000] What is OKD?
[01:18.000 --> 01:22.000] OKD is a distribution of Kubernetes.
[01:22.000 --> 01:32.000] It's a sibling distribution of OpenShift Container Platform, which is the Red Hat distribution of Kubernetes.
[01:32.000 --> 01:39.000] OKD is the community upstream release of it.
[01:39.000 --> 01:46.000] It's based on physical machines that can be bare metal, or virtual machines on a hyperscaler.
[01:46.000 --> 01:53.000] In our case, it would be better to use bare metal nodes, just because we are talking about a virtualization solution.
[01:53.000 --> 02:01.000] Then we are going to have hosts there that nowadays are based on Fedora CoreOS,
[02:01.000 --> 02:04.000] but then we are going to see that this is going to change.
[02:04.000 --> 02:11.000] Then we have all the Kubernetes stack, and you can use it to start your applications.
[02:11.000 --> 02:14.000] Now, what is KubeVirt?
[02:14.000 --> 02:21.000] KubeVirt is a set of virtualization APIs for Kubernetes.
[02:21.000 --> 02:31.000] So you can extend Kubernetes in order to be able to run virtual machines that run in containers on your Kubernetes infrastructure.
[02:31.000 --> 02:35.000] At the end, it's still using the KVM hypervisor.
[02:35.000 --> 02:44.000] It's able to schedule and manage virtual machines as if they were native Kubernetes objects.
[02:44.000 --> 02:49.000] What is its main advantage?
[02:49.000 --> 02:51.000] It's that it's cloud-native.
[02:51.000 --> 02:56.000] It means that you can use all the Kubernetes stack.
[02:56.000 --> 03:04.000] So the Container Networking Interface for the network, and the Container Storage Interface that you are already using in Kubernetes for the storage.
[03:04.000 --> 03:12.000] It's based on custom resource definitions and custom resources, which are a way to extend Kubernetes with new APIs.
[03:12.000 --> 03:23.000] It can schedule virtual machines as native Kubernetes objects, and you can have them talk with what you already developed as microservices.
[03:23.000 --> 03:33.000] So in an ideal world, you are going to rewrite your application from scratch, completely split into microservices.
[03:33.000 --> 03:40.000] In the real world, probably you have a bit of legacy code, or something that is already running in a virtual machine.
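To make "virtual machines as native Kubernetes objects" concrete, here is a minimal sketch of a KubeVirt VirtualMachine custom resource; the VM name and the containerdisk image are illustrative choices, not something from the talk:

```sh
# Minimal sketch of a KubeVirt VirtualMachine defined as a native
# Kubernetes object; name and containerdisk image are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true                # start the VM as soon as it is defined
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi        # scheduled by Kubernetes like any pod
      volumes:
        - name: rootdisk
          containerDisk:       # disk image pulled from a container registry
            image: quay.io/containerdisks/fedora:latest
EOF

# The running VM shows up as a VirtualMachineInstance next to your pods:
kubectl get vmis
```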
[03:40.000 --> 03:50.000] Are you supposed to schedule it on an external hypervisor, or on the same infrastructure that you are using for your microservices,
[03:50.000 --> 03:55.000] with the capability to have them talking natively to your virtual machines?
[03:55.000 --> 03:59.000] KubeVirt is the response to this challenge.
[03:59.000 --> 04:04.000] Now, how can you test it at home?
[04:04.000 --> 04:06.000] You can easily try it with CRC.
[04:06.000 --> 04:16.000] CRC is a really quick way to start playing with, debugging, and hacking on OpenShift in general.
[04:16.000 --> 04:26.000] CRC is a micro distribution of OpenShift that runs in a virtual machine that can be executed on your laptop.
[04:26.000 --> 04:30.000] It's absolutely not intended for production usage.
[04:30.000 --> 04:33.000] It's going to be executed in a virtual machine.
[04:33.000 --> 04:36.000] So it's a single-node cluster.
[04:36.000 --> 04:38.000] It's not going to scale out.
[04:38.000 --> 04:40.000] It's not going to support upgrades.
[04:40.000 --> 04:43.000] It's just a test platform.
[04:43.000 --> 04:49.000] So here are just a few instructions if you want to try it at home.
[04:49.000 --> 05:01.000] Since we are talking about a virtualization product, and you are running it in CRC, which is a virtual machine as well,
[05:01.000 --> 05:11.000] you need to enable nested virtualization on your laptop in order to be able to start virtual machines inside the CRC one.
[05:11.000 --> 05:14.000] Then you can tune the configuration.
[05:14.000 --> 05:17.000] Normally, CRC comes with a really small configuration.
[05:17.000 --> 05:21.000] If I'm not wrong, it's about 9 gigs of RAM, which is not that small,
[05:21.000 --> 05:25.000] but it's just enough to run OKD by itself.
[05:25.000 --> 05:29.000] If you want to think about playing with a couple of virtual machines and so on,
[05:29.000 --> 05:37.000] it's better to extend the memory up to at least 20 gigs in order to be able to do something realistic.
[05:37.000 --> 05:44.000] It's also nice that CRC already comes with the KubeVirt hostpath provisioner,
[05:44.000 --> 05:51.000] which is a way to dynamically provision persistent volumes (PVs) for your virtual machines.
[05:51.000 --> 05:57.000] As you can imagine, a pod is something ephemeral, while your virtual machine needs a persistent volume.
[05:57.000 --> 06:03.000] You need a way to provide persistent volumes for your virtual machines.
[06:03.000 --> 06:08.000] CRC is just a virtual machine where you can run other virtual machines inside,
[06:08.000 --> 06:13.000] but you still need a mechanism to provide persistent volumes for that.
[06:13.000 --> 06:22.000] It's already integrated in CRC, but you have to extend its disk in order to have a bit of space to create disks.
[06:22.000 --> 06:28.000] At the end, you just have to execute a couple of commands, crc setup and crc start.
[06:28.000 --> 06:35.000] After a few minutes, you are going to gain access to your environment.
[06:35.000 --> 06:40.000] Of course, you can do everything also from the command line.
[06:40.000 --> 06:46.000] Probably many of you are going to prefer using the command line here.
[06:46.000 --> 06:52.000] I mean, I attached screenshots to the presentation just because they are nicer.
[06:52.000 --> 07:02.000] So you can connect to the user interface, to the admin user interface, where you have the OperatorHub page.
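As a rough sketch, the CRC preparation just described comes down to a few commands; the memory and disk values below are the sizes suggested in the talk, and the exact flags and defaults may differ between CRC releases:

```sh
# Sketch of the CRC tuning described above; check `crc config --help`
# on your release, since flags and defaults change over time.

# Nested virtualization must be enabled on the laptop (Intel KVM example):
cat /sys/module/kvm_intel/parameters/nested   # should print Y (or 1)

crc config set memory 20480     # ~20 GiB of RAM, enough for a few test VMs
crc config set disk-size 100    # extra disk space for VM persistent volumes

crc setup     # one-time host preparation
crc start     # boots the single-node cluster and prints the credentials
```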
[07:02.000 --> 07:10.000] In the OperatorHub page, you are going to find it already there, because it's distributed via the OperatorHub mechanism.
[07:10.000 --> 07:15.000] You are going to find the KubeVirt HyperConverged Cluster Operator.
[07:15.000 --> 07:24.000] As I mentioned, you don't need to configure the storage, so you are only supposed to install the operator and create a CR to trigger the operator,
[07:24.000 --> 07:27.000] but the storage is already pre-configured.
[07:27.000 --> 07:34.000] After a while, you will be asked to create the first CR for the operator in order to configure it.
[07:34.000 --> 07:40.000] Here you can fine-tune the configuration of OKD virtualization for your specific cluster.
[07:40.000 --> 07:48.000] In particular, we have a stanza called featureGates, where you can enable optional features.
[07:48.000 --> 07:51.000] Here we are going to talk about two features.
[07:51.000 --> 07:58.000] One of them is already enabled by default, which is enableCommonBootImageImport,
[07:58.000 --> 08:03.000] and the other is deployTektonTaskResources.
[08:03.000 --> 08:09.000] This one is not enabled by default, but if you want to do at home what we are going to see now,
[08:09.000 --> 08:11.000] you have to enable it.
[08:11.000 --> 08:15.000] You can enable it also at day two.
[08:15.000 --> 08:19.000] After a few minutes, the operator is installed.
[08:19.000 --> 08:29.000] It's going to also extend the UI with a new tab, where you can see what you can do with your virtual machines.
[08:29.000 --> 08:39.000] So now let's start talking about one of the... In the last year we introduced a lot of features,
[08:39.000 --> 08:41.000] but today I want to talk about two of them.
[08:41.000 --> 08:43.000] The first one is golden images.
[08:43.000 --> 08:47.000] Why do I think that it's interesting?
[08:47.000 --> 08:54.000] Nowadays, if you think of any public cloud environment on the public hyperscalers,
[08:54.000 --> 08:58.000] you are going to find... It's really easy to use them.
[08:58.000 --> 09:06.000] Why? Because you can find there images for your preferred operating systems, already available.
[09:06.000 --> 09:13.000] You just have to select one of them, click, and in a matter of minutes you are going to get a virtual machine
[09:13.000 --> 09:17.000] that is running; you don't need to upload your disk, you don't need to upload
[09:17.000 --> 09:21.000] eventually an ISO, start defining your virtual machine, and so on.
[09:21.000 --> 09:23.000] They are really convenient.
[09:23.000 --> 09:30.000] We want to have the same experience also in KubeVirt.
[09:30.000 --> 09:34.000] So we introduced this feature.
[09:34.000 --> 09:41.000] The whole idea is that we are going to have a container registry
[09:41.000 --> 09:48.000] which contains some images with the disk image for your virtual machines
[09:48.000 --> 09:55.000] that are going to be periodically refreshed to include new features of the operating systems
[09:55.000 --> 09:58.000] or security fixes.
[09:58.000 --> 10:03.000] Then we have this new object called the DataImportCron,
[10:03.000 --> 10:11.000] which is going to say that you want to periodically pull an image from that container registry
[10:11.000 --> 10:16.000] with a schedule, and import it in your cluster.
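For reference, a minimal sketch of that first CR with the two feature gates mentioned; the name and namespace follow the upstream HyperConverged Cluster Operator defaults, so verify the exact spec against your installed version:

```sh
# Sketch: HyperConverged CR enabling the two feature gates from the talk.
# Name/namespace follow upstream HCO defaults; verify on your version.
kubectl apply -f - <<'EOF'
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  featureGates:
    enableCommonBootImageImport: true   # golden images; on by default
    deployTektonTaskResources: true     # off by default; needed for the demo
EOF
```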
[10:16.000 --> 10:24.000] It's a mechanism where you can also configure the garbage collecting, the retention policy,
[10:24.000 --> 10:32.000] but at the end the idea is that you are going to find images for popular operating systems
[10:32.000 --> 10:36.000] out of the box, already available in your cluster.
[10:36.000 --> 10:40.000] They are going to be refreshed over time, so each time you...
[10:40.000 --> 10:46.000] Let's see it. This is the catalog for creating virtual machines
[10:46.000 --> 10:50.000] in the user interface of OKD virtualization.
[10:50.000 --> 10:52.000] We have a catalog with objects.
[10:52.000 --> 10:54.000] The whole feature is here.
[10:54.000 --> 10:58.000] As you can see, for popular operating systems
[10:58.000 --> 11:04.000] we already have a nice label saying that the source is already available.
[11:04.000 --> 11:09.000] It means that this new feature automatically imported for you
[11:09.000 --> 11:12.000] a golden image of that operating system,
[11:12.000 --> 11:17.000] and it's going to continuously keep it up to date.
[11:17.000 --> 11:21.000] The benefit is that when you want to start a virtual machine,
[11:21.000 --> 11:24.000] you will be able to do it with a single click.
[11:24.000 --> 11:28.000] You can customize the name, you can say in which namespace it's going to be executed,
[11:28.000 --> 11:30.000] but everything is already ready.
[11:30.000 --> 11:34.000] With one click you are going to start your virtual machine.
[11:34.000 --> 11:37.000] What is going to happen on the storage side?
[11:37.000 --> 11:42.000] We see that we have some existing persistent volume claims
[11:42.000 --> 11:45.000] for the disks that got automatically imported.
[11:45.000 --> 11:49.000] One of them is going to be cloned to be your virtual machine.
[11:49.000 --> 11:52.000] Depending on the CSI implementation,
[11:52.000 --> 11:56.000] this can even be completely offloaded to the CSI provider,
[11:56.000 --> 11:58.000] and it can be really fast.
[11:58.000 --> 12:02.000] After a few minutes, your virtual machine is there,
[12:02.000 --> 12:07.000] and as you can see, through cloud-init or similar,
[12:07.000 --> 12:11.000] it can also be customized to use a custom name and so on.
[12:15.000 --> 12:19.000] This is what our DataImportCron in particular looks like:
[12:19.000 --> 12:25.000] we are saying that we want to have a data source named fedora,
[12:25.000 --> 12:28.000] with a schedule in the usual cron syntax.
[12:28.000 --> 12:33.000] We want to periodically consume images that are available on Quay,
[12:33.000 --> 12:35.000] which is a container registry.
[12:35.000 --> 12:38.000] Here you can see the status, and the image is up to date,
[12:38.000 --> 12:42.000] meaning that the freshest version of Fedora
[12:42.000 --> 12:45.000] got automatically imported into your cluster.
[12:45.000 --> 12:48.000] The important thing is that if you look here,
[12:48.000 --> 12:52.000] you see that the target for the Fedora image is latest.
[12:52.000 --> 12:56.000] It means that when the next release of Fedora comes out,
[12:56.000 --> 12:59.000] it's going to be automatically available in your cluster.
[13:01.000 --> 13:05.000] Of course, we are providing images for Fedora, for CentOS,
[13:05.000 --> 13:09.000] but you can use the same mechanism and the same infrastructure
[13:09.000 --> 13:14.000] to import your own images into your cluster.
[13:14.000 --> 13:20.000] You can create custom data sources, sorry, custom DataImportCrons.
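A custom DataImportCron along the lines of the Fedora one just shown could look roughly like this; the namespace, schedule, retention, and size are illustrative values, not taken from the talk:

```sh
# Sketch of a custom DataImportCron, modeled on the Fedora example above.
# Namespace, schedule, retention, and size are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: fedora-image-cron
  namespace: kubevirt-os-images
spec:
  schedule: "0 */12 * * *"        # usual cron syntax: poll twice a day
  managedDataSource: fedora       # the DataSource the catalog entry uses
  garbageCollect: Outdated        # drop superseded imports...
  importsToKeep: 3                # ...keeping a short retention history
  template:
    spec:
      source:
        registry:                 # pull the containerdisk from Quay
          url: "docker://quay.io/containerdisks/fedora:latest"
      storage:
        resources:
          requests:
            storage: 5Gi
EOF
```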
[13:20.000 --> 13:25.000] Now, I want to talk about an additional really nice feature,
[13:25.000 --> 13:29.000] which is the KubeVirt Tekton Tasks pipelines.
[13:29.000 --> 13:35.000] In the previous section, we saw that we are able to import images
[13:35.000 --> 13:37.000] for popular operating systems,
[13:37.000 --> 13:40.000] but maybe there is some other operating system
[13:40.000 --> 13:44.000] that requires creating a virtual machine starting from an ISO
[13:44.000 --> 13:49.000] and installing it, so how can we automate that?
[13:49.000 --> 13:54.000] We cannot expect that the providers of all the operating systems
[13:54.000 --> 13:56.000] in the world are going to use this mechanism
[13:56.000 --> 13:58.000] and publish images for us.
[13:58.000 --> 14:03.000] We need a way to be able to automate the creation of the images
[14:03.000 --> 14:06.000] for other operating systems.
[14:06.000 --> 14:11.000] In this case, we are going to use a Tekton pipeline to automate this.
[14:11.000 --> 14:14.000] What is Tekton?
[14:14.000 --> 14:19.000] Tekton, also known as OpenShift Pipelines,
[14:19.000 --> 14:23.000] is a cloud-native continuous integration
[14:23.000 --> 14:26.000] and continuous delivery solution.
[14:26.000 --> 14:31.000] It's also cloud-native, and it's fully based on Kubernetes resources.
[14:31.000 --> 14:37.000] It uses what are called Tekton building blocks to automate tasks,
[14:37.000 --> 14:39.000] abstracting away the infrastructure.
[14:39.000 --> 14:44.000] In the Tekton world, we have tasks.
[14:44.000 --> 14:48.000] A task is something that defines a set of build steps,
[14:48.000 --> 14:50.000] like compiling code, running tests,
[14:50.000 --> 14:52.000] or building and deploying images.
[14:52.000 --> 14:56.000] In our case, we are now interested in deploying images,
[14:56.000 --> 15:00.000] but as you can imagine, you can combine it with other tasks.
[15:00.000 --> 15:02.000] Then you can define a pipeline.
[15:02.000 --> 15:06.000] A pipeline is a set of orchestrated tasks,
[15:06.000 --> 15:09.000] and then you can use pipeline resources,
[15:09.000 --> 15:12.000] which are a set of inputs that are going to be injected
[15:12.000 --> 15:15.000] into an execution of your pipeline,
[15:15.000 --> 15:18.000] which is a pipeline run.
[15:18.000 --> 15:23.000] In the KubeVirt Tekton Tasks operator,
[15:23.000 --> 15:27.000] we introduced some specific tasks
[15:27.000 --> 15:33.000] to create, update, and manage the specific KubeVirt resources,
[15:33.000 --> 15:36.000] so virtual machines, data volumes, data sources,
[15:36.000 --> 15:38.000] templates, and so on.
[15:38.000 --> 15:40.000] You are able to populate disk images,
[15:40.000 --> 15:44.000] even with libguestfs, to inject files and so on.
[15:44.000 --> 15:49.000] You are able to execute scripts, Bash or PowerShell,
[15:49.000 --> 15:51.000] and whatever.
[15:51.000 --> 15:54.000] We have a set of tasks; I don't want to go through all of them,
[15:54.000 --> 15:57.000] but some are already available.
[15:57.000 --> 15:59.000] We are extending them.
[15:59.000 --> 16:03.000] We have an operator that is going to populate the tasks
[16:03.000 --> 16:05.000] for you in your cluster.
[16:05.000 --> 16:09.000] Now we want to see an example pipeline.
[16:09.000 --> 16:13.000] We have two pipelines that are going to be injected
[16:13.000 --> 16:15.000] by the Tekton Tasks operator.
[16:15.000 --> 16:18.000] The first one is called the Windows 10 installer.
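Before looking at that pipeline, here is a tiny generic sketch of the building blocks just described: a Task (a set of steps), a Pipeline that orchestrates it, and a run with concrete inputs. All the names here are made up for illustration; this is not one of the KubeVirt tasks:

```sh
# Generic illustration of the Tekton shapes described above;
# every name is invented, only the structure matters.
kubectl apply -f - <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello
spec:
  params:
    - name: who
      type: string
  steps:                      # a task is a set of steps
    - name: hello
      image: registry.access.redhat.com/ubi9/ubi-minimal
      script: |
        echo "hello $(params.who)"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  params:
    - name: who
      type: string
  tasks:                      # a pipeline orchestrates tasks
    - name: greet
      taskRef:
        name: say-hello
      params:
        - name: who
          value: "$(params.who)"
EOF

# A PipelineRun is one execution of the pipeline with concrete inputs:
tkn pipeline start hello-pipeline --param who=FOSDEM --showlog
```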
[16:18.000 --> 16:22.000] It's going to populate a golden image for Windows 10,
[16:22.000 --> 16:26.000] according to some inputs that you are going to provide.
[16:26.000 --> 16:29.000] The idea is that it's going to copy a template,
[16:29.000 --> 16:31.000] it's going to modify the template,
[16:31.000 --> 16:37.000] and it's going to start installing Windows from the ISO,
[16:37.000 --> 16:41.000] and it's going to create a virtual machine for you.
[16:41.000 --> 17:00.000] We can see a small demo.
[17:00.000 --> 17:03.000] Here is the pipeline.
[17:03.000 --> 17:06.000] We have to provide a few inputs in particular.
[17:06.000 --> 17:15.000] We have to provide the ISO of the Windows that we want to install.
[17:15.000 --> 17:17.000] It's the first...
[17:17.000 --> 17:22.000] Yes, there, perfect.
[17:22.000 --> 17:24.000] Here we see the pipeline.
[17:24.000 --> 17:26.000] The pipeline is going to copy the template.
[17:26.000 --> 17:28.000] It's going to modify it.
[17:28.000 --> 17:30.000] It's going to create a first VM that is used
[17:30.000 --> 17:32.000] to install Windows,
[17:32.000 --> 17:43.000] and then create the Windows image from that VM.
[17:43.000 --> 17:45.000] Here it is...
[17:45.000 --> 17:47.000] Now we are simply going to see what is happening,
[17:47.000 --> 17:49.000] but everything is fully automated.
[17:49.000 --> 17:51.000] You don't really have to watch it.
[17:51.000 --> 17:55.000] But if you like, you can also watch it live.
[17:55.000 --> 17:57.000] Here is our virtual machine,
[17:57.000 --> 17:59.000] and as you can see, it's starting, it's booting,
[17:59.000 --> 18:04.000] and it's going to install Windows.
[18:04.000 --> 18:07.000] We also have a second pipeline.
[18:07.000 --> 18:09.000] I have a demo for that.
[18:09.000 --> 18:11.000] It's called Windows Customize.
[18:11.000 --> 18:14.000] Probably we are a bit over time.
[18:14.000 --> 18:16.000] The idea of the second pipeline
[18:16.000 --> 18:20.000] is that you can customize this image,
[18:20.000 --> 18:23.000] running additional steps,
[18:23.000 --> 18:26.000] like installing the software that you need,
[18:26.000 --> 18:29.000] modifying the image, so that it's going to be
[18:29.000 --> 18:31.000] one of the golden images
[18:31.000 --> 18:45.000] that you are going to provide in your cluster.
[18:45.000 --> 18:53.000] Let's move back.
[18:53.000 --> 19:05.000] This is the second one.
[19:05.000 --> 19:07.000] I'm going to skip the demo,
[19:07.000 --> 19:10.000] but if you have any questions, please reach me.
[19:10.000 --> 19:13.000] The idea is that we can use this pipeline,
[19:13.000 --> 19:21.000] in this case, to install SQL Server and so on.
[19:21.000 --> 19:24.000] What's next?
[19:24.000 --> 19:27.000] OKD is going to change.
[19:27.000 --> 19:29.000] Now, at the beginning,
[19:29.000 --> 19:32.000] I told you that, as of now, OKD
[19:32.000 --> 19:35.000] is based on Fedora CoreOS.
[19:35.000 --> 19:37.000] We are going to have a big change there,
[19:37.000 --> 19:41.000] which is called OKD Streams,
[19:41.000 --> 19:43.000] which means that the nodes of OKD
[19:43.000 --> 19:46.000] are going to be based on CentOS Stream.
[19:46.000 --> 19:50.000] So it's going to be a real upstream
[19:50.000 --> 19:52.000] for OpenShift Container Platform,
[19:52.000 --> 19:57.000] where the nodes are based on Red Hat CoreOS.
[19:57.000 --> 20:01.000] CentOS Stream is the upstream of Red Hat CoreOS.
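Going back to the installer pipeline for a moment: a run of it can also be kicked off declaratively. This is only a sketch; the parameter name used here is an assumption, so inspect the pipeline's declared params on your cluster for the real contract:

```sh
# Sketch: starting the windows10-installer pipeline demoed above.
# The parameter name and the URL are assumptions (placeholders);
# check the pipeline's spec.params for the actual contract.
kubectl create -f - <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: windows10-installer-run-
spec:
  pipelineRef:
    name: windows10-installer
  params:
    - name: winImageDownloadURL              # hypothetical param name
      value: "https://example.com/win10.iso" # placeholder ISO location
EOF

# Everything is fully automated; follow it from the logs instead of the console:
tkn pipelinerun logs --last -f
```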
[20:01.000 --> 20:05.000] Everything on CentOS Stream is going to be built
[20:05.000 --> 20:07.000] as well using Tekton pipelines,
[20:07.000 --> 20:11.000] just because we really believe in that project.
[20:11.000 --> 20:13.000] On the OKD virtualization side,
[20:13.000 --> 20:15.000] we are going to introduce many features.
[20:15.000 --> 20:18.000] We are going to provide more pipelines, more automation.
[20:18.000 --> 20:20.000] We are working on ARM support.
[20:20.000 --> 20:23.000] We are working on better backup and restore APIs.
[20:23.000 --> 20:28.000] We are working to reduce the privileges of the pods
[20:28.000 --> 20:31.000] that really execute the virtual machines.
[20:31.000 --> 20:36.000] We are working on the real-time area.
[20:36.000 --> 20:41.000] And here are a few links if you want to reach us.
[20:41.000 --> 20:42.000] Thank you.
[20:42.000 --> 20:50.000] APPLAUSE
[20:50.000 --> 20:52.000] Questions?
[20:52.000 --> 20:54.000] Sure.
[20:54.000 --> 20:58.000] There's already OpenStack and Kata Containers.
[20:58.000 --> 21:04.000] And I wonder how OKD compares with, integrates with,
[21:04.000 --> 21:09.000] or overlaps with these two projects?
[21:09.000 --> 21:13.000] So the question is, we already have other projects,
[21:13.000 --> 21:16.000] like OpenStack and Kata Containers,
[21:16.000 --> 21:20.000] and I want to understand how KubeVirt compares to them.
[21:20.000 --> 21:24.000] So the first idea is that in KubeVirt,
[21:24.000 --> 21:28.000] we are managing virtual machines
[21:28.000 --> 21:33.000] as first-class citizens on Kubernetes.
[21:33.000 --> 21:37.000] You can define a virtual machine
[21:37.000 --> 21:39.000] with a custom resource,
[21:39.000 --> 21:42.000] because Kubernetes provides a mechanism,
[21:42.000 --> 21:44.000] called custom resource definitions,
[21:44.000 --> 21:47.000] to provide custom new APIs.
[21:47.000 --> 21:49.000] You can use them to define a virtual machine
[21:49.000 --> 21:52.000] as a really native object that is going to be scheduled
[21:52.000 --> 21:54.000] by the Kubernetes orchestrator
[21:54.000 --> 21:57.000] alongside pods and other resources.
[21:57.000 --> 22:00.000] The main benefit is that you can expose...
[22:00.000 --> 22:03.000] You can use the same storage that you are using for your pods.
[22:03.000 --> 22:07.000] You can have your pods talking at the network level
[22:07.000 --> 22:09.000] with virtual machines,
[22:09.000 --> 22:12.000] without the need to configure tunnels and so on,
[22:12.000 --> 22:16.000] as when virtual machines are running on a lower stack,
[22:16.000 --> 22:20.000] like if you have OpenStack under Kubernetes.
[22:20.000 --> 22:23.000] A virtual machine is going to be a first-class citizen
[22:23.000 --> 22:26.000] of this infrastructure.
[22:26.000 --> 22:29.000] So it integrates with OpenStack?
[22:29.000 --> 22:34.000] So, not really integrated.
[22:34.000 --> 22:37.000] If we go here...
[22:42.000 --> 22:44.000] Okay.
[22:44.000 --> 22:48.000] Here the idea is that we have something here,
[22:48.000 --> 22:51.000] which in our case is probably going to be bare metal nodes,
[22:51.000 --> 22:53.000] but it can also be another hyperscaler,
[22:53.000 --> 22:55.000] but in that case you need nested virtualization,
[22:55.000 --> 22:58.000] which is not always the best idea.
[22:58.000 --> 23:01.000] Then you have Linux host nodes,
[23:01.000 --> 23:03.000] now with Fedora CoreOS,
[23:03.000 --> 23:05.000] but in the future with CentOS Stream,
[23:05.000 --> 23:08.000] and here you have the Kubernetes stack.
[23:08.000 --> 23:13.000] Kubernetes is going to schedule pods as containers on those nodes,
[23:13.000 --> 23:15.000] and the virtual machines there as well.
[23:15.000 --> 23:22.000] So you don't really care about what you have at the lowest level.
[23:22.000 --> 23:28.000] Yeah, you showed how to install Windows and prepare images from pipelines.
[23:28.000 --> 23:30.000] Isn't it easier to use a simple Dockerfile?
[23:30.000 --> 23:33.000] Is it possible to use just a Dockerfile
[23:33.000 --> 23:38.000] and do that by just building standard Docker images?
[23:38.000 --> 23:39.000] Okay.
[23:39.000 --> 23:41.000] So the question is,
[23:41.000 --> 23:48.000] you showed how to use a pipeline in order to prepare an image for Windows.
[23:48.000 --> 23:53.000] Isn't it simpler to directly use a Dockerfile to create a container?
[23:53.000 --> 23:55.000] So in theory it is,
[23:55.000 --> 23:58.000] but you have to start from an already running virtual machine
[23:58.000 --> 23:59.000] and take the disk,
[23:59.000 --> 24:07.000] because Microsoft is providing an ISO with a tool that you have to execute.
[24:07.000 --> 24:12.000] But you can use the same virt tools and libguestfs tools
[24:12.000 --> 24:15.000] to install Windows inside the Docker build.
[24:15.000 --> 24:17.000] But you have to execute it.
[24:17.000 --> 24:19.000] You have to execute the vendor installer.
[24:19.000 --> 24:22.000] So you have to, at the end,
[24:22.000 --> 24:27.000] you are manually running something that is going to install Windows.
[24:27.000 --> 24:30.000] And at the end, you need to take a snapshot,
[24:30.000 --> 24:33.000] which is going to be your image.
[24:33.000 --> 24:34.000] You want to automate it.
[24:34.000 --> 24:40.000] You want to continue to execute it in order to fetch updates.
[24:40.000 --> 24:42.000] How did we solve this?
[24:42.000 --> 24:45.000] We automated it with a pipeline, because you have a set of tasks,
[24:45.000 --> 24:59.000] and so a pipeline is the smartest way to execute and monitor them.
[24:59.000 --> 25:00.000] Sure?
[25:00.000 --> 25:01.000] Last one.
[25:01.000 --> 25:07.000] Which formats are used for the golden image disks?
[25:07.000 --> 25:17.000] Which formats are used for the operating system disks?
[25:17.000 --> 25:19.000] Do we support other formats?
[25:19.000 --> 25:20.000] No?
[25:20.000 --> 25:21.000] No.
[25:21.000 --> 25:23.000] So just the one.
[25:23.000 --> 25:26.000] Thank you.
[25:26.000 --> 25:27.000] Time is up.
[25:27.000 --> 25:29.000] But if you want, please reach me outside.
[25:29.000 --> 25:39.000] Thank you.
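To ground the first answer's point about pods talking natively to virtual machines: since every running VM is backed by a regular virt-launcher pod, ordinary Kubernetes networking applies. A hedged sketch, reusing the illustrative demo-vm name from earlier; the Service name and port are also illustrative:

```sh
# Sketch: plain Kubernetes networking in front of a KubeVirt VM.
# virtctl can create a Service for a VM; names/ports are illustrative.
virtctl expose vm demo-vm --name demo-vm-ssh --port 22 --type ClusterIP

# Any pod in the cluster can now reach the VM through the Service,
# with no tunnels or separate virtualization network to configure:
kubectl run -it --rm probe --image=registry.access.redhat.com/ubi9/ubi-minimal \
  -- bash -c 'echo > /dev/tcp/demo-vm-ssh/22 && echo reachable'
```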
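And on the Dockerfile question: once a disk image already exists, publishing it really is just a standard container build; that is how the containerdisks consumed by the DataImportCron shown earlier are produced. A minimal sketch with illustrative image and registry names; the hard part the pipeline solves is producing the qcow2 in the first place (booting the Microsoft installer and snapshotting the disk):

```sh
# Sketch: packaging an existing disk image as a KubeVirt containerdisk
# with a plain container build; image and registry names are illustrative.
cat > Containerfile <<'EOF'
FROM scratch
ADD --chown=107:107 win10.qcow2 /disk/
EOF

podman build -t quay.io/example/win10-containerdisk:latest .
podman push quay.io/example/win10-containerdisk:latest
```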