All right. Thanks for coming. So here is what I have for this talk: basically, I'm going to describe what my OpenStack Cluster Installer (OCI) does.

First, a bit about myself. That's me playing with my Atari computer. I'm a Debian developer, and have been for nearly 15 years. I maintain not only the whole of OpenStack in Debian since it has existed, but also things like Ceph, Open vSwitch, RabbitMQ, and many more, which is probably a bit too much. I've been working in hosting since the beginning of my career, and at Infomaniak doing cloud computing for the last six years.

If you don't know what OpenStack is, maybe you've been living in a cave. We used to have this schematic, which is kind of fun because it has wires going all around. It doesn't mean much, because when you really know about it, it's a bit simpler than that. But OpenStack has a lot of projects, and it's simply not reasonable to do everything by hand; you have to use some kind of automation at some point. That's what OCI does.

So you boot up your computers over PXE; OpenStack Cluster Installer provides that. And from bare metal to a fully provisioned OpenStack cluster, every single artifact is taken from the Debian archive. The only thing you need is a Debian mirror, and that's about it. Even the Puppet manifests that OCI uses are packaged.

It's a fairly mature solution, because we've been using it for five years. All the pictures you will see are actual pictures from our data centers, by the way. It supports many types of hardware; I recently added ARM support because we're putting that into production as well. We use many brands, so it recognizes lots of Dell, Gigabyte, HPE, Lenovo and Supermicro machines.

What it does is fully hands-free automation. It means you plug in the server, press the power button, and it can do everything for you, including IPMI setup, hardware profiling and RAID setup. It discovers the server's location so that it can put your servers in the correct availability zones, that type of thing. At the end of the setup, everything is fully SSL-encrypted, so even though you are supposed to set this up on a private network, you still have SSL everywhere.

In OCI there are many roles: controller, compute, network, iSCSI volume, Ceph monitor, Ceph OSD, Swift proxy, Swift store, DNS, and maybe a few more I forgot. For every single computer, you can decide what type of node it's going to be, and you define that in a hardware profile so that the process of enrolling that node is automated.

We've been using this software in production for five and a half years, so it's not just for fun that I'm doing it. We have real customers and we are making millions out of it. We have decently large Swift clusters, probably eight clusters in total with something like 6,000 hard drives running, and it also powers our public cloud. So it's really a production-ready system that I have uploaded to Debian.

As I said earlier, the overall workflow is that you PXE boot your servers. It also handles Secure Boot, meaning that it uses shim, then GRUB, and then your live system downloads the SquashFS image over the network; that's how live-build works. Once the server has booted up, it reports all the hardware capabilities of your server: how many hard drives, their sizes, the type of CPU, all that kind of information. That hardware discovery is kind of simple, it's a simple shell script.
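To give a rough idea of what such a discovery step can look like, here is a minimal sketch of a hardware-report script using only standard Linux tools (dmidecode, nproc, lsblk, ip). It is illustrative only, not OCI's actual script, which reports the data back to the provisioning server instead of printing it.

    #!/bin/sh
    # Minimal hardware report, printed as simple key/value output.
    # Illustrative sketch only; OCI's real script sends this to the provisioning server.
    echo "product_name: $(dmidecode -s system-product-name)"
    echo "serial:       $(dmidecode -s system-serial-number)"
    echo "cpu_count:    $(nproc)"
    echo "memory_kb:    $(awk '/MemTotal/ {print $2}' /proc/meminfo)"
    echo "disks:"
    lsblk -d -n -o NAME,SIZE,ROTA,TYPE | awk '$4 == "disk" {print "  - " $1 " " $2}'
    echo "nics:"
    ip -o link show | awk -F': ' '$2 != "lo" {print "  - " $2}'

In OCI, the equivalent report is stored in the database and later matched against the hardware profiles described below, which is what decides the machine's role.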
Like everything in OpenStack Cluster Installer, it's made in a way so that it's easily hackable and everybody can understand it. It's Bash scripts most of the time, plus some Puppet manifests. There's some PHP, but you mostly don't need to touch it much.

Once the machine is enrolled, we know its role and we know its IP address, because OCI manages the IP address assignment. Then OCI produces the Puppet external node classifier (ENC) output. That's a big YAML document that gives all the parameters for the node, so that when the machine boots up for the first time in its operating system, the system knows what to set up on that machine. It's kind of dynamic, because the ENC YAML is generated from the database, which you can interact with through the CLI. So you can modify the database to, I don't know, add a GPU, and then on the next Puppet run it's going to set up your GPU, or anything like that.

We also provide many types of networking options. We used to do L2 connections, so you have a lot of ARP, and then my network guys started to complain about it. So we implemented BGP-to-the-host, so that you have L3-only connectivity between the hosts. I'm not going to describe BGP-to-the-host in a lot of detail, but basically all the links that you see there are BGP sessions between the hosts at the bottom and the switches on top, which gives you redundancy, because every device you see is connected to two other devices, and then you can use multiple routes. The way it's done is that it uses link-local IPv6 connectivity between the two devices, meaning that you have absolutely no ARP in your whole rack: the only thing on each L2 segment is link-local IPv6.

It's probably a little bit small, but what you can see here is the type of machine that you get when you run dmidecode, that's the product name, and here are the switch host names: data center, row, rack, location, and compute aggregates. When the servers boot up, they see the switch names, and from that I can deduce where they are physically in the data center. Thanks to that, we can classify them into availability zones.

That as well is probably a little bit small, but this is the way we classify hardware. Here you give a name, whatever you want, and the role that you want for the machine. The product name can be a list of several, so if you have many types of compute nodes, that works too. Then the amount of RAM, and the description of the RAID layout you want; it also supports software RAID if you want. That's what the system sets up automatically, as if you had typed the ocicli command with those parameters; it does that for you. And then this enrolls compute nodes into compute aggregates and availability zones. Once you define that for all the roles you have in your cluster, that's how it does the magic of: OK, a server has booted, I'll put it with this role in the cluster, in that availability zone, install the operating system, and do everything without touching the keyboard.

There are other features that are fancy, so this is maybe going to be a bit of a feature catalog, because it does a lot of things that we actually needed in production.
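For reference, a Puppet external node classifier simply prints YAML describing which classes and parameters apply to a node. Below is a minimal sketch of what such output can look like; the class and parameter names are made up for illustration and are not OCI's actual ones.

    # Hypothetical ENC output for one compute node (illustrative, not OCI's real classes/parameters).
    ---
    environment: production
    classes:
      oci::role::compute: {}
    parameters:
      cluster_name: cloud1
      availability_zone: az1
      ntp_servers:
        - ntp.example.com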
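BGP-to-the-host over link-local IPv6 is what FRR calls "BGP unnumbered". As a rough idea of what the host side can look like, here is an FRR configuration sketch; the interface names, the ASN, and the choice of FRR itself are my assumptions, not necessarily what OCI deploys.

    ! /etc/frr/frr.conf -- sketch of BGP unnumbered towards two top-of-rack switches
    router bgp 65042
     bgp router-id 192.0.2.42
     no bgp ebgp-requires-policy
     ! Interface-based peering: the session runs over the link-local IPv6 address,
     ! so no ARP and no per-link IPv4 addressing is needed.
     neighbor uplink peer-group
     neighbor uplink remote-as external
     neighbor eno1 interface peer-group uplink
     neighbor eno2 interface peer-group uplink
     address-family ipv4 unicast
      redistribute connected
     exit-address-family

Because each host peers with two switches, losing one uplink or one switch just removes one of the available routes, which is the redundancy mentioned above.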
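The talk doesn't say exactly how the servers "see" the switch names; one common way to do it, and this is an assumption on my part, is LLDP, which together with dmidecode would give both pieces of information shown on the slide.

    # Product name of the machine (what the hardware profile matches on):
    dmidecode -s system-product-name
    # Names and ports of the neighbouring switches, e.g. "dc1-row2-rack3-sw1",
    # assuming LLDP is enabled on the switches and lldpd is running on the host:
    lldpctl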
So you can set up a Swift cluster with OCI, without compute, then define a compute cluster with another OCI instance, and then connect the two. If you do that, Glance and Cinder backup will use that other cluster. The point of doing it is that most of the time we set up our Swift clusters in a cross-data-center way, with one availability zone in each data center, and that's not really what you want to do with a compute cloud, right? You want everything in the same data center, with VMs close to each other, and availability zones defined per rack, for example. So that's the advantage of doing it this way.

We also support GPUs. At Infomaniak we have a huge demand from our customers for GPUs. You can define as many GPUs as you want per compute node, so if you have four, six, eight, that's fine. Here's a picture of some NVIDIA A100 GPUs. To activate GPU support, you enter the GPU name and the PCI vendor and device IDs; I believe the ones you see on the screen are for T4 GPUs. Then you define a Nova flavor that uses the name you defined above. You can have multiple types of GPUs in one server, that's also supported. The only thing is that once you've activated GPUs, you need to reboot the compute node so that it knows it has a GPU and blacklists the GPU's host kernel module; otherwise you won't be able to use it with virtualization.

There's also support for CPU models. Most of the time, in one cluster, you will define one CPU model for the whole cluster; let's say you have EPYC CPUs from AMD, you will do that. But if you have a mix of CPU types, let's say AMD and Intel, then you can also define it per compute node.

There's also the possibility to do a hyper-converged model, meaning that basically you put your Ceph storage on the compute nodes. I designed it, we tried it, and we were not very happy with its performance. So if you don't have a lot of money and it's not really for customer-facing stuff, you can do it, it works, but I do not recommend it at large scale. If you do that, you can also provision things like Neutron dynamic routing agents. Yes, we also support BGP-announced IPs for your VMs, so that's why this is there.

At Infomaniak we also provide a public cloud, therefore we also support telemetry. Telemetry is the name in OpenStack for rating all the resources. That's not the actual billing, that's counting resources: let's say you've used that type of flavor for two hours, then it's that price. The billing with actual PDFs and such is up to you. With telemetry, people can also do auto-scaling, that's how it works with Heat. So you can basically rate any type of resource; that's more on the OpenStack side than OCI, though we provide all the pieces you need to implement it yourself.

Telemetry is a hugely resource-demanding thing, and I made a small calculation so that you understand that. If you have 6,000 VMs and 20 metrics every five minutes, that's 6,000 × 20 / 300 seconds = 400 metrics per second to process, and that's only the VMs. In a production system you won't bill only VMs but many other things, like Glance images, Swift objects, public IPs, load balancers, and some of that is polled. All of that takes a lot of resources, and therefore we wrote dedicated roles for it. There are the messaging nodes, where the CloudKitty processor is hosted, and there's the Gnocchi API; Gnocchi is the thing that stores the time series for the resources.
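As a rough idea of the moving parts behind GPU passthrough, here is a generic Nova PCI-passthrough sketch under my own assumptions, not OCI's actual configuration: the file names and the alias name are made up, 10de is NVIDIA's PCI vendor ID, and the T4 device ID 1eb8 is from memory and should be checked against your hardware.

    # 1. Keep the host driver off the GPU so it can be handed to the guest
    #    (hypothetical file name; which module to blacklist depends on the card).
    cat > /etc/modprobe.d/blacklist-gpu.conf <<'EOF'
    blacklist nouveau
    EOF

    # 2. Tell Nova which PCI devices may be passed through and give them an alias
    #    (sketch of the [pci] section in nova.conf on the compute node).
    cat >> /etc/nova/nova.conf <<'EOF'
    [pci]
    device_spec = { "vendor_id": "10de", "product_id": "1eb8" }
    alias = { "vendor_id": "10de", "product_id": "1eb8", "device_type": "type-PCI", "name": "t4" }
    EOF

    # 3. Create a flavor that requests one such GPU.
    openstack flavor create --ram 16384 --vcpus 8 --disk 80 \
        --property "pci_passthrough:alias"="t4:1" gpu.t4.large

The blacklist step is why the compute node has to be rebooted after enabling GPU support: the host kernel must not grab the card before it can be handed to a VM.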
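For the CPU models, what ends up being set is Nova's libvirt CPU mode; a per-compute override can look like the following generic nova.conf sketch, which is not necessarily how OCI writes it.

    # /etc/nova/nova.conf on one compute node (sketch)
    [libvirt]
    # Expose a named CPU model to the guests instead of the hypervisor default,
    # so live migration stays possible between hosts with compatible CPUs.
    cpu_mode = custom
    cpu_models = EPYC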
These messaging nodes we provision with a lot of cores: we have three of them with 128 cores each, and they handle about 5,000 VMs in one cluster, just to give you a rough idea. They get a dedicated RabbitMQ for the notifications, a dedicated Galera cluster so that it doesn't interfere with your control plane, and a dedicated Ceph as well. So if you decide to go with telemetry, you can set up that special Ceph cluster and these three messaging nodes, but you don't have to. You can also not set them up, and then it's going to use the three control nodes. Everything is a bit like that in OCI: let's say you add some compute nodes, then it provisions Nova on the control plane, and if you provision some messaging nodes, then it moves the Gnocchi API and CloudKitty off the control plane. That's the rough idea.

If you want to test the result, you can try our Infomaniak public cloud. We're cheaper than everybody you see over here, and we give you a 300 USD trial for two months.

In the near future, we expect to implement more services, like Magnum. If we do that, we are going to do it with the Cluster API driver from Vexxhost. Otherwise, we may implement Kubernetes-as-a-service not on top of OpenStack, with our own solution; we are still working on it, so I can't really tell you. Manila, we are not going to implement until the virtiofs driver is done. I'm not going to go into details, but the generic driver and the CephFS driver, we are not happy with them; we don't think they are production-ready. And Chauvin is maybe for later.

I was scared of having too many slides, so I went a bit fast and I have some time remaining, so I may show you a little bit how it works. There's no actual demo, but this shows you how to interact with OCI. I started working on a web interface, then quickly realized it was crap, and now I work every day with the CLI. You see there, it created a cluster. You can set many options on the cluster, like the time servers, and I don't know, probably 40 options, and on every machine as well. You can do machine sets. It runs in a loop. What you see here is a virtualized environment: I have a machine with half a terabyte of RAM where I spawn 38 VMs just for OpenStack and nine more for a virtual switch environment, so you can reproduce this at home. The virtual switches can do the BGP, so it's fun, because from the host you can traceroute to your VMs running on the OpenStack workload. OK, that other one: there you see me adding a few machines by hand, so machine-add, three controllers in their zones, and so on.

So I'm open for questions, there are a few minutes remaining. Question from the room: I haven't been in the game for quite a while, so I don't know the current state, but one of the problems with OpenStack was upgrading. How do you do upgrades between two releases?

Yeah, so how do I do upgrades? With a simple shell script. When you see ocicli, it has bash completion everywhere, and I wrote hapc, an HAProxy command that controls the backends, which ones you disable or enable. It uses that to disable the APIs when it's upgrading one node. I don't know how to explain it, but despite what everybody says about upgrades, I wrote that script and it wasn't that hard, and when you read it, it's kind of easy to understand how it works. First you upgrade the Puppet machine, which upgrades all the Puppet manifests.
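To give a flavour of the CLI interaction being shown, here is the kind of session it looks like. Treat the sub-commands and argument order as approximate, reconstructed from the talk rather than copied from the ocicli documentation; check "ocicli --help" for the real syntax.

    # Create a cluster, then enroll discovered machines into it by role and zone
    # (command names/arguments are approximate).
    ocicli cluster-create cloud1 example.com
    ocicli machine-list
    ocicli machine-add C1 cloud1 controller zone-1
    ocicli machine-add C2 cloud1 controller zone-2
    ocicli machine-add C3 cloud1 controller zone-3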
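The HAProxy part is standard: HAProxy exposes an admin socket through which a backend server can be drained and re-enabled, which is what a wrapper like hapc automates. A generic sketch follows; the backend and server names are made up, and the socket path and admin level depend on your haproxy.cfg ("stats socket ... level admin").

    # Take one controller's API out of the load balancer before upgrading it...
    echo "disable server be_nova_api/controller1" | socat stdio /var/run/haproxy/admin.sock
    # ...upgrade the node, then put it back:
    echo "enable server be_nova_api/controller1" | socat stdio /var/run/haproxy/admin.sock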
So it disables Puppet everywhere, upgrades that machine, and then it basically runs the upgrade. Not really a full upgrade, in fact, because I've computed all the OpenStack dependencies, so those are the only packages it's going to upgrade; it's not going to upgrade your Open vSwitch when you do that, for example. And then, yeah, it just works. I've tested the upgrade from Victoria to Bobcat with Tempest; I'm not completely finished, but it will be done soon. If you don't know Tempest, it's the functional testing suite of OpenStack. Any other question? All right.
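Put together, the upgrade flow described here can be imagined roughly like the sketch below. It is a simplification under my own assumptions (the package list, host names and exact commands are made up), not OCI's actual upgrade script.

    #!/bin/sh
    # Very rough sketch of a rolling control-plane upgrade, one node at a time.
    # $OPENSTACK_PKGS stands for the pre-computed list of OpenStack packages,
    # the point being that only those are upgraded, not e.g. Open vSwitch.
    for node in controller1 controller2 controller3; do
        echo "disable server be_nova_api/$node" | socat stdio /var/run/haproxy/admin.sock
        ssh "$node" "puppet agent --disable 'upgrade in progress'"
        ssh "$node" "apt-get update && apt-get install -y $OPENSTACK_PKGS"
        ssh "$node" "puppet agent --enable && puppet agent --test"
        echo "enable server be_nova_api/$node" | socat stdio /var/run/haproxy/admin.sock
    done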