Hello. So, yeah, I'm Stéphane Graber. I'm the project leader for Linux Containers. And I'm just switching to the right screen here. There we go. And I'm one of the Incus maintainers. I was also the former project leader for LXD when I was working at Canonical.

So, I'm going to go through a tiny bit of history first and then get into what Incus is and what you can do with it. The LXC project itself was created way back in August 2008 by IBM. That's the original Linux containers runtime, and it has been used pretty much everywhere, including in the original version of Docker and some other places at that point. The Linux Containers organization itself was created back in September 2014, and the LXD project was announced by Canonical in November 2014. LXD kept going for a while, until a lot of things happened in 2023.

On July 4th, Canonical announced that LXD was going to be moved out of the Linux Containers community project and into the Canonical organization itself. The next day we noticed that all non-Canonical maintainers had lost all privileges on the repository, so only Canonical employees were left maintaining it at that point. A few days later I left Canonical, so that happened. Then on August 1st, Aleksa Sarai, who was the openSUSE packager for LXD, decided to go ahead and fork LXD as a new community project called Incus. A few days after that we made the decision to include Incus as part of the Linux Containers project, effectively giving it the spot that LXD once had. Incus 0.1 was released on October 7th and we've had another four releases since then. Lastly, just as a bit of an early Christmas present, Canonical decided to re-license LXD to AGPL, as well as require everyone to sign a CLA to contribute to LXD. The consequence of that for us, as a project that is not party to that CLA, is that we cannot look at anything happening in LXD anymore. We can't take any changes from LXD anymore, so Incus is effectively a hard fork at this point.

So, that's the history. Now, back to what this thing is actually all about. Incus is a system container and virtual machine manager. It's image-based, so you've got a pretty large selection of distros; there's a whole slide about that a bit later. It effectively lets you, cloud-like, immediately create instances from any of those images. The system container part means that we run full Linux distributions. We don't run application containers, we don't run OCI right now, we don't do any other kind of stuff. The containers are really a full Linux system that you then install packages into in the normal way. Everything is built around a REST API with a pretty decent CLI tool. That REST API also has other clients; we'll go through those in a tiny bit. Incus has great support for resource limits, so you can pretty easily limit CPU, memory, disk, network I/O, whatever you want. It's also got extremely good device pass-through to both containers and virtual machines, so you can do things like passing in GPUs, attaching virtual TPMs, sharing your home directory, or doing a whole bunch of other kinds of sharing and passing devices into containers and virtual machines. It also supports all of the expected stuff: it does snapshots, it does backups, it's got a variety of networking options, a bunch of storage options, all of that.
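To give a rough idea of what that resource-limit and device pass-through side looks like on the command line, here is a small sketch; the instance name "c1" and the host path are just placeholders:

  # Limit an instance to 2 CPUs and 4GiB of RAM
  incus config set c1 limits.cpu=2 limits.memory=4GiB
  # Pass a GPU through to it
  incus config device add c1 gpu0 gpu
  # Share a host directory into it as a disk device
  incus config device add c1 myhome disk source=/home/user path=/mnt/home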
It also has projects as a way to group a bunch of instances together. For authentication, it supports OpenID Connect, which is kind of the go-to standard these days, and for authorization we support OpenFGA, the open fine-grained authorization project. That gets you, as the name implies, pretty fine-grained access control. There are also a number of web interfaces you can use on top of that. So here you've got one of them, which is actually the LXD web interface; it runs perfectly fine on top of Incus. And yeah, that's one of the options there.

As far as what you can run, well, there are a few options you can see up there. Incus is indeed all based around images. We build images for pretty much all of the major Linux distros and even some of the not-so-major ones, and we build everything on both x86 and ARM. The vast majority of them are available as both containers and VMs; we've got a number of them that are just for containers. And then, because we do normal VMs, you can also run Windows, FreeBSD, whatever else you want inside of a virtual machine.

All right. So let's do a first quick demo of the standalone Incus experience. If I switch over there, the first thing we'll do is just launch an Arch Linux container. There we go. So we've got that. Then let's do another one; let's do Alpine, the Edge release. So just do that. And this is obviously at risk of blowing up at any point because I'm on the conference Wi-Fi. For Ubuntu, I think I was planning on doing a VM, so let's do a VM instead of a container. You just tell it you want a VM instead; that's pretty much all there is to it. And with that running, we can see that the two containers already started and got their IPs and everything. The VM is still booting up, so it hasn't got its IP yet. It does now. If you want to get into any of them, you can just exec any command. You can get a shell into Alpine, you can get a full bash inside of Arch, and you can do the exact same thing with the virtual machine. You don't need to get a console and log in and everything; there's an agent automatically in our virtual machines, so you get to just immediately access them as if they were containers. So that works really well.

You can create snapshots. So if you want to snapshot, you just do a snapshot create on the Arch one. If you don't give it a name, it just picks one for you. So we can see there's now a snapshot that we can restore or just keep around. There's also the ability to do automatic snapshots with a cron-type pattern, with automatic snapshot expiry, all that kind of stuff.

Now let's create a custom storage volume. So we'll just do storage volume create, default, and let's call it demo. And then we're going to add that as a device to, let's say, Arch. So just call it demo, it's a disk, it comes from the default storage pool, and the volume is called demo. Configure this. There. And I forgot to do add. There, device add. Now if we go inside of that instance, we see there's a new entry there, an empty mount for now. And we can do the same kind of thing to share my home directory; hey, that's my home directory. So that's very nice and easy. It automatically handles virtiofs, 9p, all that kind of stuff, and it talks to the agent to trigger the mounts. Our goal is for virtual machines to feel like containers as much as we can, and having that agent in there really makes that super easy.

And for the last party trick of this demo, let's launch images: openSUSE Tumbleweed desktop KDE, as a desktop image.
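For reference, the commands in this part of the demo look roughly like the following; the image aliases, instance names and mount path are just examples:

  # Containers
  incus launch images:archlinux arch
  incus launch images:alpine/edge alpine
  # A virtual machine instead of a container
  incus launch images:ubuntu/22.04 ubuntu --vm
  incus list
  # Shells straight into containers and VMs alike
  incus exec alpine -- sh
  incus exec arch -- bash
  # Snapshot the Arch container (a name is picked if you don't give one)
  incus snapshot create arch
  # Custom storage volume, attached to an instance as a disk device
  incus storage volume create default demo
  incus config device add arch demo disk pool=default source=demo path=/mnt/demo
  # Desktop image in a VM, with the VGA console attached right away
  incus launch images:opensuse/tumbleweed/desktop-kde kde --vm --console=vga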
And I also tell it that I want to see the VGA console as soon as it starts. So when I do that, it actually gets me a second window, which I need to drag over here. And let's try to full-screen that thing. Maybe. Yeah, full screen doesn't work. Okay. But we can see it boot, and it's going to get us eventually into a KDE session. Not sure why the resize didn't work. Oh, okay. Maybe the desktop will? I saw a mouse pointer that was about the right size. Nope. Okay. So it is starting KDE there. So we even have some desktop images: we've got an Arch desktop image with GNOME, we've got Ubuntu with GNOME, and we've got openSUSE with KDE. We're not building too many more of them, mostly because they're actually very expensive to build in terms of build time and distributing pretty large images. But it shows that this works, and if you want to run your own, you can totally do that.

All right. So let's just go back to the slides. Come on. There we go. So, other things you can do: this thing is effectively your own local tiny cloud, and it's all built on a REST API, which also makes it very easy to integrate with other things. And "other things" here means some of the pretty usual tools you might be dealing with. Terraform and OpenTofu you can integrate with very easily; we've got a provider we maintain ourselves that you get to use. Ansible has got a connection plugin that you can use to deploy any of your playbooks directly against virtual machines or containers. And if you want to build your own images as derivatives of ours, you can use Packer as a very easy way to take our images and inject whatever you want in there. There are a bunch of other tools; LXD especially had a lot of third-party tools that could integrate with it, and a bunch of those are now migrating over to Incus or supporting both. So it's a list that's growing very rapidly.

Other things you can do: Incus exposes an OpenMetrics endpoint with the details of resource consumption and usage and all that kind of stuff for all of the instances running on it. So you can integrate that with Prometheus to scrape that data and keep it on the side. It also supports streaming logging and audit events to Grafana Loki. So you effectively get your events and your metrics in the same spot, at which point you can use the dashboard that we've got in the Grafana store to get something like that up and running. So that's pretty convenient as well.

If you don't like typing the name of your remote every time, you can switch to a remote. You just do a remote switch, at which point, if I do a list, it goes straight to that remote and you don't need to type it every single time. That cluster is actually using a mix of local storage and remote storage: it's got Ceph for HDDs and SSDs, and it's got a local ZFS storage pool as well. And on the network side it uses OVN, so it actually has all of that stuff in place. And if we look at the remote list from earlier, we can see that it uses OIDC for login, so it's also using the authentication bits I mentioned. Now, if you wanted to launch, say, a Debian 12 instance on that thing, you can do it the perfectly normal way, and that's just going to instruct the cluster to go and do it. So in this case, thankfully, it's running back home with very fast internet, so I don't need to wait for the conference Wi-Fi to download stuff for me.
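As a sketch of what that remote workflow looks like, assuming a made-up remote name, URL, instance names and cluster member:

  # Add the cluster as a remote and make it the default
  incus remote add my-cluster https://cluster.example.net:8443
  incus remote switch my-cluster
  incus remote list
  incus list
  # Launch a Debian 12 instance somewhere on the cluster;
  # Incus picks a suitable server unless you target one yourself
  incus launch images:debian/12 web
  incus launch images:debian/12 web2 --target server3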
But it's actually downloading the image, unpacking it, creating the volume, on Ceph in this case, and then starting the instance. I didn't even tell it what server I wanted it on, so it just picked wherever made sense. Which is actually funny, because if you use an image and you don't specify what architecture you want, you're going to get one of the architectures. So in this case I didn't tell it whether I wanted ARM or Intel; there was more capacity on ARM, so I got an ARM instance. We can go and check that easily. I know that the server it picked in that list is an ARM server, so if I go in here and look at the architecture, it's aarch64.

All right. Let's just look at things here. I wanted to show the dashboard as well, so I'm just going to drag that particular window over. Where is it? It is here. I had it open; I've got way too many windows open on my laptop. Okay. So it's Grafana. It's loading. It's loading. And in this dashboard... okay, I'm just making sure it looks at the right cluster before I show it to you. So there we go. Yeah. So this is actually the dashboard for the very first cluster I was talking to, the one I was showing. It's looking at the demo project. So we can see the top offenders as far as resource usage and that kind of stuff. We can look at graphs for network and for storage. And we can even drill down into specific instances and see what they've been doing, so you could expand an instance and go look at its usage. It also gets all of the events from Loki, so we can see the instance creation and any commands like that. That shell I got is actually right here. And any errors and stuff are also all captured right there. So that's the metrics side of things.

All right. So where do you get to run this thing? Well, quite a few distros have packages for Incus now and, as I've mentioned, Debian and Ubuntu will have packages in their next stable release. We're also looking at doing a long-term support release for Incus itself. Right now you might see version numbers like 0.4 or 0.5 and be a bit scared about it. You need to remember that this is a derivative of LXD, so one of our 0.x releases is just as stable, if not more stable, than a 5.x on the LXD side. We've just not done anything past zero because we're waiting for the LTS of our other projects within Linux Containers, which we will do in March. That's going to be the LTS of LXC, LXCFS and Incus all at the same time, and we usually try to line up versions, so Incus is probably going to jump straight from 0.6 to 6.0. That's what's going to happen with the LTS.

As far as other features we're looking at adding: with the release of Linux 6.7, we now have bcachefs in the Linux kernel, and it's pretty interesting for us on the Incus side because it's very close to what ZFS or btrfs do, which we already support. So we're looking at adding a bcachefs storage driver for people who want to start using that. On the cluster side, I mentioned that we support Ceph right now, which is a pretty good option but also a bit heavyweight. A bunch of people could instead do something different, whether it's using a shared NVMe-over-fabrics drive or using some old Fibre Channel SAN they might have gotten on eBay or something like that.
So we're looking at adding distributed LVM as a storage driver, which effectively means that if you have multiple systems that can all see the exact same block device somehow, then you can use LVM on top of that, with a distributed lock manager on top, so that all of the different machines in the cluster get to use it. That kind of solves the problem of how to use that old SAN you have at work, but it can also work in some other cases. I think someone is looking at using that with DRBD, for example, as an option.

We are looking at adding OCI application container support. That's potentially a bit of a surprise for some folks. But we feel that these days the application container space has stabilized enough, and we've got enough users who, for some reason, are literally just running Docker inside of Incus to run a few specific applications, that this particular use case is something we could support natively. We're not looking at competing with Kubernetes, with all of the service mesh and auto-distribution things; that's crazy stuff, they get to do that. But we would like it to be possible for you to run two or three small containers for your IoT software or whatever. That's kind of what we're looking at doing there.

And on the networking side, we're using OVN for distributed networking, which works pretty well, but we're also working on another feature of OVN called interconnect, which allows for having multiple clusters and then interconnecting their networks. So you can have instances on networks on multiple clusters, and then connect those networks together and route between them.

There's also an online demo where you get a session with Incus pre-installed, and you've got 30 minutes to just take it for a ride, play with it for a bit, see if it's something that's interesting to you; and if it is, then you can go and install it for yourself.

And that's it. We can try some questions. We've seen it's a bit difficult to hear, so please, everyone, remain quiet if there are any questions, so we can try and hear them. Is there anything? Oh, you have it there. Okay.

So, I'm quite sure some people are interested in the differences between that and this too. Compared to what? Sorry, I didn't catch that part. Oh, VMware. Okay. Well, so, it's a lot cheaper. Yeah, for anyone who's using VMware professionally and has followed the news recently, let's say your VMware bill is not great right now. So this is a viable alternative in many cases. It doesn't have all 50,000 components around it and all that kind of stuff. But if you are primarily using it as a way to get a cluster, create a bunch of VMs, maybe create some containers, and run whatever OS you want on there, it will do that just fine. So it's definitely an option there. It's kind of in the same vein at that point as, you know, Proxmox or some of those other options; it will work just fine. The one caveat is that it's not a distribution; you can install it on any system you want. It's obviously all open source, and yeah, it is a pretty viable alternative, and we do have a lot of people who are using VMware that are looking very closely at this as a potential way out of VMware right now.

So the question here, to better understand the terminology, is where the line falls between a system container and an application container.
Yeah, so the difference between application containers and system containers is that a system container will run a full Linux distro. It will run systemd, it's going to have udev running, you'll be able to get into it, install packages, reboot it. It's really designed to be a stateful, long-running type of thing. Whereas your application container is usually, ideally, a single process, or a process and some of its children. It's really designed around delivering a specific application, and most often it's going to be quite stateless, with the idea that you can just nuke the thing and replace it at any point. They're kind of two different concepts. Some people like the idea of having a system where they actually get to select what packages are installed, the exact config and stuff, and some people prefer not to care about any of that and just have something pre-installed, and that's what an application container gets you. That's why having the ability to run some application containers directly on Incus alongside the system containers will, I think, be quite interesting: if, for a specific application, it's easier to just get their pre-made thing, then you'll be able to do that while still being able to run everything else.

Yep, so we do have a bash completion profile. I absolutely hate shell completion for some reason, so I don't have it on my machine, so I can't show you.

Can application container runtimes provide system containers, for those who are interested in that? Yeah, I mean, it is possible to get application container runtimes to give you a full system container; nothing prevents you from deciding that the application you run in the container is an init. That's definitely possible. It's just not what they were really meant for, so it tends to feel kind of less polished, because that wasn't their goal. Things like being able to dynamically pass new files in, dynamically attach devices, get whatever number of shells you want, or being able to interact with the outside world through a Unix socket inside of there: those kinds of things didn't make too much sense for application containers at the beginning, and so some of those features will probably be lacking on that side. I was going to say, I usually like having one tool for the job and picking the right tool for the job, and effectively, if you really care about running a bunch of application containers, use one of the application container runtimes, whether that's Podman, Docker or some of the others.

One thing that's actually interesting is that you can totally run Docker or Podman inside of an Incus container. So that works. You can run your normal Ubuntu, Debian or whatever else inside of an Incus container, then install Docker or Podman in there and run some containers alongside whatever else you might be doing in that container. So that's something that works fine; there's a rough sketch of that setup at the end of this transcript.

I think we're probably out of time at this point. So thanks a lot, everyone. I'm probably going to be just outside for a tiny bit if anyone has more questions. But yeah, thanks a lot.
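A minimal sketch of the Docker-inside-Incus setup mentioned above, assuming a Debian image and the docker.io package; the instance name is made up, and depending on your kernel and storage driver some extra configuration may be needed:

  # System container with nesting enabled so it can run its own containers
  incus launch images:debian/12 dockerhost -c security.nesting=true
  # Install and use Docker inside it like on any normal Debian system
  incus exec dockerhost -- apt-get update
  incus exec dockerhost -- apt-get install -y docker.io
  incus exec dockerhost -- docker run -d --name web nginx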