Okay, next lightning talk. We don't have a lot of time to switch between speakers, so please take a seat. The next lightning talk is Christian, who's notorious for being very good at staying on time. I did a great job once, and I still benefit from that reputation.

So if you see me, I talk about containers a lot. This time I'd like to give an update on the HPC container conformance project, which we, or rather I, started last year, and which got a nice addition: together with some other folks we created an OCI working group around it.

So what is the problem, or let's maybe call it a challenge? Everyone knows modules, right? If you're new to containers and you use native code, you most likely use modules to figure out the best binary for your program on the system you're on. You do `module load gromacs` and the module system picks the best binary for the current system. It's a runtime decision: you have a bunch of software in a software share, and it just picks the best one.

Problem, or not a problem, I actually think it's a good thing: with containers you don't want a lot of binaries or different variations of binaries within the container. You want one. A single set of libraries and a single binary for a given problem. So what we ended up doing was creating multiple containers for different systems, say for a CPU like Graviton, Skylake, or Zen 3, or maybe even using a name that identifies the cluster we're running on. That's fine, but the problem is: how do you pick the correct image?

Within the container space there is something called an image index, which is just a matching artifact that says: you're on an Arm system, you get this image; you're on an AMD or Intel, so x86, system, you get this image; and if you're a wasm person, you even get yet another one. The thing is, that's not fine-grained enough, it's very coarse-grained. You can't just put all your different x86 builds in there.

What we actually want is an image index that is more specific, so you can say: this CPU plus this accelerator gives me this image; with that CPU I get another one, maybe even built with a particular MPI in mind, so that with MPICH in this version I get this image and with Open MPI I get a different one. You get the idea: a very long image index with many different variations, and then you pick the best image.

Another thing I didn't mention on the first slide: runtimes will go through the normal image index and just pick the first match they get. Even if you have an image index with five different x86 images in it, the runtime picks the first one that matches, and off you go. For us that doesn't work; we need to go through all the different specific images we have, and the runtime ideally picks the best image for you.

Okay, so I did some hacking back in the day. I used an unused feature of the image index to add some identification, so I could say: this image is for Broadwell with this NVIDIA driver. And I hacked the Docker runtime to also recognize the best matching image for the specific platform you're on. With this ugly hack you were able to create an image index with a lot of different images for different systems, and then configure your runtime to search for a specific set of annotations, a tag list if you will.
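To make that concrete, here is roughly what such an annotated index could look like. The OCI image index does allow an `annotations` map on each manifest descriptor; the keys below are purely illustrative (my reconstruction of the idea, not any spec), and the digests are placeholders:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<digest-of-broadwell-build>",
      "size": 1234,
      "platform": { "architecture": "amd64", "os": "linux" },
      "annotations": {
        "example.hpc.cpu.microarchitecture": "broadwell",
        "example.hpc.gpu.driver": "nvidia>=525"
      }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<digest-of-zen3-build>",
      "size": 1234,
      "platform": { "architecture": "amd64", "os": "linux" },
      "annotations": {
        "example.hpc.cpu.microarchitecture": "zen3",
        "example.hpc.mpi": "openmpi-4"
      }
    }
  ]
}
```

A stock runtime that only looks at `platform` would simply take the first amd64/linux entry; the hacked runtime additionally matched these annotations against what it detected on the host.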
That was hacky, and it wasn't meant to stay that way. I created a pull request for Docker, which of course was turned down, because it is ugly. What's ugly about it is that you'd need to implement it in every runtime and in every scheduler to make sure it works. That was of course bogus.

So what we did, as I said, last year or the year before, was create the HPC container conformance project, to establish best practices and provide some guidance on how to build images for HPC and how to implement the use of those images.

The first part, which is very brief, is how we expect an image to behave. There are two types of containers: application containers, as I call them, and login containers. An application container is when you have, for instance, a binary and you set the entry point to that application; you can create an alias that runs a program within a container without you really knowing about it. Take GROMACS as an example: instead of running the binary directly, you point an alias at the container and run that. The problem is debugging: the entry point is always tricky to get rid of, or at least I have to look up the Docker command every time. The other thing is that a lot of HPC applications have multiple binaries you want to run, maybe a pre-processor, the application itself, and a post-processor. In that case you'd need three different images, because the entry point differs. That's kind of ugly and not really usable for HPC.

What we actually want is a login container: you start the container and it drops you into a bash. That way you can just augment your job script and do a `docker run` or a `singularity run` or whatever to execute the `gmx` command, for instance; you run it inside the container and it just works.

Another aspect that's very important, though hopefully everyone does it anyway: if you use a shared file system, the container needs to be agnostic about the user. You cannot rely on a certain user existing within the container. You should make sure the container is able to run as nobody, because the user ID and group ID will be passed in from outside to get access to the shared file system. That way the process is owned by the user outside the container, and the container itself has no knowledge of the actual user.
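As a minimal sketch of both points, assuming a hypothetical image called `myorg/gromacs:skylake` (the image name and the `gmx` invocations are just for illustration):

```sh
# Application container: a single binary is baked in as the entry point.
# Getting a shell for debugging means overriding it explicitly:
docker run --rm -it --entrypoint /bin/bash myorg/gromacs:skylake

# Login container: no application entry point, the image drops you into a
# shell, so one image covers pre-processing, solver, and post-processing.
# Passing the host UID:GID keeps the image user-agnostic; files written to
# the shared file system are owned by the user outside the container.
docker run --rm -it \
    --user "$(id -u):$(id -g)" \
    -v /shared:/shared \
    myorg/gromacs:skylake \
    bash -c 'gmx grompp -f run.mdp -o run.tpr && gmx mdrun -s run.tpr'
```

With `--user`, no matching account has to exist inside the image; the process simply runs under the numeric UID and GID of the submitting user.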
Okay, that's how we expect a container to behave, and I think that's common and already understood. What I talked about last time was annotations. That was an idea of us HPC guys and girls, simmering in our own soup, trying to come up with something to put forward. It was a nice exercise, but in the end we jumped on the train of the image compatibility working group at the OCI. You might ask, and hopefully a lot of you know it already: what is the OCI? It's the Open Container Initiative. It maintains the relevant specifications around containers: what an image looks like, how runtimes interact with images, the distribution specification, so how registries work, security stuff, and so on. It's kind of a body that maintains these specifications, and together with others we formed a working group there called image compatibility.

As I discussed in the beginning, we want to extend the manifest list, the image index, so that you can pick not only by platform and architecture, but by what I described as the desired state for the image index, so we can pick the right, optimized image for a certain application. And of course we want to express what the image was built for, what we expect from the host, what runtime we might want to use, and so on. All this cool stuff we want to incorporate.

And why is this the better way? We HPC folks like to do our own thing; we're kind of special and snowflakey. But this is the better way, because we interact with the OCI community and put our needs in front of them, so that we can take other things into account. For instance, wasm is a thing; I haven't used it, but it seems to be a thing, and it's a runtime within the container ecosystem. And of course we also have different runtimes: Singularity, Apptainer, Sarus, what have you. Picking one runtime over another is something we are interested in, and the wasm folks are interested in it too. Say you have a Kubernetes cluster, and you have an x86 image and a wasm image for an application; maybe you want to pick one or the other depending on the conditions. They want this, and we want this as well. Then there's scheduling and registries: HPC is great, but container tech is much wider than HPC, to say the least. We want to make sure we align with Kubernetes, we want to make sure the registries are aligned with us, and the OCI working groups have a well-oiled machine of standardization, so that's also a very good place to do this.

Okay, where are we now? We're discussing use cases, and while discussing them we've already brainstormed some implementation ideas. We came up with a couple of use cases, or stakeholders, let's say. The first one, of course, since we are all building images, is the image author: if you build a container image, you want to write the compatibility definition we want to propose. Ideally it's implemented in EasyBuild, Spack, or Guix, so that you don't need to do it yourself; Vanessa already wrote a little tool for that. The next is the system admin, who wants to make sure that the system he's maybe procuring is able to run the container: you go through all the compatibilities and figure out what works and what doesn't, and you also want to make sure the configuration of your system is actually able to run this image. The end user just wants it to work, so we need to make sure the system admin, the image author, and the other stakeholders harmonize and converge on a certain configuration. There are other use cases; I don't have time to go through all of them, but we have a list of these use cases that we're currently working through.
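The exact schema is still under discussion in the working group, so the following is only a hypothetical sketch of what an image author's compatibility definition might express; none of the field names are final:

```json
{
  "compatibilities": [
    {
      "description": "GROMACS built for Zen 3 with Open MPI",
      "cpu": {
        "architecture": "amd64",
        "microarchitecture": "zen3",
        "features": ["avx2"]
      },
      "os": { "kernel": ">=5.10" },
      "mpi": { "flavor": "openmpi", "version": ">=4.1" },
      "runtime": { "preferred": ["apptainer", "sarus"] }
    }
  ]
}
```

A system admin could match a document like this against the facts of a machine during procurement, and a scheduler could use it for placement, which maps directly onto the stakeholders above.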
Our meeting is every Monday, and if you want to join, please do. I have some links and resources; the slides are available online. If you want to get in touch, there is an HPC containers Slack, there is an OCI Slack channel, and there is an HPC.social Slack channel as well if you want a more general overview. And if you're at ISC, make sure to join our High Performance Container Workshop. It's the tenth edition, we've been doing it for ten years now, which is pretty cool. And we have a "friends of containers" boat trip, so if you'd like to meet container guys and girls, mark your calendar for the 13th of May. Yeah, that's it. Thanks. And I think I'm good on time. Awesome. Do I get a sticker if I do it three times in a row on time? You get a beer. Oh, even better. Right, we have time for one question.