I'm happy to introduce our first speaker this morning, who as you can see is already set up. I'll hand it over to Lennart to open the distributions devroom for the day.

Does this actually work? It works, right? Hi, good morning, and thank you for waking up so early for me, much appreciated. It was hard for me; it was probably hard for you as well. Today I'm going to talk about TPMs, UKIs and immutable initrds. I'll give a second talk later today in the boot and init track, so the topics are related, but there I want to talk more about the early-boot details, and here I want to focus on what it all actually means for distributions.

So: UKIs, TPMs, immutable initrds and full disk encryption. I think this is where the Linux distribution world should be going. But of course, I am not the Linux distribution world, so in this talk I want to explain what I think the next steps might be for distributions that actually want to adopt all this.

To start out with: this is a fairly technical talk. I'm pretty sure some of you have at least a rough idea of what I'm going to talk about, but just to get everyone to a level where you have a chance to follow, let's go through some very basic vocabulary.

The first term is Secure Boot. Many of you have probably come into contact with it. It's the mechanism where the various binaries that are part of the boot process are cryptographically signed, and the firmware makes sure from early on that only properly signed code is run. The signing keys are kept by Microsoft, so it's a centralized-authority kind of thing. At this point, because they sign so much, it's in practice more of a deny list of bad things than an allow list of good things. And yes, there's certainly criticism to be had about the centralized nature of it.

Then there's measured boot. Measured boot is not as well known or accepted in the Linux world yet. Where Secure Boot disallows bad components from even running, measured boot allows everything to run, but before you run the next component you make a measurement: you take a cryptographic hash of whatever you're about to start, and you write it into a certain register in a TPM, in an irreversible way. Afterwards you can cryptographically verify that everything started so far is actually what you think it is. The good thing about this is that it's more democratic in a way, because it doesn't restrict anyone from running anything, but you can later use these measurements to protect your secrets, which is what we're going to talk about. There's no centralized authority, because no restriction is placed on what's booted; it's up to you to say "only if this specific software was run during my boot process may my disk encryption secrets be released". I think that gives you a more specific, more focused kind of security than Secure Boot does.

The TPM, I already mentioned the word, is this little chip. It used to be a discrete chip; it's in pretty much all laptops, and in one form or another it's also in all the cell phones.
On other platforms they call it a Secure Enclave and such, but conceptually it's always the same thing: an isolated security environment where you can keep your keys, and which maintains access policies on those keys. It's very common; pretty much every laptop sold in the last 15 years already has a TPM. On Linux it is used in a passive sense, because measurements go into the TPM anyway, but it's generally not actively used by distributions. That doesn't mean you can't use it, but so far it's typically been left to hackers with an interest in TPMs to enable it; regular people do not use it. That's completely different from Windows and the other operating systems, where it has basically always been used by default: BitLocker, if you don't do anything, just locks your disk to the TPM.

One specific part of the TPM is the PCR registers. I already referenced them earlier without calling them PCRs. Those are the registers you write these hash values into. They implement one relatively simple cryptographic operation: they take the old register value and the new measurement and hash them together. That means the final value of the register will only be what you expect if exactly the same things were measured into it during boot. You cannot reverse this, as I mentioned: once something is measured in, it's measured in, and the only way to get the register back to zero is to reboot. All the registers start at zero; there are typically 24 of them, roughly half used by the firmware and the other half available to the operating system. Once you have these PCR values you can bind security policy to them, such as the locking of disk secrets: you can say that your disk secrets shall only be released if the operating system is in a known-good state. How that actually works we'll get into a little later.
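To make that extend operation concrete, here is a minimal sketch in Python of what a PCR conceptually does. It is an illustration of the hash-chaining just described, not the TPM's actual implementation, and the measured blobs are placeholders.

```python
import hashlib

def extend(pcr: bytes, measured_blob: bytes, bank: str = "sha256") -> bytes:
    """New PCR value = H(old PCR value || H(measured blob))."""
    digest = hashlib.new(bank, measured_blob).digest()
    return hashlib.new(bank, pcr + digest).digest()

# PCRs start at all zeros after a reboot; each boot component is measured in turn.
pcr = bytes(hashlib.new("sha256").digest_size)
for component in (b"boot loader", b"kernel", b"initrd"):
    pcr = extend(pcr, component)

# The final value is only reproducible if exactly the same blobs were
# measured, in exactly the same order.
print(pcr.hex())
```

Because the chaining is one-way, nothing that was measured in can be "un-measured"; only a reboot resets the register.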
The next term is UKI. By the way, I talk a lot and I have lots of slides, but I much prefer a discussion over me just talking, so if anyone has questions, please interrupt me; let's talk about things right away rather than moving all the questions to the end, because I'm pretty sure half of you will have forgotten your questions by then. Anyway, feel invited to interrupt.

So, UKIs, unified kernel images, which is also what my other talk will be about. It's not a radically new approach, but it's certainly different from how most distributions have managed their kernel images so far. For a UKI you take a kernel image, an initrd, and a couple of other things like the kernel command line, a boot splash, a device tree and so on, glue it all together into one UEFI binary, a PE binary, sign it as a whole, and during boot it gets measured as a whole. UKIs are great because they make things very, very predictable: once you deploy a UKI it's one file, you drop it into the ESP, the EFI system partition, which is where the firmware boots from. You can update it as one file, which is extremely robust: there's no risk of half-updated kernels, you either have the new file or the old file, and that's fantastic. So it's great from a robustness point of view, and it's also great for other reasons: because it's always the same image, you can test it better, and if you deploy it on lots of different machines it will probably work equally well everywhere, or equally badly, but hopefully equally well.

So much for the vocabulary, just so we all know the basics of what's coming next. Now let me explain the goals of what I'm actually doing here. The general goal is to tighten security and provide code integrity on Linux, and this is mostly about traditional Linux, by which I mean distribution-based Linux: not Android or Chrome OS, but distributions like Fedora, Debian and so on, which have a certain democratic approach where everybody can participate; not over-the-wall open source, but actual open source. I want these traditional Linux systems to catch up to the level of security the others already provide: what Windows has provided for a long time, what macOS provides these days, what Android and Chrome OS provide; they all have these code-integrity protections. The threat model, if you want to talk about threat models, is usually the evil maid scenario: you leave your laptop in your hotel room and you want to be sure that when you come back it's still your laptop with your software on it and it hasn't been backdoored, because right now it's very easy to backdoor.

So the focus is generic distributions, Fedora, Debian and so on, and the goal is to make things just work. I want to move this out of the area where it's a specialist thing that TPM-loving hackers enable, and instead make it something that just works and defaults to being enabled in distributions, rather than something you have to opt into and do work for. That's a big ask, but I think it's necessary, because nowadays everybody knows the value of IT security, and it's really sad that Linux has very little of it by default; it is laughably easy to backdoor a Linux laptop right now, even if it uses full disk encryption, because initrds and the like are not protected at all.

I already used the word democratic a couple of times. My own focus is much more on measured boot than on Secure Boot. Secure Boot is established: all the big distributions sign their kernels with the Microsoft key and so on. I actually work for Microsoft, as you might know, but I still don't want to sign my kernels with a Microsoft key. I think measured boot is the more interesting technology because it allows you to define your local policies yourself: you can sign your kernels yourself, you can define the policies for your secrets yourself, and you can simply say, I don't want to allow my machine to run Chrome OS or Windows or whatever else; I just want it to run my choice of kernel, my choice of initrd, my choice of Linux operating system. I think that fits nicely into how Linux distributions are traditionally organized, because they are in a way democratic too. So, to be more technical, what are the specific goals?
I want measured boot to be done by default, and by that I mean not only that the firmware measures up to the kernel, which it has been doing by default for the last ten years or so, but that the measuring actually continues into the rest of the boot process, and into the runtime of the operating system later as well. I also want Secure Boot to cover the whole boot process. Right now we're in this really weird situation where it only covers the basic kernel and not the initrd, which I find kind of laughable. But again, measured boot is my main focus; Secure Boot and measured boot are two different protections, and we should do both if we can, since together you get the best results, but I find the protection measured boot provides much more interesting than the one Secure Boot provides.

All the measurements made during the boot process, all the hashes of the stuff that gets hashed, I want to be predictable. Predictable means that even before you boot, if you know the components involved, you know what the PCR values are going to be. This matters because you bind the security of your full-disk-encryption keys to these PCR values, and if you cannot predict them you cannot do that. Only if you know that booting the Fedora kernel from this version onwards results in these specific hashes ending up in the PCRs can you say: unlock my keys only if the PCRs have these values. So predictability means a lot. Why do I even mention this? Because in GRUB, for example, measurements are not that predictable: it doesn't measure the actual code so much as the selected path through the code, which means there are lots of variables in what ends up in the PCR values; if you move around in the menu, you might end up with different measurements.
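As a hedged sketch of what "predictable" buys you, building on the earlier extend example: if you know exactly which blobs will be measured, and in which order, you can compute the expected PCR value ahead of time and bind a disk-unlock policy to it. The component names here are hypothetical stand-ins for the real files.

```python
import hashlib

def predict_pcr(blobs, bank="sha256"):
    """Replay the expected measurements to compute the PCR value before booting."""
    pcr = bytes(hashlib.new(bank).digest_size)
    for blob in blobs:
        digest = hashlib.new(bank, blob).digest()
        pcr = hashlib.new(bank, pcr + digest).digest()
    return pcr.hex()

# Hypothetical next-boot components; real tooling derives these from the
# actual UKI and related files.
expected = predict_pcr([b"fedora-uki-6.7", b"kernel command line"])
print("seal the disk-encryption key against PCR value", expected)

# With GRUB-style measurements of the path taken through menus and scripts,
# this pre-computation is not possible, which is the problem described above.
```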
One of the goals is specifically to make disk encryption easy by default, and that explicitly includes servers. Right now I'm pretty sure most people in this room use disk encryption on their laptop, and my assumption is that most of you use it interactively, with keyboard unlock: you boot the machine, you type in your passphrase. That's great, but we can do so much better, and it's not something you could ever do on servers; there's usually nobody in the server room to unlock anything. What TPMs give you is the ability to do disk encryption non-interactively, because the TPM keeps the secret for you. You do the PCR dance and you basically tell it to release the secret only when the operating system that's booted is your version of RHEL or your version of Mariner or whatever else, and nothing else. And this also means, I think, that we should get to the point where distributions enable disk encryption by default, even if people didn't ask for it, and without necessarily even asking for a passphrase at install time, simply because by default the disk should be locked to the TPM; and if people then want to enroll a manual key or a FIDO key or whatever else, that goes on top of the TPM rather than being what you start out with. This is the goal eventually; we're not there yet. We don't even have the infrastructure for it, but this is basically what Chromebooks and similar systems generally do, and I think we should catch up and make this work on Linux too.

Another goal is that the boot process is testable, as I already mentioned: if everything is strictly predictable and uniform, then across all installations you have essentially the same set of software, maybe in slightly different versions because one person already updated their machine today and another didn't, but it should still be a small set of versions. By the way, again: questions?

So when you talk about measured boot being local, do you mean local in terms of the hardware vendor, local to the distribution, or local to the owner of the machine? Because at the moment, with Secure Boot, you have buy-in from lots of parties: Microsoft for signing, firmware vendors, and then the distribution that follows the whole process.

Ultimately I mean you: on your laptop you should be in power. But of course that's a big ask; if I install Linux on my mother's laptop, she's not going to be capable of managing that. So my assumption is, and we'll hopefully come back to this later given the time, that by default you get kernels and an OS provided, signed and protected for you by the distribution vendor, but I certainly want to enable you to say "fuck this, I'm going to enroll my own stuff", and we want to make that easy and robust so you can actually do it. You can then be even more restrictive: not just "it's okay that Fedora gets access to my disk encryption", but "only Fedora, in the version I picked, on the architecture I picked", and so on; you can make the policy much more focused, because you know your machine better than the distribution does; you know, for example, that you don't boot from iSCSI, which Fedora cannot know. So the goal is definitely to democratize it, to put people in control if they want it, while knowing that this is not what most people will do themselves.

So basically you said "you", but instead of you it's your TPM, so you don't even have to know about it; it's easy for everybody to use.

Yeah, right. Okay, let's talk a little bit about the status quo, how things are right now. Most of the mainstream distributions currently provide what I call minimal Secure Boot, because it only really covers the boot loader and the kernel, not the initrd, which I find really embarrassing, given that in 2024 you can just go to the ESP or boot partition, modify the initrd any way you want, and the system will happily boot from it and nobody takes notice.

But the initrd is just a file system, right? Why isn't it handled the same way as the kernel and measured?

I mean, the kernel could authenticate it if it wanted to; it just doesn't, that's the thing I'm saying. There is no authentication of the initrd right now.
Not in the generic distributions, at least, and that's rooted in the fact that in traditional Linux, initrds are always generated locally on the system, so ultimately they are different on every single system. They pull in not only code but also local configuration, and that means you cannot sign them on vendor systems: if you're a customer of, I don't know, SUSE, and they give you a kernel and an initrd, they cannot sign the initrd for you, because that initrd only exists on your specific system. So I think it's a really bad situation, because it means any evil maid can go into my hotel room, pull the disk out of my laptop, change any file in the initrd they want, in particular the password prompt for my LUKS setup, make it send everything to some central server, and I will not be able to notice. That is a situation that I think is really stupid all around.

So, as I already mentioned: the initrds are locally built, they are not protected by Secure Boot, and very few measurements are actually made. The kernel now does a couple of them on its own, like measuring the initrd when booted in UEFI mode, but the measurements made by GRUB, as I mentioned, are not predictable, and measuring stops the moment the kernel takes over: user space traditionally doesn't measure anything anymore. I think that's bad, because what actually makes a ton of sense for root disk encryption is that the key is released by the TPM only during the initrd phase, and never later. That's a really nice property: as you leave the initrd you drop any chance of recovering the disk encryption key again. The kernel will of course keep it somewhere in memory, because it actually needs it to do the encryption, but via the PCR mechanism you can arrange, and we now have the infrastructure in place for this, that later on you can talk to the TPM as much as you want and you will not be able to recover the disk encryption key from it anymore, because we basically blew a fuse. But this requires that we make measurements during the boot process and during runtime, so that policies like this are actually expressible.
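Here is a small illustrative sketch of that "blow a fuse" idea, under the same hash-chaining model as before: the unseal policy expects the PCR value as it is during the initrd; once the initrd hands over, one more measurement goes into the same PCR, so the same policy can never match again in the running system. The phase strings are just examples.

```python
import hashlib

def extend(pcr, blob, bank="sha256"):
    d = hashlib.new(bank, blob).digest()
    return hashlib.new(bank, pcr + d).digest()

pcr = bytes(32)
pcr = extend(pcr, b"uki")            # measured by the boot chain
policy_value = pcr                   # the disk key was sealed against this value

assert pcr == policy_value           # initrd phase: the TPM would unseal here

pcr = extend(pcr, b"leave-initrd")   # the phase transition itself is measured
assert pcr != policy_value           # afterwards the old policy can never match again
```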
Also on the status quo of the TPM side: there are actually two TPM software stacks for Linux, but outside of hacker circles nobody uses them. You can script something together, there are plenty of how-tos on the internet, but effectively nobody does it; maybe 15 people in the world. And as I already mentioned, the LUKS password prompt is implemented in the initrd, and the initrd is not protected in any way, so it's trivially backdoorable; it's a terrible thing. In summary I'd call this pretty weak security, and compared to other operating systems you could use words like laughable.

So what's the vision? Primarily, we want kernels to be shipped as UKIs by distributions, so that everything is protected by Secure Boot, including the initrd, measured as one unit, and fully predictable. This means the kernels and initrds need to be pre-built, not built on the local system; for kernels that was traditionally the case anyway, unless you run Gentoo, so the move is to pre-build the initrds centrally too. If you do all this, you get stable hashes in the PCRs, you can bind the disk encryption to them, and you get universal predictability, because the software doesn't deviate between systems; it's always the same software. You get robust updates, as I mentioned, because a kernel is updated as a single file, and you can test the combinations very well.

A secondary goal: what I just described is again a central authority to some degree, because it's the distributions doing it. I think it's also important to keep the people who want to sign their own stuff in the picture, which was basically your question earlier: if you want to generate a key pair and sign your own stuff, we should help you with that. In this model you will probably still use a pre-built kernel from your distribution, but you might combine it with a locally built initrd and then sign the result with your key instead of the distribution's key. The benefit is maximum flexibility, but you also need to know your shit. The PCRs remain predictable, but only within your local scope, because only you know what you're actually going to build into the initrd and how you're going to combine it. And it's a larger installation footprint, because you suddenly need build tools installed to do this; it might not be much worse than the current situation with dracut and the like, but in some ways it is, because you now need signing tools as well. Both of these models are certainly in scope for what we should do, I think.

So the ultimate vision is that distributions, in their installers, figure out whether there is a local TPM. Not all systems have TPMs; ARM systems in particular often have other mechanisms but not TPMs, and VMs sometimes have them and sometimes don't, so we always have to cope with a TPM being present or absent. But the goal is certainly that if one is there, we lock to it by default. Locking to it by default doesn't mean exclusively non-interactive unlocking: it means we can unlock non-interactively, but you can still combine it with a PIN. A PIN is the exact same thing as a passphrase, except that TPM people call it a PIN; it doesn't imply digits or anything. So the goal is to always encrypt data at rest, to validate the boot process when we unlock things, so we know the right software is running at the right time under the right conditions, and to install systems that way by default.
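A minimal sketch of that install-time decision, assuming the kernel exposes a TPM 2.0 via its resource-manager device node; a real installer would of course do much more than this.

```python
import os

def have_tpm2() -> bool:
    # /dev/tpmrm0 is the kernel's TPM 2.0 resource-manager device node.
    return os.path.exists("/dev/tpmrm0")

if have_tpm2():
    print("enroll the root volume against the TPM by default (PIN optional, on top)")
else:
    print("no TPM available: fall back to passphrase or FIDO enrollment")
```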
And then I want measurements to be made for all facets of the system: not just the boot code, but also the OS itself, the applications, and the configuration, which is a measurement that's inherently local, because configuration is always a local thing; even if my mother used her machine she'd probably configure a different background than somebody else. Background color is a shitty example, because you probably don't need to measure that, but you get the idea.

System identity, by which I mean things like the hostname and the machine ID, should probably also be measured, so you can use it in policies and say: this secret shall only be released on that machine and on no other. I want these basic building blocks, the PCRs but also the policies generated from them, to be automatically managed by the OS, because this is not entirely trivial: every time you update any component of the boot, a boot loader, a UKI or so, you have to re-predict what the PCRs are going to be on the next boot and then act on that, because you still want the disk encryption to be released when the system next boots up, but under no other conditions. So there's extra work: when you update something, you need to predict the new PCRs and do something with the result. We'll talk about this later, hopefully, depending on time.

The result of all this is comprehensive code integrity, the initrd gap is closed, and we're ready for remote attestation, which is also a goal. Remote attestation is mostly interesting if you run more than one system; I'm pretty sure it's not that interesting for regular people, but we should at least be ready for it. And we'll have the building blocks in place so that people can use the TPM in whatever way they want, with ready-made pieces for defining policies on their own encrypted objects based on the state of the operating system, because right now they're kind of lost there. The result is somewhat democratic, because people can do this on their own laptop and get a high level of code integrity without necessarily getting their keys signed by Microsoft.

Any questions at this point?

Does the vision cover kexec? That's a very specific and good question. kexec is a big problem, also in the project I work on at Microsoft. I have ideas for how to deal with it, but frankly we have so many problems to fix before that one that I don't think it's going to be fixed anytime soon. For those who don't know, kexec is where you boot one operating system and then, while it's running, decide to run another one, usually a newer version, so you execute the new kernel directly. But you didn't reboot, so the TPM didn't get reset, so the PCRs still contain all the measurements from the first operating system; the second operating system's measurements just get added on top, and all your policies fall flat because they were predicted assuming you started from zero. So it creates a real problem, but I think we can deal with it, for example with a handover of secrets predicted at the moment you're about to kexec. But let's not go into that; it's highly specific, and we have way too much material before we get to kexec. Any other questions at this point? Next question.
From my understanding, if you have a computer for which you've predicted all the values, and I take that drive and put it into another machine, say in an enterprise where I bought 100 of these laptops, is there some kind of unique seed per machine, or would it go "oh, this is functionally the same machine, same device tree, same hardware, I'm going to unlock"?

The TPM contains a seed key that is specific to that TPM, so no, you cannot unlock on machine B an encryption key that you prepared for machine A; unless the two TPMs share the same seed keys, but then everything is out of control anyway and you don't have a TPM, you have bullshit on your hands.

Okay, let's continue. To make this all reality, well, I'm the systemd guy, so what I'm talking about is all systemd stuff. We added various components, and interestingly, different distributions adopted different parts of this big tool set first. I think at this point there are very few distributions that have adopted all of them, but there's at least one distribution that adopted each of them individually.

First, systemd-boot. We call it a boot loader, but it's really a boot menu: it's just a UEFI program that lets you select from a set of kernels and then chain-loads the one you pick. It doesn't do anything fancy; it has no understanding of how to load a kernel into memory and prepare it, it doesn't do cryptography or anything like that, it's just a dumb menu that starts other things. But it has nice properties, because it takes inspiration from how Linux does drop-in directories: with RPM and dpkg there's this established pattern that you extend other packages via drop-in files and directories. We took that idea and said: new boot menu items are simply files that you drop into a directory. As you install a new kernel, you drop one file into a directory and one new menu item shows up. This is inherently different from how GRUB works, where you have boot scripts that need to be regenerated based on whatever is found; this is much, much simpler, because there's one file per kernel, it gets found, and that's a boot menu item, there you go.
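To illustrate the drop-in idea (this is not systemd-boot's actual code): every entry file and every UKI dropped into the ESP simply is a menu item; nothing needs to be generated. The paths follow the Boot Loader Specification; the ESP mount point is an assumption.

```python
from pathlib import Path

ESP = Path("/boot")  # or /efi, wherever the EFI System Partition is mounted

entries = sorted(ESP.glob("loader/entries/*.conf"))  # Type #1 entries: one file per kernel
ukis = sorted(ESP.glob("EFI/Linux/*.efi"))           # Type #2 entries: UKIs dropped in as-is

for item in entries + ukis:
    print("menu item:", item.name)
```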
Then there's systemd-stub. systemd-stub is a UEFI boot stub: a little UEFI program that gets glued in front of a Linux kernel, runs in UEFI mode, does a couple of preparatory steps and then jumps into the actual kernel proper. These preparatory steps we'll discuss a little later, but they are measurements, and finding certain sidecars if you want them. So in my preferred model, where you use all these components, the boot process is basically: the firmware invokes systemd-boot; in systemd-boot you pick a kernel, or the newest one is picked automatically; that hands control to the stub inside the kernel image; the stub does a couple of things and hands control to the kernel inside it, which already has the initrd loaded; and then you transition into the initrd. So much for the boot path.

ukify, or however we pronounce it, we haven't really agreed on a pronunciation yet, is a tool for building UKIs: it takes the various components, glues them together, can sign them for Secure Boot, can do the PCR predictions, and spits out one EFI binary which you then drop into your ESP. There's a tool called systemd-measure; by now you probably don't have to interact with it anymore because ukify does it for you. All it does is the PCR prediction step for the stuff contained in a UKI: you run it and it tells you "if you boot this UKI, PCR 11 is going to have this value", and you can then use that for policy. But usually you won't interact with it directly; ukify is probably the tool you should be using, and it calls systemd-measure in the background so you don't have to bother.

There's kernel-install in systemd. It used to be a shell script, but nowadays it's a proper program. Fedora has been using it for a while, other distributions are catching up, I guess. The idea is basically that the package manager drops its files into /usr, and /usr is the package manager's territory; kernel-install then takes the kernel from there and copies it into the ESP to make the system bootable. The OS vendor's resources stay in /usr, managed by the package manager, while the ESP is a shared location, not owned by the OS vendor but by the system, if you will, where operating systems merely get the privilege of dropping things in, and kernel-install is what does that. The reason you want something smarter than cp is that you usually want to do a couple of extra things at that point, create things, run a couple of other tools. We even have support for generating the UKI at that step: you install a traditional kernel on the system, but locally it gets converted into a UKI as you go, signed and so on, so you can keep the old distribution workflow of locally generated initrds in place, but you end up in the new world with a UKI that is signed by your local key automatically, without you even thinking about it.

Other components: there is mkosi-initrd, but, yes, a question?

On the previous slide you mentioned systemd-boot and that the stub measures the UKI. What measures systemd-stub itself?

The firmware. The things I'm talking about, systemd-boot, systemd-stub, are ultimately UEFI binaries, and the firmware measures everything, so there's a full chain: the firmware does that part. And actually, because systemd-stub is just glued in front of the kernel to make the UKI, which is one PE binary, the stuff inside the UKI is already measured anyway by the firmware. The reason we measure the contents a second time ourselves, which sounds redundant, is simply that there are multiple PCRs and we want some separation. There's one PCR, number nine I think, into which the firmware measures everything: stuff that is specific to the local machine as well as the stuff that we, the OS vendor, or distributions, whatever you want to call them, control, all into the same PCR, and that mixture makes the value unpredictable. So we measure the OS-vendor stuff a second time into another PCR, and that's the one we bind the policy to. That's why you have the double measurement: it's two different PCRs.
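Since a UKI is just a PE binary with named sections, any generic PE parser can show what ukify glued together. This is a hedged sketch using the third-party pefile module; the UKI path is hypothetical, and you'd expect to see sections like .linux, .initrd, .cmdline and .osrel alongside the usual PE sections.

```python
import pefile  # third-party: pip install pefile

pe = pefile.PE("/boot/EFI/Linux/fedora-6.7.efi")  # hypothetical UKI path
for section in pe.sections:
    name = section.Name.rstrip(b"\0").decode(errors="replace")
    print(f"{name:<10} {section.Misc_VirtualSize:>10} bytes")
```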
So, mkosi: there's going to be another talk about it. It's basically a tool for building predictable, reproducible initrds from generic Linux distributions and making them ready for use in UKIs. systemd-cryptsetup is basically a wrapper around libcryptsetup that does the integrations, TPMs, FIDO and those kinds of things, plus policy management. systemd-cryptenroll is the other side: it lets you enroll the TPM or a FIDO token locally.

systemd-creds is something we'll hopefully talk about a bit more later. If you have a vendor-built UKI, you might still want to be able to parameterize it, right? There's a reason why traditional initrd generators mix code from the OS plus local configuration into one CPIO initrd image: people want to parameterize things. But parameterization is problematic, because it means things are not predictable anymore, and it also needs to be authenticated again, which is what we want to get to. The concept we came up with to address this is called systemd credentials. Credentials are ultimately a way to pass secrets into systemd services; originally they had nothing to do with the boot process. The cloud people love passing secrets in environment variables, and I think that's a terrible idea because environment variables get inherited down the process tree; credentials are supposed to be something better in that regard. One of the nice things about credentials is that they can be encrypted: you can encrypt them, bind them to the TPM and to local policy, and so on. That's extremely useful, because it means you can put these credentials on untrusted territory, meaning the UEFI ESP, which has no authentication itself; it's an unprotected VFAT file system where the rule is that anything you read from it must be authenticated before you use it. So you can drop these credentials in there and be reasonably sure their contents cannot be read.

What's the use case for something like that? For example, if you have a UKI with an initrd and you actually want to open it up so you can log into the initrd with a root password to debug things, you can stick that password into a systemd credential and put it in the ESP next to the UKI; how that works exactly we'll hopefully still find time to look at. And you can be sure that, because it's bound to the local TPM, the root password is not accessible to anything but that specific system. So systemd-creds is an approach for local parameterization. It's optional: I'd assume that in most consumer setups you'd never use it, but it needs to exist because some people want something like this, and there's no restriction on what you encode with it; it could also be, I don't know, iSCSI server data, or HTTPS X.509 certificates, or something like that.
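As a hedged example of that debug-password use case: a credential can be produced with systemd-creds and sealed to the local TPM before being dropped next to the UKI. The file names and the credential name here are made up for illustration.

```python
import subprocess

# Encrypt a secret so that only this machine's TPM can unseal it.
subprocess.run(
    ["systemd-creds", "encrypt",
     "--with-key=tpm2",              # seal against the local TPM 2.0
     "--name=debug.root-password",   # hypothetical credential name
     "secret.txt",                   # plaintext input
     "debug.root-password.cred"],    # encrypted output, to be copied next to the UKI in the ESP
    check=True,
)
```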
Another component is systemd-sysext. If you have predictable initrds, they of course come by default with a very clearly defined set of kernel modules built in. That is restrictive, because people nowadays have things like NVIDIA drivers, which are hundreds of megabytes, and if you want the system to work well with all current consumer hardware, you end up with a massive initrd. That might be something people want to avoid. So on one hand we push everybody towards "put everything in one file and the world will be a better place", but on the other hand we know this is probably not doable for all environments, because these files can get massively large. They work perfectly if you know your system: if you only focus on Azure cloud stuff, for example, you know exactly which drivers you need, you can build a tiny UKI, it's entirely generic for Azure, you could probably even cover multiple clouds in one UKI and it would still be small, all great. But once you get into the wide world where all kinds of hardware exists, it might be too limiting.

So we thought about this. The sysext concept, system extensions, was originally created with a different use case in mind, mostly focused on the host system, but we can use it just as well for modularizing the initrd to some degree. A sysext is basically a disk image, a GPT disk image, that contains a regular Linux file system, usually something like squashfs or EROFS, plus a signature for the verity partition. dm-verity, for those who don't know, is a kernel mechanism for adding integrity protection to immutable file systems; Chromebooks were the first users of it back in the day, it's old by now. It basically means that on every sector access of the file system you verify that the data is actually authentic. It's a fantastic technology, and we can use it to have disk images that, when you enable them, are overlaid on top of /usr. So suddenly you get a certain level of modularity where the base image has /usr populated with lots of stuff, but you can add more by adding a couple of sysexts to the system, which just get overlaid. The overlaying is basically overlayfs; it's really nice because it's atomic, it's cheap to do, and there's ultimately nothing new about it, it's just regular GPT disk images.

confext is the same idea, but for overlaying things on top of /etc instead of /usr, with all the same integrity and cryptography. It's really nice because, in contrast to credentials, which focus on individual bits of secrets, confexts focus on combinations of things: you can drop 55 configuration files into one confext, and that confext is either applied or not applied, never half applied; either all of these files appear in /etc or none of them, so they can't be used out of context. Honestly, confext in my view is the perfect configuration management tool, and everybody should just use it and stop using all the weird Ansible and Chef things, because those don't have these nice security and atomicity properties, and the security and atomicity properties are just awesome.
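A toy illustration of the overlay idea behind sysexts and confexts, assuming nothing about the real implementation (which is overlayfs in the kernel, applied atomically to /usr or /etc): a path resolves to the extension's copy if the extension provides it, otherwise to the base.

```python
base = {"/usr/bin/vi": "from the base image",
        "/usr/lib/libfoo.so": "from the base image"}
extension = {"/usr/bin/nvidia-smi": "from the NVIDIA sysext"}  # hypothetical extension content

def resolve(path: str) -> str:
    merged = {**base, **extension}   # the extension layer wins; it is applied all-or-nothing
    return merged.get(path, "<no such file>")

print(resolve("/usr/bin/nvidia-smi"))  # provided by the extension
print(resolve("/usr/bin/vi"))          # still from the base image
```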
Another component: systemd-pcrlock. I talked a lot about the predictability of the PCRs. The way you actually lock disk secrets to PCRs is that you tell the TPM: this PCR has to have that value, that PCR has to have that value, and only if that's all the case will you release the encrypted secret to the OS so that full disk encryption can work. But you need some infrastructure to do the prediction for that, and systemd-pcrlock is the infrastructure we added to do it. Basically, it manages a set of components that you assume are part of the boot, does some magic to figure out whether that actually matches reality so far, and then calculates a TPM policy, as it's called, from that. The policy basically says: if you use this to lock down secrets, then you have to have this firmware component in this version, this boot loader in this version, this UKI in this version, and so on for the other components that might be part of the boot. And it allows alternatives, because usually when you update a kernel you don't want the new kernel to be the only one allowed to boot; you still want to allow the old, preceding kernel to boot. You want this concept of alternative options for every step. Firmware updates are the same thing: if you prepare a firmware update and it fails, you'll boot up with the old firmware in place, and if your policy says "no way", you have a problem. So you always need this kind of alternatives system, and that's what systemd-pcrlock does.

All the other operating systems, Windows, Chromebooks, have prediction engines like this. We have the luxury of arriving 15 years later than everybody else, so we can rely on newer types of TPM functionality, because we get to start from zero now instead of having to stay compatible with the original TPM 2.0 approaches, which are really old by now. So we can do nicer things: we can actually store these policies in the TPM itself. The traditional way BitLocker on Windows does it, for example, is to store these policies in the BitLocker superblock on disk. Storing this stuff in the TPM is much nicer, because it means you can have 500 different disks and, when you redo your PCR predictions, you don't have to touch them: you don't have to go through every single disk and rewrite its superblock, it's entirely sufficient to store a slightly different value in the TPM. That's a fundamental improvement over what Windows can do, and we have it because we're so late to the party.
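Here is a small sketch of that "alternatives" idea, under the same hash-chaining model as the earlier examples: every boot stage may have more than one acceptable component, so the policy has to admit every PCR value reachable by picking one alternative per stage. The component names are invented.

```python
import hashlib
from itertools import product

def extend(pcr, blob):
    return hashlib.sha256(pcr + hashlib.sha256(blob).digest()).digest()

stages = [
    [b"firmware v1", b"firmware v2"],   # a pending firmware update may or may not apply
    [b"boot loader"],
    [b"uki 6.6", b"uki 6.7"],           # the previous kernel must remain bootable
]

acceptable = set()
for combo in product(*stages):
    pcr = bytes(32)
    for blob in combo:
        pcr = extend(pcr, blob)
    acceptable.add(pcr.hex())

print(len(acceptable), "acceptable PCR end values")  # 2 * 1 * 2 = 4 here
```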
Any questions at this point? I've only got about 10 minutes left, so if you have questions, this is the time to start asking. If not, I'll continue with parameterization and modularization, which we actually mostly covered already. No questions, okay.

So, I mentioned already that pre-built UKIs and initrds are problematic, because they are identical everywhere, which makes them large, and you can't really parameterize them anymore. So there is optional parameterization of UKIs that breaks up the fact that they are one big thing. One way is the systemd credentials I already mentioned, systemd-creds: encrypted, individual bits of information. Then there are confexts, the overlay on /etc, which are combinations of configuration. And the third one, which I haven't talked about yet, but there was a talk yesterday in the VM mini-conf about it, is kernel command line add-ons. One of the fundamental ways you configure a Linux system is by making additions to the kernel command line. Now, in everything I've been describing, the idea is that you don't get to do that, because the kernel command line is the most powerful thing in the world; with it you can, say, point init= at whatever you want and do anything. So we lock it down: if you're in Secure Boot mode and use this kind of setup, you don't get to edit it, because the security policy doesn't allow it. That of course doesn't fly with everybody; people hate it. They want to be able to do this, but with controls on it. One of the things we came up with, well, this guy over there came up with, is kernel command line add-ons. An add-on is basically a UKI where you leave out the kernel, the initrd and everything else, and just put a kernel command line in it. So you have a UEFI PE binary that looks exactly like any other UEFI PE binary, but you can't actually boot it because it doesn't contain any code; what it contains is a kernel command line. Why would you do such a thing? Because you can authenticate and measure them like any other binary that UEFI deals with. Or rather, not you, but the firmware will do it for you: you just tell the firmware "I'm going to work with this binary now, please load and authenticate it", and it does that for you, measuring and all, in the background, and you don't have to care. Because after all, sd-boot and the like are just a stupid boot menu with no understanding of loading or authenticating anything, and that's how it should be: we want our boot path to be stupid, and not to replicate all the authentication over and over again, like shim and those things do. So add-ons are a way to sign a little kernel command line that extends the one built into the UKI, so you can have one UKI and a couple of these add-ons that extend it, with proper authentication.

Modularization I mentioned already with systemd-sysext; because of NVIDIA drivers in particular, which are massive, plus firmware blobs, we have to do something. So we have these things: add-ons, system extensions, credentials, and config extensions, confexts. We call them sidecars, because you have the unified kernel, but then it's not quite so unified anymore; you have these things next to it. How do you manage them? The general idea is to extend the drop-in concept: you have the UKI, and next to it you put a directory where all these add-ons go. How does it actually look? In the ESP, you put the UKI in the directory EFI/Linux, and next to it a sub-directory named exactly like the UKI with the suffix .extra.d. In there you put .cred files for the credentials, .confext.raw, which is the suffix we picked for confexts, .sysext.raw, and .addon.efi for the PE add-ons. So it's all relatively simple, right?
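Purely as an illustration of that layout (the UKI name is hypothetical, and the suffixes are the ones just mentioned), this is roughly what the stub has to look for next to the UKI:

```python
from pathlib import Path

uki = Path("/boot/EFI/Linux/fedora-6.7.efi")   # hypothetical UKI in the ESP
extra = uki.with_name(uki.name + ".extra.d")   # e.g. fedora-6.7.efi.extra.d/

for pattern in ("*.cred", "*.confext.raw", "*.sysext.raw", "*.addon.efi"):
    for sidecar in sorted(extra.glob(pattern)):
        print("sidecar:", sidecar.name)
```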
You lose some of the extreme sexiness of the approach, because updates and such are no longer a single file, but that's on you, I guess, if you actually make use of this functionality. It's all optional: if you know your hardware, if you know the environment you want to run your stuff in, don't bother; just focus on the UKI, one kernel, everything simple and robust and idiot-proof. We've got about five minutes left, so let's focus on questions.

Alright, so taking the scenario you just mentioned, where you don't know what the hardware is: what's your vision for how all these sidecars get selected and put in place? I imagine you don't want, say, an RPM distro just dumping stuff in there, so what should we do?

That's a really good question, and there are actually two open items about this in the systemd tracker. In udev, in systemd, we already have a concept for automatically determining which kernel drivers to load on which machine: it's called modalias. For PCI and USB devices, the vendor and product IDs are turned into a string, and a mapping database maps those strings to the actual kernel module to load; nowadays there are modalias strings for SMBIOS and all kinds of things too. So we already have this infrastructure: one database takes these strings as input and gives you kernel module information as output, and there's another database, hwdb, where you use these strings as input and get udev properties as output. To me, that's exactly what you should use. A distribution that figures out how to split things up, say one sysext for NVIDIA drivers, one for AMD drivers and so on, would maintain this in hwdb, matching against vendor and product IDs and specifying the extension. And then we should have some tool that helps you figure this out and turns it into the appropriate package-manager invocations if you're an RPM-based distribution, or something equivalent; that's distribution material. But I think just using the modalias machinery is the perfect solution for this; it solves exactly that problem, except that now it's not a kernel module you pick up but a sysext.
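A hedged sketch of that modalias approach: hardware exposes modalias strings in sysfs, and a distribution-maintained mapping from modalias patterns to sysext names (in practice something like this would live in hwdb) decides which extensions a given machine actually needs. The patterns and extension names here are invented.

```python
from fnmatch import fnmatch
from pathlib import Path

# Hypothetical mapping; a distribution would maintain something like this centrally.
SYSEXT_FOR_PATTERN = {
    "pci:v000010DE*": "nvidia-drivers",  # 10DE = NVIDIA's PCI vendor ID
    "pci:v00001002*": "amd-drivers",     # 1002 = AMD's PCI vendor ID
}

needed = set()
for modalias_file in Path("/sys/bus/pci/devices").glob("*/modalias"):
    alias = modalias_file.read_text().strip()
    for pattern, sysext in SYSEXT_FOR_PATTERN.items():
        if fnmatch(alias, pattern):
            needed.add(sysext)

print("system extensions to install:", sorted(needed) or "none")
```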
Okay, just so I understand: would systemd-boot be able to parse the add-ons and show you a menu for them? How can you choose between add-ons, since I assume an add-on could carry multiple boot options?

The whole command line handling still works; we'll probably have more later that hooks this up with the menu. But right now, the way it works is you drop in one kernel, you put the add-ons next to it, and it's not sd-boot that has any understanding of this: sd-boot only finds the main UKI, turns it into an entry and eventually boots it. It's sd-stub, that early code glued in front of the UKI, that then sees "okay, I got invoked; which directory was I invoked from; does it have the sidecar sub-directory" and loads everything in there. So right now, you pick the UKI, and that pins all the stuff sitting next to it.

So what you're asking for, basically, is that this shows up in the boot menu. We've been discussing that for a while and everybody agrees we should do it; we just haven't done it yet, and we don't know exactly how it will look. But the idea is that sooner or later we want to be able to embed not a single kernel command line in your UKI but a choice of them, with one being the default if nobody picks anything. That would then mean that when sd-boot finds such a UKI in the directory, it generates one menu item per kernel command line, so you have one UKI from which you can select, say, the factory-reset choice, the debug choice or the regular choice. Everybody agrees that's the way to go; nobody has done it yet; it's really high on my to-do list.

Any last question? Do we still have a minute or something?

Are these system extensions and credential extensions only useful if you haven't enrolled your own machine owner key? Obviously these are going to be signed upstream, but if you've enrolled your own machine owner key, isn't the better approach to just build your own UKIs locally and take advantage of Secure Boot there, so that you have an authenticated initrd anyway?

I'm not sure I got the full question, but let me answer what I can; it's about the machine owner key, the shim thing. All the components I just described have their individual ways of being authenticated. The add-ons are PE UEFI binaries; okay, my time's over, but let me finish answering this one. So the add-ons are authenticated by Secure Boot means, and that also means shim, so that's where the MOK comes into play. The others are preferably authenticated via the kernel keyring: we ask the kernel keyring to authenticate them. Now, populating the kernel keyring is a mess. You can do it via the MOK stuff, that works, but I think it's a mess that it has to go that way. Ideally I'd have a way to upload a couple of additional keys from user space early during boot, your local ones, and then blow a fuse so that nobody can do it later, because that would be the democratic thing: I take a Fedora kernel, in the early boot phase I install an additional key, and afterwards nobody else can. For me, that's the perfect security. Well, we don't live in that world. But I added a concept where we can do the authentication in user space instead; depending on security policy the kernel might say no to that, but with the kernels as they're shipped today it's allowed. Anyway, so, yeah. Thank you very much.

Thank you very much.