[00:00.000 --> 00:16.880] All right. Looks like we can begin. So, welcome. And I will tell you about my project, which [00:16.880 --> 00:24.080] is called Chimera Linux. But first, let me introduce myself a bit. I'm a software developer [00:24.080 --> 00:30.080] from the Czech Republic. I've been contributing to open-source software since 2007. And currently, [00:30.080 --> 00:37.040] I'm on a break from work, so I'm kind of working on the distro full-time. When I'm not on a break [00:37.040 --> 00:44.560] from work, I work in the WebKit team at Igalia. Previously, I also used to work for Samsung [00:44.560 --> 00:48.880] in the open-source group, where I worked on the Enlightenment Foundation Libraries [00:48.880 --> 00:57.680] and the window manager. Since 2009, I've been using FreeBSD, including on the desktop for about 10 years. [00:58.320 --> 01:06.000] But I've not been using it on the desktop since about 2018, because I've been mostly using [01:06.000 --> 01:11.200] Power architecture computers these days, which FreeBSD doesn't have the greatest support for. [01:11.200 --> 01:18.480] So, for example, my GPU wouldn't work. I'm also a former developer of the Void Linux distribution, [01:18.480 --> 01:25.760] which has served as a huge inspiration for this project, especially in the design of the packaging [01:25.760 --> 01:32.400] system. And I do all sorts of things. Besides distribution development, I also do, [01:32.400 --> 01:39.120] for example, game development and compiler stuff. I did some kernel bits as well. [01:40.960 --> 01:47.600] But now, what's Chimera Linux? It's a new Linux distribution which I started in 2021. And it's [01:47.600 --> 01:54.960] a general-purpose distribution created from scratch. It utilizes core tools from FreeBSD, [01:54.960 --> 02:01.360] which is one of the big differences from standard distributions, which use GNU tools for this, [02:01.360 --> 02:07.440] or busybox, for example.
It uses the LLVM toolchain to compile all the packages. As a matter [02:07.440 --> 02:14.720] of fact, there's currently no GCC in the distribution, other than for some variants of [02:14.720 --> 02:21.200] U-Boot for specific ARM devices. It uses the musl libc, and it's a rolling-release [02:21.200 --> 02:27.520] distribution, so there are no releases; it just updates continuously. And it's also highly [02:27.520 --> 02:35.280] portable to many architectures. Right now, we are supporting AArch64 and little-endian POWER. Soon [02:35.280 --> 02:46.000] there will be big-endian POWER and x86_64, as well as complete full support for 64-bit RISC-V. [02:46.000 --> 02:53.680] I started this project in early-to-mid 2021. And it started with cbuild, which is sort of a meta-build [02:53.680 --> 03:00.720] system for packages. You create your packaging templates, and these basically describe the [03:00.720 --> 03:07.760] package and how to build it, and cbuild builds it. I was a Void Linux developer at the time, [03:07.760 --> 03:13.760] and I started cbuild as a way to investigate if I could fix many of the shortcomings of Void Linux's [03:14.320 --> 03:20.800] xbps-src system. So I created a quick distribution around cbuild, which consisted of GCC [03:20.800 --> 03:26.560] and the GNU userland, as well as the XBPS package manager, which Void uses. At this point, [03:26.560 --> 03:31.840] it was only about 50 packaging templates. So it was very tiny. It couldn't boot, definitely, [03:31.840 --> 03:40.000] because it had no kernel and no bootloader or init system, even, or anything. So it was just [03:40.000 --> 03:45.200] like a little container, which was capable of building itself when hosted on another distribution. [03:45.920 --> 03:52.240] And as I said, I was trying to fix many of the issues, and the main focuses of cbuild have been [03:52.240 --> 04:01.600] performance as well as correctness. This was when I first managed to make Chimera boot in a VM.
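To illustrate the idea of a template-driven meta-build system (this is just a toy sketch of the concept, not cbuild's actual code, and all names in it are made up): the template is pure metadata plus a build style, and the build system supplies the imperative logic.

```python
# Toy model of a template-driven builder (illustrative only): the template
# is declarative metadata, and the build system maps the named build style
# onto concrete commands.
BUILD_STYLES = {
    "configure": ["./configure --prefix=/usr", "make", "make install"],
    "meson": ["meson setup build", "ninja -C build", "ninja -C build install"],
}

def plan_build(template):
    """Expand a declarative template into a concrete list of build steps."""
    steps = list(BUILD_STYLES[template["build_style"]])
    # hooks let a template inject a step without becoming fully imperative
    return template.get("pre_configure", []) + steps

doom = {
    "pkgname": "doom",
    "build_style": "configure",
    "pre_configure": ["autoreconf -if"],  # hypothetical hook
}
print(plan_build(doom))
```

The point is that hundreds of templates stay small and uniform, while the logic for each build system lives in exactly one place.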
[04:04.480 --> 04:11.120] Shortly after those 50 packages, the switch to LLVM and the removal of GCC followed, as well as the [04:11.120 --> 04:18.240] switch to FreeBSD tools, the removal of all the GNU stuff and so on, as well as a gradual expansion [04:18.240 --> 04:25.040] of the package set. I've been iteratively enhancing the distribution ever since, until [04:25.040 --> 04:32.160] it got to the current state. In late 2021, it was possible to boot the system, and it was capable [04:32.160 --> 04:38.320] of bootstrapping itself. In early 2022, there was a full GNOME desktop already. [04:40.000 --> 04:45.200] This was when I got a Wayland compositor running, and of course, everything needs to be able to [04:45.200 --> 04:52.320] run DOOM, so we got DOOM working. There's a terminal and some other basic stuff, but this was, [04:52.320 --> 05:03.440] I believe, around late 2021. I had a talk about the distro at FOSDEM 2022 too, and many things happened [05:03.440 --> 05:11.920] during 2022. Last year I did the talk as a sort of chronological thing. I'm not going to do that [05:11.920 --> 05:16.320] this year, because there have been too many things and I couldn't fit it into a 50-minute slot. [05:17.840 --> 05:23.920] A huge focus in the last year has been on security hardening and on the development of [05:25.200 --> 05:30.720] different new solutions for things which we've been missing. I'm currently aiming for an alpha [05:30.720 --> 05:36.960] release, which will be sort of an early adopter release, where things will mostly work on desktop, [05:36.960 --> 05:45.600] in late February or early March. I plan to make it coincide with one of the [05:45.600 --> 05:52.000] betas of FreeBSD 13.2 in order to be able to rebase the tooling. [05:53.600 --> 05:59.600] Now, for some motivations: why did I create this project? I've been unhappy with existing systems.
[05:59.600 --> 06:05.280] There are many great things that existing systems have, but there's always at least one thing which [06:05.280 --> 06:12.160] has been annoying me. I sort of wanted to create a thing which would actually suit me in every single [06:12.160 --> 06:19.120] way. I wanted to make a well-rounded, practical operating system that wouldn't be just a toy, [06:19.120 --> 06:25.120] but something people could actually use. At the same time, I would like to improve software in [06:25.120 --> 06:32.240] general, mainly in terms of portability as well as security, when it comes to things like [06:32.240 --> 06:39.440] the usage of sanitizers and so on. I would like to make full use of LLVM: not just replace the GCC [06:39.440 --> 06:46.960] compiler, but actually utilize the unique strengths of LLVM, which include a great sanitizer [06:46.960 --> 06:54.400] infrastructure, things like ThinLTO, which GCC still doesn't have, and so on. Of course, [06:54.400 --> 07:01.760] proving that Linux doesn't have to be GNU/Linux is also a major thing. While doing all this, I wanted to have [07:01.760 --> 07:07.840] some fun, and some people on the internet said I couldn't do this. Of course, it's important to prove [07:07.840 --> 07:15.440] them wrong. I wanted to build a nice community which would be fun to hang around with, and make a [07:15.440 --> 07:23.440] good system for both myself and for other people. Now, for some general principles of the project: [07:24.160 --> 07:30.240] I strongly believe that projects which are basically centered around a single goal [07:30.240 --> 07:35.920] are eventually doomed to fail, because once you reach this goal, you have nothing else to do, [07:35.920 --> 07:48.000] but at the same time, it creates dogmas which you are not allowed to cross, [07:48.640 --> 07:55.680] and that really restricts the development.
On the other side of the problem, there's scope [07:55.680 --> 08:02.000] creep: if you have too many things to do and you keep expanding on it, eventually you get to the [08:02.000 --> 08:08.240] point where you never get anything done, so it's important to balance these things. I think opinionated [08:08.240 --> 08:14.880] development is overall a good thing, because it gives you a sense of direction, which is always [08:14.880 --> 08:21.200] nice to have. I think, obviously, the quality of the code matters, but the quality of the community matters [08:21.200 --> 08:29.760] even more. I think fun is good, so I would like to try to keep it that way and not get too [08:29.760 --> 08:36.960] technical in the process. I think free and open-source software projects are social spaces, and [08:36.960 --> 08:43.440] that's why, if you let toxic people into your community, it's eventually going to become a [08:43.440 --> 08:50.720] chore for everybody else. I try hard to keep them out, but at the same time, I try to make sure it [08:50.720 --> 08:55.600] does not get overly elitist, because it should be an open, inclusive project for everybody. [08:57.440 --> 09:04.320] As for technical principles, I try to make sure things are strict by default and try to avoid [09:04.320 --> 09:13.360] technical debt at all costs. There should usually be just one way to do things. That doesn't mean [09:13.360 --> 09:18.160] there only has to be one way, but more like a good default that people are supposed to follow, which [09:18.160 --> 09:26.880] is sort of intuitive and easy to follow. Things should remain as simple as possible, but not too [09:26.880 --> 09:35.120] simple. There are many people who overly focus on things like minimalist systems, and in the process [09:35.120 --> 09:41.760] they end up forgetting what's actually practical. I think security and hardening are also very important, [09:41.760 --> 09:50.320] and in many Linux distributions, they are sort of overlooked. That's another thing.
I think portability [09:50.320 --> 09:55.680] is also extremely important. There are many kinds of hardware, and people are using many different [09:55.680 --> 10:01.520] kinds of hardware. Of course, most people have their x86 computers, but there's more of it than [10:01.520 --> 10:07.680] you may think, and things like RISC-V are taking off, and there are POWER workstations, and [10:07.680 --> 10:15.840] there's ARM and so on, so it's good to support all these things. Now, good tooling is also very [10:15.840 --> 10:23.840] important, and related to that is self-sustainability. That basically means whatever infrastructure you [10:23.840 --> 10:30.080] have should be self-contained and easy to get going and easy to replicate on any new computer. [10:30.080 --> 10:38.000] Related to that is being able to bootstrap the system from source code. I think that's [10:38.000 --> 10:43.360] sort of a double-edged sword, because some people don't care about bootstrappability at all, [10:43.360 --> 10:50.400] and their things are massive binaries downloaded from the internet. On the other side of the coin, [10:50.400 --> 10:56.320] there are people who insist on complete bootstrappability from source code for everything, even [10:56.320 --> 11:06.000] if it involves doing completely cursed things. Have you ever seen how to bootstrap the Haskell [11:06.000 --> 11:11.760] compiler from source completely? If you want to do it, you have to go through [11:11.760 --> 11:17.760] an ancient, unmaintained Haskell compiler which only targets 32-bit x86 computers, and [11:17.760 --> 11:23.680] compile some stuff on that, and it's from like 2004, and then you have to iterate through [11:23.680 --> 11:29.600] newer versions. Eventually you get to GHC, and then you can cross-compile a partial distribution [11:29.600 --> 11:37.280] for your architecture and go from that, and eventually you reach your goal.
I think it's [11:37.280 --> 11:46.560] a means to an end, and it's important, but not that important. Another thing is that I've seen [11:46.560 --> 11:51.280] many things over the years, and if something is written in shell and it's a [11:51.280 --> 11:56.880] complicated program, it probably shouldn't be written in shell. It should be easy to do the right thing, [11:57.600 --> 12:02.720] but tooling should also make it difficult to do the bad thing, and kind of steer people towards [12:03.680 --> 12:10.480] doing what's right, and do so out of the box. Documentation is obviously also important, [12:10.480 --> 12:16.400] and many people avoid writing documentation. I understand them, because I'm also guilty of this [12:16.400 --> 12:23.200] in many cases, but we should strive for good documentation. There's also the question of [12:23.200 --> 12:30.720] systemd. I believe systemd is in many aspects not great, but it also brought a necessary change to [12:30.720 --> 12:36.960] Linux, and there are basically many people and distributions who just stick their head in [12:36.960 --> 12:45.520] the sand and avoid even considering that systemd might have brought some useful things, [12:45.520 --> 12:52.240] and that it might kind of be their fault, too, that it has become so widely adopted. So we [12:52.240 --> 12:59.120] should develop good solutions to counter whatever systemd has come up with, and basically always [12:59.120 --> 13:06.640] try to improve. Now let's take a look at how a BSD system is developed. Usually you have your [13:06.640 --> 13:13.280] entire system in a single tree and a single repository, typically SVN and so on, and you have [13:13.280 --> 13:19.040] lots of different components in this repository.
It's a complete system capable of booting, so if [13:19.040 --> 13:24.320] you invoke the central makefile and compile the system, you generally compile your kernel [13:24.320 --> 13:28.000] and compile your userland, and if you put it together, you will get a system which is capable [13:28.000 --> 13:34.800] of booting. Third-party software which is not required for the base system is distributed [13:34.800 --> 13:40.160] through some kind of ports system. Of course, this doesn't mean that there are no third-party components [13:40.160 --> 13:48.880] in the base of a BSD system, because I'm not aware of any BSD system which is developing a [13:48.880 --> 13:55.520] complete replacement for the toolchain, for example. You have your LLVM or whatever in base, [13:55.520 --> 14:02.720] and usually it has its own build system integrated with the existing makefiles, but it's a single tree. [14:03.840 --> 14:09.760] Now let's contrast it with a Linux distribution. A Linux distribution is a collection of [14:09.760 --> 14:15.680] software from many different parties, which are separate packages, and you have the Linux kernel [14:15.680 --> 14:20.320] as the base layer; that's always the case, otherwise it wouldn't be a Linux distribution. [14:20.320 --> 14:25.040] You have your userland tooling, which is often supplied by GNU, and you have the libc, which is [14:25.040 --> 14:30.880] also often supplied by GNU, glibc, and you have the toolchain to build all this, so that's also [14:30.880 --> 14:37.600] often GNU, because while Clang is used by some distributions, it's not too many of them. And you have [14:37.600 --> 14:43.920] the service manager and also some auxiliary tooling around the service manager, so that's [14:43.920 --> 14:52.960] often systemd nowadays.
This is tied together with a package manager, which handles installing [14:52.960 --> 14:58.560] and removing and so on. You usually have some of the components always, and [14:58.560 --> 15:04.400] then you can install or remove whatever you want. And Linux plus GCC plus glibc plus coreutils, [15:04.400 --> 15:10.080] findutils, diffutils, and so on makes GNU/Linux, or what is called GNU/Linux. [15:10.880 --> 15:17.440] Distributions exist to make sure that all these components work together and combine well, [15:17.440 --> 15:22.880] because many different distributions combine them in different ways, and they have different [15:22.880 --> 15:30.160] versions of these components, and they all have to play nice. So the Linux kernel has a rule of [15:30.160 --> 15:37.360] never breaking user space: if a new version of the Linux kernel results in a binary not working, [15:37.360 --> 15:45.600] it means it's a bug in the kernel, even if it was, for example, originally unintended behavior, [15:45.600 --> 15:53.200] so this can be kind of a pain. But let's get back to Chimera, starting with the toolchain. [15:53.200 --> 16:01.440] LLVM on Linux is pretty seamless nowadays, most of the time. You have it available on most Linux [16:01.440 --> 16:07.920] systems, but on most Linux systems it's sort of a different arrangement, because you do not have [16:07.920 --> 16:15.120] LLVM providing the runtime. GCC provides this, and it's called libgcc. It's mostly ABI-compatible [16:15.120 --> 16:23.680] with libunwind from LLVM, but it also includes some builtins which are provided via a [16:23.680 --> 16:30.480] separate library in LLVM. LLVM comes with its own runtime called compiler-rt, and this is used in [16:30.480 --> 16:37.920] Chimera instead of libgcc.
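Concretely, the same arrangement can be requested on most distros with stock Clang driver flags; these are standard upstream Clang options, and Chimera effectively makes them the defaults:

```sh
# Link with LLD, use compiler-rt instead of libgcc, and LLVM's libunwind
# instead of libgcc's unwinder (standard Clang driver flags):
clang -fuse-ld=lld --rtlib=compiler-rt --unwindlib=libunwind -o hello hello.c
```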
For the C library, we use musl, because it's a proven, good [16:37.920 --> 16:43.120] implementation of a C library which is used by several distributions already, and you can make [16:43.120 --> 16:50.880] most software work on it just fine, maybe with a few patches, but better than other libcs. [16:53.520 --> 17:01.440] When you have a GNU toolchain, you usually have GNU binutils to complement GCC, as well as elfutils [17:01.440 --> 17:08.000] to provide libelf. binutils provides things like the linker, because GCC does not come with its [17:08.000 --> 17:13.840] own linker. It also provides different tools which are used together with the compiler, things like the [17:14.640 --> 17:23.840] archiver and readelf and that kind of stuff. In Chimera, LLD from LLVM is used [17:23.840 --> 17:29.040] as the linker, and it's used everywhere. As for the other tooling which is provided by binutils, [17:29.040 --> 17:35.360] elftoolchain provides this tooling, and it's also used on FreeBSD to provide these tools. [17:35.360 --> 17:41.600] elftoolchain also provides a libelf implementation, which replaces the one provided by elfutils. [17:42.320 --> 17:48.960] libelf is used in many places; for example, the kernel requires it. LLVM also [17:48.960 --> 17:54.480] provides most of the tools which are provided by binutils. They have an llvm- prefix, so for example [17:54.480 --> 18:00.160] llvm-readelf. We do not use those in the core system most of the time. [18:00.160 --> 18:09.280] Now, to sort out the core userland: you have many GNU components, like coreutils, [18:09.280 --> 18:14.960] findutils, diffutils and so on, as well as non-GNU components. You also have util-linux, which is used by [18:14.960 --> 18:20.160] pretty much all distributions and provides sort of a mixture of tools for all sorts of stuff.
[18:21.600 --> 18:28.480] In existing non-GNU distros such as Alpine, you often have busybox, which is sort of [18:28.480 --> 18:35.840] a single binary which can be configured to include many different tools that are otherwise [18:35.840 --> 18:41.920] provided by coreutils and so on, as well as by util-linux. The main strength of busybox is that [18:41.920 --> 18:47.200] it's a single binary, so you can put it in embedded environments and have things [18:47.200 --> 18:52.640] mostly work. But the other side of the coin is that it's very spartan when it comes to [18:52.640 --> 18:58.960] functionality, and the code is also not very good. But the other alternatives are usually even worse [18:58.960 --> 19:07.280] in terms of available functionality. So the FreeBSD tools are the answer here, and that's what we've [19:07.280 --> 19:15.760] done. I found a third-party port of FreeBSD's tools called bsdutils. It was a sort of incomplete, [19:15.760 --> 19:23.040] experimental thing which was not quite ready for an actual system. So I helped complete it and [19:23.040 --> 19:30.560] reach parity with coreutils. I fixed many bugs which were created during porting in the process. [19:31.120 --> 19:36.480] I also ported many other tools to expand coverage, and the result is chimerautils, which [19:36.480 --> 19:44.960] the distro currently maintains, and it's sort of a single, easy-to-build package which includes [19:44.960 --> 19:52.320] all of the tooling you want. And this replaced not just the GNU tooling but also, for example, a portion [19:52.320 --> 19:59.440] of util-linux, which makes things much easier for the distribution, especially in terms of [19:59.440 --> 20:05.760] bootstrapping. For example, in Void Linux, in xbps-src, which is the build system [20:05.760 --> 20:11.200] similar to cbuild, you have a stripped-down version of util-linux in the base build container, [20:11.200 --> 20:18.480] and you need this because some of these tools are necessary.
But this creates a bootstrap problem, [20:18.480 --> 20:25.680] because when you build the full version of util-linux, you have many dependencies which you do not want [20:25.680 --> 20:31.600] during the bootstrapping of your system, things like udev, for example, which you [20:31.600 --> 20:36.080] really don't want to pull in. So it has a stripped-down version of util-linux for that, and then it [20:36.080 --> 20:41.920] has a full version which is built separately, and it's kind of a mess. With a single package [20:41.920 --> 20:47.200] for the userland, all of this can be avoided, and then only a partial build of util-linux is [20:47.200 --> 20:55.200] built if needed. chimerautils is lean enough for very small environments, things like initramfs or [20:55.200 --> 21:02.160] even embedded things, but at the same time it's fully featured enough to be used as interactive [21:02.160 --> 21:10.320] tooling, so it's a nice all-in-one thing. And of course, it helps break up the current monoculture [21:10.320 --> 21:18.320] of tooling, and it's easy to harden. For example, Clang's Control [21:18.320 --> 21:25.040] Flow Integrity hardening can be enabled on chimerautils very easily, and it just works. [21:25.040 --> 21:32.640] Now, to get the kernel sorted out. These are two photos: one is Chimera running on the MNT Reform laptop, [21:32.640 --> 21:40.800] and the other is it running on a Raspberry Pi 3. The kernel is mostly compatible with Clang these days, [21:40.800 --> 21:47.200] and some patches are needed to support the BSD utilities as well as the libelf from elftoolchain. [21:47.200 --> 21:53.040] I would like to eventually upstream these things and make sure things work out of the box. Until recently, [21:53.040 --> 22:00.320] there was an issue with the option to use Clang's internal assembler. It did not work on some [22:00.320 --> 22:07.360] architectures, notably 64-bit POWER, because of some legacy debugging format nonsense. [22:07.360 --> 22:13.520] So
So [22:07.360 --> 22:13.520] GNU Bin Utils was used for that until some time but nowadays it's not a problem and the Clang [22:13.520 --> 22:24.320] assembler just works for every architecture. CKMS, what is CKMS? These photos usually use [22:24.320 --> 22:31.280] DKMS which stands for Dynamic Kernel Module System to build out of three kernel modules and it's a [22:31.280 --> 22:38.240] massive 5k inline bash script and it has functionality which seemed like a good idea at the time and [22:38.240 --> 22:45.040] nobody uses it and it no longer seems like a good idea for example. DKMS can package kernel modules [22:45.040 --> 22:48.880] and you can distribute them and of course this doesn't work because every distro has its own [22:48.880 --> 22:55.600] kernel and it can result in in slight differences in ABI and so on so you cannot really do that. [22:55.600 --> 23:01.360] I created CKMS which stands for Karmira Kernel Module System and it's kind of similar to DKMS but [23:01.360 --> 23:08.240] it's much more lightweight, more robust, it's implemented in Python. It has privileged separation [23:08.240 --> 23:13.440] so when you have your package manager built a kernel module in a hook during installation and [23:13.440 --> 23:18.080] you run your package manager as root it will properly drop privileges so it does not run [23:18.080 --> 23:23.120] the whole compilation of the module as root which happens with the KMS in most setups. [23:23.120 --> 23:32.160] Now for the package manager that's an important thing in a distro. I considered the FreeBSD [23:32.800 --> 23:39.600] package manager at some point but it was not in quite the shape I would like for production. 
I [23:39.600 --> 23:47.120] did contribute back some patches to fix a bunch of things with muscle because that was the main [23:47.120 --> 23:54.720] thing which was really problematic and I got it working but there are things such as version [23:54.720 --> 24:03.120] expressions and the version string stuff which is a work in progress and it's quite obvious that [24:04.320 --> 24:11.920] it's mainly all geared towards FreeBSD series right now. So eventually I ended up investigating [24:11.920 --> 24:19.680] APK from Alpine Linux which ended up proving to be a great fit. For one it's lightweight but it's [24:19.680 --> 24:27.920] also fairly powerful and I really like its virtual package system. It handles things like shared [24:27.920 --> 24:33.760] libraries very seamlessly where shared libraries in packages are provided basically as virtual [24:33.760 --> 24:41.360] packages and this makes it easily searchable, easy for the solver and so on. I eventually [24:41.360 --> 24:47.440] transitioned to APK Tools version 3 which is the next generation of APK which is currently not [24:47.440 --> 24:54.240] used by Alpine and it does not have a stabilizer yet but it works great. The main difference in [24:54.240 --> 25:02.320] APK 3 is that it no longer uses star balls as packages. It has a new custom sort of structured [25:02.320 --> 25:08.880] format which should help with avoiding vulnerabilities in the package manager. [25:08.880 --> 25:15.440] By summer 2021 it was fully integrated in C-Built and it just worked. [25:17.200 --> 25:23.120] Service management is another big thing you need to boot Linux distribution. So many options were [25:23.120 --> 25:30.400] evaluated in the process, for example Runnit which is used by Void Linux, S6 which is sort of [25:30.400 --> 25:37.920] new kid on the block, OpenRC which is sort of classic and built on the same principles as classic [25:37.920 --> 25:47.040] RC systems. 
In the end, I chose Dinit, which is a new service manager. I chose it because [25:47.760 --> 25:58.240] it's both powerful and lean. It's implemented in modern C++, so it's also safer than most other [25:58.240 --> 26:05.120] service managers. Most importantly, it took me about one afternoon to get it fully working and [26:05.120 --> 26:12.880] take the system from not booting at all to booting completely. It's supervising, which [26:12.880 --> 26:21.360] means most daemons are supervised by the service manager by running in the foreground and being [26:21.360 --> 26:27.920] basically child processes of the service manager, but you can have background processes as well. [26:27.920 --> 26:34.720] That's less robust, so it should be avoided most of the time. It's dependency-based, so it can ensure [26:34.720 --> 26:42.640] that your services start in the correct order. It has support for things like one-shots, which help immensely [26:42.640 --> 26:48.000] during early boot, because most things you need to do during early boot are basically things you [26:48.000 --> 26:53.120] run once, and they do not have any sort of persistent process. So the early boot process is full of [26:53.120 --> 27:00.880] one-shots. For example, Void Linux with runit solves this by making these one-shots a bunch of [27:00.880 --> 27:06.960] sequential shell scripts which are run before the actual services are started, and it's not a [27:06.960 --> 27:12.800] great solution, because it's not very flexible. In any case, Dinit is a good base for a solid service [27:12.800 --> 27:19.920] infrastructure. We have a custom suite of core services for Linux written from scratch. It has [27:19.920 --> 27:27.520] full support for fine-grained targets. Basically, a target is a logical service which does not [27:27.520 --> 27:36.640] do anything by itself except act as some sort of sentinel.
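For illustration, Dinit describes each service with a small key-value file under /etc/dinit.d. The two files sketched below are hypothetical (the service names and paths are made up) and show a supervised daemon that depends on an early-boot one-shot:

```
# /etc/dinit.d/mydaemon  (hypothetical supervised service)
type = process
command = /usr/bin/mydaemon --foreground
depends-on = prepare-dirs

# /etc/dinit.d/prepare-dirs  (hypothetical one-shot; "scripted" in Dinit terms)
type = scripted
command = /usr/libexec/prepare-dirs.sh
```

A "process" service stays in the foreground as a child of Dinit so it can be supervised, while a "scripted" service just runs its command once and is then considered up.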
You can, for example, have a network [27:36.640 --> 27:44.400] target, and then you can have other things say "I want to start before this" [27:44.400 --> 27:52.480] or "I want to start after this". You can make sure that your services start only [27:52.480 --> 27:58.960] after the network is up, for example. It also has first-class support for user services, which is [27:58.960 --> 28:04.880] very important, and I'll get to that later. The eventual goal is to have all long-running [28:04.880 --> 28:10.400] processes be services, and there's also the matter of session tracking, which I'll describe in a bit. [28:11.440 --> 28:18.160] Now, this is a new project the distro came up with. It's called turnstile, and it's an answer to [28:18.160 --> 28:26.000] the logind part of systemd. Linux mostly uses systemd-logind for session tracking. What that [28:26.000 --> 28:33.600] does is basically know when a user has logged in, or when a user has logged in on another console, [28:34.160 --> 28:39.440] and it also knows when the user has logged out, and this can be used by, say, desktop environments in [28:39.440 --> 28:44.960] many different ways. There's elogind, which exists as a standalone version which is basically just [28:44.960 --> 28:50.640] ripped out of systemd, with the dependencies stubbed out, and it's sort of [28:50.640 --> 29:00.960] dirty and not great. This is done by basically running a daemon, which is called logind, and a [29:00.960 --> 29:07.920] module in the PAM infrastructure, which is obviously used for authentication, and the PAM module [29:07.920 --> 29:12.240] basically lets the daemon know when a new session has started and when a session has [29:12.240 --> 29:20.160] ended. There's also seat management, which elogind also does, but this is not widely used, because [29:20.160 --> 29:25.280] usually you only have one seat.
Seat management is used by desktop environments, especially things like [29:25.280 --> 29:32.800] Wayland compositors. With systemd, most importantly, logind also spawns a user instance of [29:32.800 --> 29:39.600] systemd, which acts just like a normal service manager, but it runs as your user, and it [29:39.600 --> 29:47.680] runs user services. elogind cannot do this, because it has no idea what init system [29:47.680 --> 29:52.320] or what user service manager you might be running. So this functionality is removed, and there's no [29:52.320 --> 29:59.920] way to access it. This is one of the reasons why I developed turnstile. It aims to eventually [29:59.920 --> 30:07.120] replace elogind, and it was originally created just to manage those user instances [30:07.120 --> 30:13.040] of Dinit. The issue, when running this in parallel with elogind, was that [30:13.760 --> 30:19.920] sometimes it needs to know something which elogind knows, but sometimes elogind also [30:19.920 --> 30:25.680] needs to know something the user service manager knows. This especially affects things like lingering: [30:25.680 --> 30:31.360] you can enable lingering for a specific user, which means that user's [30:31.360 --> 30:39.120] services will stay up even after they have fully logged out. elogind manages your runtime directory [30:39.120 --> 30:46.240] for you, which is used by many services, and upon logout it removes this runtime directory. If you [30:46.240 --> 30:51.760] still have some user services running and elogind has removed your runtime directory, then [30:52.640 --> 30:58.240] things go wrong. So it needs to be integrated, and I plan to eventually fully replace logind.
[30:58.240 --> 31:03.920] Unlike elogind, turnstile does not manage seats, because there's already a project called [31:03.920 --> 31:10.960] libseat, with seatd, which can do this satisfactorily. But libseat does not do session tracking, so [31:10.960 --> 31:17.040] they can be used together. And I plan to provide a library alongside the daemon. This library will [31:17.040 --> 31:22.560] provide an agnostic API. This API will have multiple backends: it will have a backend for logind, [31:22.560 --> 31:27.920] it will have a backend for turnstiled, as well as potentially other solutions, and then [31:27.920 --> 31:34.000] things like desktops will be able to use this and be actually portable, because [31:35.280 --> 31:40.640] right now, to have GNOME on FreeBSD, for example, it needs many patches to [31:42.000 --> 31:47.440] replace this functionality, and it's just not great. Having an agnostic API which is not [31:47.440 --> 31:52.560] provided by systemd would be a much nicer solution. Of course, I'll also have to convince [31:52.560 --> 32:00.640] upstreams to adopt it. One thing which we do with turnstile is manage the D-Bus session [32:00.640 --> 32:07.360] bus as a user service. This has an advantage, because you have a single session bus per user, [32:07.360 --> 32:14.000] just like it's done when you have systemd. Well, why have a single session bus? The session bus [32:15.040 --> 32:19.760] has a socket. This socket is somewhere on the file system, and this socket is used to talk to [32:19.760 --> 32:28.160] other things on the bus. The way to locate this session bus is provided via an environment variable, [32:28.160 --> 32:32.720] so if you have the environment variable in your environment, then things can use it to [32:32.720 --> 32:38.320] read the path and actually locate the socket.
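The variable in question is the standard D-Bus one, DBUS_SESSION_BUS_ADDRESS. With a per-user bus, there is one well-known socket per user, so the address looks something like this (uid 1000 shown as an example):

```sh
# One session bus socket per user, in the user's runtime directory:
export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"
```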
Traditionally you had the session bus started [32:38.320 --> 32:45.600] by, for example, your X11 script, xinitrc, which would run something like dbus-run-session [32:45.600 --> 32:52.480] or similar. That means the session bus was only available within your single graphical TTY. [32:53.600 --> 32:57.600] This is not great, because then when you switch console, log in there, and want to run [32:57.600 --> 33:01.920] something which needs to access the session bus, it doesn't know about it. systemd solves this. [33:01.920 --> 33:08.720] We also solve this, by running the session bus as a user service: when you first log in, it [33:08.720 --> 33:14.400] automatically spawns the session bus; when you last log out, it stops the session bus; and it's [33:14.400 --> 33:21.280] available on every single VT. This also has limitless potential for other user services. [33:21.280 --> 33:26.960] We can do things like D-Bus activation without having D-Bus spawn the services itself. [33:26.960 --> 33:33.280] It's currently also used for the sound server, for example, with PipeWire. Now let's move on to [33:33.280 --> 33:45.760] cbuild. cbuild is basically the build system for cports, as I said earlier. [33:45.760 --> 34:06.240] cbuild is written in Python and depends only on the standard library. [34:06.240 --> 34:14.000] This is what a template might look like. This is the template to build the Doom game. [34:15.680 --> 34:21.440] As you can see, it's mostly metadata. There's one hook in there, which runs autoreconf, as there [34:21.440 --> 34:30.000] is no other way to do this. There's a build style for configure scripts, which basically strips away [34:30.000 --> 34:36.480] all the non-declarative things you would otherwise need. How cbuild works is that it builds all [34:36.480 --> 34:42.640] the software in a simple container, called a build root in our terminology. It's a minimized [34:42.640 --> 34:49.360] Chimera system.
There are some packages which provide the baseline, and your build dependencies, [34:49.360 --> 34:54.640] which are specified by the template, are also installed into this container. This container is [34:54.640 --> 34:59.840] fully unprivileged, so you don't need to run anything as root, and it's fully sandboxed. This is [35:00.560 --> 35:07.760] done with Linux namespaces. The container is also read-only after the build dependencies are [35:07.760 --> 35:14.240] installed, which means no package build can actually change anything in the container [35:15.440 --> 35:21.840] other than in its own build directory. It also has no network access after the fetch-stage [35:21.840 --> 35:29.680] things are done, and it has no access to the outside system. Templates are also declarative, [35:29.680 --> 35:34.880] as I said, ideally just metadata, and it has fully transparent support for cross-compiling with [35:34.880 --> 35:39.600] most build systems, which means in most templates you don't need to do anything and it will be [35:39.600 --> 35:45.280] able to cross-compile without any additional effort. It has clean handling of common build [35:45.280 --> 35:53.680] systems, including configure scripts, Meson, CMake and so on, and it's strict. It has mandatory [35:53.680 --> 36:00.480] linting hooks for many things, and unit tests, where possible, will run out of the box. I strongly [36:00.480 --> 36:04.640] believe that being strict by default is good, because you can always make things looser if [36:04.640 --> 36:08.800] you need to; but if you have things loose by default and then you need to tighten them, [36:08.800 --> 36:13.600] and you have many hundreds to thousands of packages and you need to adjust every single one [36:13.600 --> 36:20.000] of them, it becomes effort which cannot be done, because it's just too much.
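The template slide itself isn't captured in the transcript; as a rough illustration, a cbuild-style template for a Doom port might look like the sketch below. All field values and the hook name are hypothetical, modeled loosely on the "mostly metadata plus one hook" description, not copied from Chimera's actual repository:

```python
# Hypothetical cports-style template (illustrative only).
pkgname = "chocolate-doom"
pkgver = "3.0.1"
pkgrel = 0
build_style = "gnu_configure"   # the declarative configure-script build style
hostmakedepends = ["automake", "pkgconf"]
makedepends = ["sdl2-devel", "libpng-devel"]
pkgdesc = "Historically accurate Doom source port"
maintainer = "Example Maintainer <maint@example.org>"
license = "GPL-2.0-or-later"
url = "https://www.chocolate-doom.org"
source = f"{url}/downloads/{pkgver}/chocolate-doom-{pkgver}.tar.gz"
sha256 = "0000000000000000000000000000000000000000000000000000000000000000"

def pre_configure(self):
    # the one imperative hook: regenerate the configure script
    self.do("autoreconf", "-if")
```

Everything except the single hook is declarative metadata; the build style supplies the actual configure/make logic.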
It has support for [36:20.000 --> 36:26.880] things like bulk builds, where it can properly order things in the batch to build without, [36:27.840 --> 36:33.760] you know, having dependency ordering issues. It can check upstream projects for new versions, and [36:33.760 --> 36:41.760] so on. Build flags: all the basic hardening stuff which Linux distros typically use, [36:41.760 --> 36:49.680] like FORTIFY_SOURCE, position-independent executables, stack protector and so on, is used. On top of that, we use [36:49.680 --> 36:54.640] system-wide LTO for practically every package. I think there are only about 30 templates out of [36:54.640 --> 37:00.560] close to a thousand which have LTO disabled for different reasons. In some cases it could be [37:00.560 --> 37:08.320] enabled, but it's not worth it. We do utilize a system-wide subset of UndefinedBehaviorSanitizer. [37:08.320 --> 37:15.280] It deals with things like trapping signed integer overflows in order to avoid [37:16.080 --> 37:22.480] potential problems. Also, CFI, or control flow integrity, is used for many packages. [37:23.040 --> 37:27.120] It cannot be used for all, because it breaks on a lot of stuff; it's very strict when it comes [37:27.120 --> 37:31.760] to typing of functions. But it's still used on a couple hundred packages. [37:31.760 --> 37:42.160] The allocator: we now use the Scudo allocator from LLVM, which is also used, for example, on [37:42.160 --> 37:48.640] Google's Android. It replaces the allocator in musl. This is not because of hardening, because [37:48.640 --> 37:54.720] the musl allocator is already hardened, though Scudo is also a hardened allocator. But it has significantly [37:54.720 --> 37:59.840] better multithreaded performance, because musl's mallocng uses a single global lock. [37:59.840 --> 38:07.760] This is a trade-off, but it also means that the stock allocator in musl performs poorly in many [38:07.760 --> 38:14.000] things, and it's something people commonly complain about, so we now rely on Scudo.
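The dependency-ordered bulk builds mentioned at the start of this passage boil down to a topological sort of the package graph. A minimal sketch with Python's standard `graphlib` (the package graph here is made up for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical package -> build-dependency graph.
deps = {
    "doom":   {"sdl2", "libpng"},
    "sdl2":   {"musl"},
    "libpng": {"zlib"},
    "zlib":   {"musl"},
    "musl":   set(),
}

# static_order() yields packages so that every dependency
# is built before anything that depends on it.
order = list(TopologicalSorter(deps).static_order())
print(order)  # musl comes before sdl2/zlib, doom comes last
```

A real bulk builder additionally detects cycles (which `TopologicalSorter` reports as an error) and can batch independent packages in parallel.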
There's also the [38:14.000 --> 38:21.520] advantage of being able to eventually deploy GWP-ASan, which is sort of a sampling runtime version [38:21.520 --> 38:29.200] of AddressSanitizer, which can catch many memory errors at runtime with minimal performance overhead. [38:29.200 --> 38:35.600] This is not enabled yet, but it will be at some point. Other core things for the distro: some [38:35.600 --> 38:42.160] tooling is taken from Debian. For example, we use initramfs-tools to generate initramfs images, [38:42.160 --> 38:49.040] because other solutions were generally found to be unsatisfactory, for example requiring Bash [38:49.040 --> 38:54.720] for the hooks and so on. initramfs-tools is very clean and simple and nice to work with. [38:54.720 --> 39:00.640] We also use console-setup from Debian to do console and keyboard configuration, as well as [39:00.640 --> 39:07.200] the script for handling encrypted drives. Eventually I also had to add some other things, like [39:07.200 --> 39:13.280] GRUB bootloader support for ZFS; we now support root on ZFS very easily, and so on. [39:15.040 --> 39:21.840] This is the Chimera desktop on RISC-V. You can see it runs things like Firefox, for example, [39:21.840 --> 39:28.320] which does not build out of the box, but I made it work. This is on the HiFive Unmatched board from SiFive. [39:30.320 --> 39:35.200] When I was starting on the desktop, the first thing I added was the Weston Wayland [39:35.200 --> 39:41.280] compositor, as well as GTK. This provided a baseline set of dependencies which are also [39:41.280 --> 39:47.520] used by pretty much everything else. Then I expanded with the Xorg stack, things like the [39:47.520 --> 39:53.920] Enlightenment window manager, as well as PekWM for a simple X11 window manager. I added the [39:53.920 --> 40:02.160] multimedia stack, including FFmpeg, GStreamer, media players and so on.
In spring 2022 I added [40:02.160 --> 40:07.520] the GNOME desktop, which is the default choice, but of course you can use anything else you want. [40:08.240 --> 40:13.520] I also added web browsers. This includes Epiphany, which comes with GNOME and is built on WebKit, [40:13.520 --> 40:18.400] and Firefox, which is the alternative choice, and of course some games. [40:20.640 --> 40:25.600] As I said before, I would like to release an alpha version in late February or early March. [40:25.600 --> 40:31.120] I'm not sure if this will happen, but I hope it will. Before doing this, I would like to perform a [40:31.120 --> 40:37.360] complete world rebuild of all the packages, because I have introduced some things in cbuild which I [40:37.360 --> 40:42.720] would like to propagate into existing packages, and that hasn't happened yet. So, just to be clean, I would like [40:42.720 --> 40:49.440] to build everything with LLVM 15 as it is right now, and basically then release the alpha. I will need [40:49.440 --> 40:55.840] to launch automatic build infrastructure. I currently have a server in a colo in my city, but it's not [40:55.840 --> 41:01.760] on the public network yet, so I need to set up the public network and launch things like this, as [41:01.760 --> 41:08.320] well as CI. I would like to clean up the remaining fallout from the recent hardening stuff, [41:08.320 --> 41:14.640] as well as update every template to its latest version. And after the alpha [41:15.360 --> 41:21.200] (the alpha cycle is expected to take about half a year to one year), I would like to add a libgcc [41:21.200 --> 41:26.480] compatibility shim so we can run existing binaries, because right now you cannot run existing binaries, [41:26.480 --> 41:32.080] as the system runtime is different. I would like to add support for D-Bus activation, so that D-Bus [41:32.080 --> 41:38.640] does not run daemons by itself through D-Bus service files, but instead [41:40.320 --> 41:45.520] delegates it to the service manager.
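For reference, classic D-Bus activation is driven by small service files like the sketch below; the bus name and binary path are hypothetical:

```ini
# /usr/share/dbus-1/services/org.example.Daemon.service (hypothetical)
[D-BUS Service]
Name=org.example.Daemon
Exec=/usr/bin/example-daemon
```

Without delegation, the bus runs `Exec` itself when a client requests the name. With systemd, the `SystemdService=` key hands the spawn off to the service manager instead; the plan described here is an analogous hook for Chimera's own service manager.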
I would like to investigate additional hardening [41:45.520 --> 41:51.360] things, like LLVM SafeStack, and I would like to improve the documentation. Right now there's [41:51.360 --> 41:58.000] the beginning of a Chimera handbook, which includes some handy basic information, like installation [41:58.000 --> 42:03.520] and how to set up encrypted drives and so on, but it can always use more documentation. [42:04.400 --> 42:09.120] Locale support is also another thing I would like to expand. This is a problem in pretty much all [42:09.120 --> 42:14.720] musl distros, with the locale support being sort of limited: you can have translations, but not things [42:14.720 --> 42:23.040] like, you know, formats and so on. So, in conclusion, we are currently nearing usability, [42:23.040 --> 42:28.880] and it should be suitable for early adopters by March. I would like to get all the major changes [42:28.880 --> 42:34.560] done by beta and continue packaging more software, as well as cooperate with upstreams, including [42:34.560 --> 42:41.760] the FreeBSD upstream, on sending fixes and tooling and so on. In any case, thank you for [42:41.760 --> 42:46.640] listening, and if you have any questions, you can ask them. Of course, we also have stickers, so come [42:46.640 --> 42:55.040] pick them up. [43:06.400 --> 43:12.640] Yeah, as I said, it's supported. I recently introduced it and I recently tested it. Oh yeah, sure. He [43:12.640 --> 43:20.160] was asking if ZFS on root is supported. Yeah, it's supported. It uses the upstream script for [43:20.160 --> 43:29.440] initramfs-tools, just patched to support our userland, because we don't use busybox, and it just [43:29.440 --> 43:36.640] works.
We also provide ZFS packages with pre-compiled binary modules, so it's not necessarily [43:36.640 --> 43:44.160] compiled from source during installation, and ckms can handle things in a non-conflicting way: if [43:44.160 --> 43:48.640] you have the package installed for the stock kernel which provides the binary ZFS modules, [43:48.640 --> 43:52.240] ckms will not try to build the modules again. [43:55.600 --> 43:57.600] Yeah. [43:57.600 --> 44:14.080] Okay, well, the question was basically what's the target audience. Well, I would say the primary [44:14.080 --> 44:19.600] audience is basically the same people who would use things like Gentoo and so on, basically power users [44:20.480 --> 44:26.320] who can find their way around things, because there's no insistence on providing graphical, [44:26.320 --> 44:31.440] clickable stuff for everything, because it just wouldn't be possible. I just do not have the [44:31.440 --> 44:38.160] manpower to do all this. So yeah, it's for power users who can find their way around a simple system. [44:43.120 --> 44:45.120] Yeah. [44:47.440 --> 44:53.840] He was asking where the name comes from. Well, a chimera is basically a [44:53.840 --> 45:00.000] mythical monster made up of three different animals, so it should be fairly obvious where [45:00.000 --> 45:06.080] it comes from: we have the Linux kernel and FreeBSD stuff and other stuff. Yeah. [45:13.680 --> 45:23.280] Okay. Yeah, the question was if I'm working with FreeBSD, and whether the project has taken back some [45:23.280 --> 45:29.200] of the changes. I have some patches in Chimera's core tools which I do believe would be useful for [45:29.200 --> 45:37.600] upstream, and yes, I do want to submit them upstream. For example, I have a fix in the sort tool [45:37.600 --> 45:44.000] which fixes a crash with control flow integrity hardening, so this would be nice to include, for [45:44.000 --> 45:55.680] example. How did you solve the problem of the PAM service module?
[46:03.520 --> 46:10.720] Sorry, can you repeat? Does Turnstile use cgroups? The question is if Turnstile uses cgroups. [46:10.720 --> 46:18.080] No, it doesn't. It does the same thing as logind. Basically, there is a PAM module to report things, [46:18.080 --> 46:24.400] and it keeps a persistent socket connection to the daemon as long as the session is active; [46:24.400 --> 46:29.760] when the socket is closed, the daemon receives basically a notification [46:29.760 --> 46:34.640] on the socket, and once it knows the connection has been closed, it closes the [46:34.640 --> 46:41.680] session inside the daemon. So you open a socket from the PAM service module? [46:44.400 --> 46:51.920] The question is if I open the socket from the PAM module. Yes, the PAM module opens a connection to [46:51.920 --> 46:58.000] the socket which is provided by the daemon, and the daemon basically opens the socket in the system [46:58.000 --> 47:04.480] as a Unix domain socket; it's only accessible by root, obviously, so the PAM module can access it. [47:06.720 --> 47:13.600] So if you open a socket from the PAM module, that socket obviously ends up appearing as a [47:13.600 --> 47:19.600] file descriptor inside the program that ran the PAM. Does that not interfere with anything? [47:19.600 --> 47:32.160] The question is if the PAM module with the socket actually interferes with anything. [47:32.160 --> 47:37.680] No, I found it doesn't interfere with anything, and in fact, as far as I know, logind does basically [47:37.680 --> 47:42.640] the same thing. And other solutions, for example, there are several solutions for handling [47:42.640 --> 47:51.040] the runtime directory, which is basically /run/user/UID, and they basically do the same thing, [47:51.040 --> 47:58.720] as far as I know. But yeah, as far as I can tell, it works okay. Yeah.
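The socket-lifetime session tracking described here can be pictured in a few lines. This is a toy model, not Turnstile's actual code: the real daemon listens on a root-only Unix domain socket and the PAM module connects to it, but the core idea (a session is alive exactly as long as its socket connection is open) is the same:

```python
import socket

# Daemon-side sketch: each active session holds one open socket
# connection from the PAM module; EOF on that socket ends the session.
sessions = {}

def open_session(sid, conn):
    conn.setblocking(False)
    sessions[sid] = conn

def reap_closed_sessions():
    for sid, conn in list(sessions.items()):
        try:
            if conn.recv(1) == b"":   # EOF: the PAM module side closed
                conn.close()
                del sessions[sid]
        except BlockingIOError:
            pass                       # no data; session still alive

# Simulate: a socketpair stands in for the daemon's Unix socket.
daemon_end, pam_end = socket.socketpair()
open_session("user-1000", daemon_end)
reap_closed_sessions()
assert "user-1000" in sessions        # still logged in
pam_end.close()                        # logout: PAM side drops the socket
reap_closed_sessions()
assert "user-1000" not in sessions    # daemon reaps the session
```

Because the kernel closes the descriptor even if the client crashes, the daemon never leaks sessions; no cgroups are needed for this style of tracking.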
So, first of all, thanks for [47:58.720 --> 48:02.720] your work; it must be really difficult to maintain all these dependencies. Thank you. [48:03.760 --> 48:09.520] So my question is, are you planning to integrate any specific SSL library, like for example [48:09.520 --> 48:18.800] LibreSSL or OpenSSL? Okay, the question was about integrating an SSL library. We do use OpenSSL [48:18.800 --> 48:26.640] version 3. This was actually a pretty big transition when it happened, because many things [48:26.640 --> 48:31.840] do not, or did not, work well with OpenSSL 3. Fortunately, the number of affected packages right now [48:31.840 --> 48:37.760] is not that huge. There are still some which do not work with OpenSSL 3 and which we do rely on; [48:37.760 --> 48:44.160] for example, the Heimdal implementation of Kerberos, which we use instead of MIT krb5, [48:45.520 --> 48:51.440] does not work with OpenSSL 3 yet, but it has its own built-in crypto which can be used instead, so [48:51.440 --> 48:59.520] we fall back on that for now. Anybody else? [48:59.520 --> 49:07.920] Looks like that's it then, and thank you again.