Okay, great. Let me introduce myself. My name is Dmitry Belyavsky. I have worked at Red Hat for several years, where I maintain OpenSSL. I am also involved in OpenSSL development as a member of the OpenSSL Technical Committee, and my current work is dedicated to the post-quantum transition at Red Hat.

So first, a brief reminder: why do we need a post-quantum transition? There is a wide consensus that quantum computers, if they ever happen — nobody knows whether, nobody knows when — will break traditional cryptography, in the sense that digital signatures become forgeable, key agreement becomes reversible, and so on and so forth. So if a malicious actor records your communications now, and they are still secret and confidential at the moment quantum computers appear, they can get your secrets. I am not sure it will happen soon, but this is considered a threat, and it means the technical community has to implement quantum-resistant algorithms that will remain unbreakable even against quantum computers.

So, some words about the challenges we have. First, once quantum computers happen, we can't trust the existing algorithms, as I mentioned before. Second, when we implement new algorithms, they have not been tested for long enough, so we can't fully trust them either. For example, in the NIST contest, one of the algorithms that had advanced all the way to the fourth round was completely broken without any quantum computer at all. It's a pity — it was a wonderful algorithm. So currently a lot of effort goes into so-called hybrid schemes, where we use both a classical algorithm and a post-quantum algorithm simultaneously and combine them in one way or another. It can be two different signatures; it can be some combination of key-exchange computations. But the point is that if one of the algorithms turns out to be broken, the second still provides some relevant security.

The second area where we can expect problems in the post-quantum transition is key sizes.
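The combining idea behind hybrid schemes can be illustrated with a toy sketch. This is not any standardized combiner — the concatenate-and-hash step and the placeholder inputs are assumptions for illustration only:

```shell
# Toy hybrid-combiner sketch: derive one session secret from a classical
# share and a post-quantum share. The two inputs are placeholders, not
# real protocol output.
printf 'classical-ecdh-share' > classical.bin
printf 'pq-kem-share'         > pq.bin

# An attacker must break BOTH inputs to predict the combined secret:
# breaking only the classical or only the post-quantum part is not enough.
cat classical.bin pq.bin | openssl dgst -sha256 -hex
```

Real combiners (for example, the ones discussed in IETF drafts) are more careful about domain separation and encoding, but the security intuition is the same.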
Well, let's compare key sizes. For classical RSA at a practical 3072 bits, that means roughly 400 bytes of key and 400 bytes of signature, right? For Dilithium, one of the algorithms chosen for standardization — and the figures I give are not for the strongest version, but for an intermediate one — we will have more than a kilobyte of key and about two and a half kilobytes of signature. The key and the signature are parts of a certificate, and a certificate doesn't go alone: you have a chain. So you can imagine that where you currently have, say, four kilobytes of certificate chain, after switching to Dilithium you get, well, 18, 20, something like that.

We should also expect performance problems, because the new algorithms will, with high probability, be much slower than the existing ones. We will have compatibility problems, because other implementations will contain this or that mistake, and will probably implement various intermediate versions of the standards instead of the final ones, at least at early stages. And sometimes — I am not sure — we will run into problems with middleboxes analyzing the traffic passing through them: is this something known that should go forward, or something bogus that should be stopped? Let me remind you that when TLS 1.3 was in the process of standardization, people measured and found that something between five and ten percent of TLS 1.3 traffic didn't pass through middleboxes, and the TLS 1.3 protocol was significantly redesigned to better mimic TLS 1.2, which was already familiar to middleboxes.

And of course, when we are speaking about the network, we also get traditional problems: big keys don't fit into TCP or UDP packets. We have to do something with, for example, DNSSEC, which is currently stateless and expects the response from the server to arrive in one packet.
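The chain arithmetic above can be sketched with rough per-certificate numbers. The concrete sizes (1312-byte public key and 2420-byte signature for Dilithium2, ~384 bytes each for RSA-3072) are my assumptions for the estimate, and the sketch counts only key and signature material, not the rest of the certificate:

```shell
# Back-of-envelope growth of key + signature material in a
# four-certificate chain when RSA-3072 is replaced by Dilithium2.
certs=4
rsa_bytes=$(( certs * (384 + 384) ))         # ~384 B key + ~384 B signature each
dilithium_bytes=$(( certs * (1312 + 2420) )) # 1312 B key + 2420 B signature each
echo "RSA-3072 chain material:   $rsa_bytes bytes"
echo "Dilithium2 chain material: $dilithium_bytes bytes"
```

Even this partial count grows by roughly a factor of five, which is why a 4 KB chain ending up closer to 18–20 KB in total is a plausible outcome.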
And of course, if you send a little request, get a huge response, and use UDP, then all the protocols that rely on post-quantum algorithms offer a good chance to implement a so-called amplification attack: you send a legitimate request to a server, but spoofing the source IP address, and over UDP the response, which is much bigger than the initial request, goes to the victim's computer — and so a distributed denial of service is implemented.

Okay, now that I have briefly talked about the threats, let's go to something more positive. First, we have several standards bodies involved in the post-quantum standardization process. NIST, which organized the post-quantum contest, has chosen four algorithms for standardization. Here are three of the four links to the draft standards: Kyber is the algorithm for key encapsulation, Dilithium is an algorithm for digital signatures, and SPHINCS+ and Falcon are also algorithms for digital signatures. The final versions of the standards were expected in Q1 of this year, but have not happened yet.

Then, once we have the algorithms, we should specify their usage in protocols. Okay, sorry, how do I switch this off? Yeah, sorry. So, IETF is the standards body that works on protocols. The work happens in almost every working group dedicated to cryptography in the so-called Security Area of the IETF, and a dedicated group named PQUIP was created to cover the protocols that currently don't have dedicated working groups, such as SSH, for example. I will briefly speak about that at the end of my presentation. And for hardware implementations holding the keys, for example tokens and HSMs, the standards are developed by OASIS. As far as I remember, as of several weeks ago there was no final version of that standard; there were some drafts, but they are not public.

So, despite the lack of final standards, you are already able to use Fedora for experiments.
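The amplification risk mentioned above is just a ratio. Both sizes below are illustrative assumptions, not measurements — a small DNS query versus a hypothetical DNSSEC response carrying post-quantum signatures:

```shell
# Amplification factor = response size / request size.
# 60 bytes is a typical small DNS query; 15000 bytes is a hypothetical
# DNSSEC response inflated by post-quantum signatures (both assumed).
request_bytes=60
response_bytes=15000
echo "amplification factor: $(( response_bytes / request_bytes ))x"
```

The bigger the signatures get, the more attractive the spoofed-source attack becomes, which is why stateless UDP protocols are a particular concern.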
We have chosen the liboqs project. It provides implementations of a wide set of post-quantum algorithms. For Fedora, we build only those chosen by NIST for standardization; if you want to play with something else, you will probably have to rebuild it yourself. liboqs is part of the Open Quantum Safe project, which also provides a fork of OpenSSH using post-quantum mechanisms for key establishment, and — what's also important — an OpenSSL provider. Let me briefly remind you what OpenSSL providers are: basically a plugin-style mechanism that allows you to add or modify OpenSSL functionality, including providing new cryptographic algorithms or hardware-backed implementations.

In Fedora 39, released at the end of 2023, we have OpenSSL 3.1, liboqs 0.8, and oqs-provider 0.5.1. We plan to update all these components: in Fedora Rawhide, liboqs and oqs-provider are already updated, and we are currently finalizing the rebase of OpenSSL to the next version.

I'm sorry, I am too lazy and not brave enough to give you a live demo, but it's quite simple — if you have a Fedora machine, you can do it yourself. You should install the provider, the first line. Then you should generate a key pair; I have chosen elliptic curves, but it's a matter of taste. And then you just run the OpenSSL server, but now you must specify exactly which groups you plan to use for key exchange. That can be done with the command-line option for groups. Here, the group names shown in red consist of two parts: x25519 is a classical cryptographic algorithm, and Kyber is the post-quantum part. The second group allowed for key establishment has the same structure, but uses a different parameter for the classical part. Now that the server is running — yes, it's a demo server — you can connect to it. When you make the test connection, I strongly recommend using the trace option.
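The demo described above can be reproduced roughly like this. The package name and the hybrid group names are my assumptions based on oqs-provider's usual naming; adjust them to whatever your Fedora release actually ships:

```shell
# 1. Install the Open Quantum Safe provider (package name may differ):
#      sudo dnf install oqsprovider

# 2. Generate an elliptic-curve key pair and a self-signed demo certificate.
openssl ecparam -name prime256v1 -genkey -noout -out server.key
openssl req -new -x509 -key server.key -out server.crt \
        -subj "/CN=localhost" -days 30

# 3. Run the demo server, allowing only hybrid key-exchange groups
#    (classical part + Kyber). Listens on port 4433 by default.
openssl s_server -cert server.crt -key server.key \
        -groups x25519_kyber768:p384_kyber768 -www

# 4. From another terminal, connect and trace the handshake to see the
#    hybrid group being negotiated.
openssl s_client -connect localhost:4433 \
        -groups x25519_kyber768 -trace
```

Steps 3 and 4 only work once the provider is installed and loaded; steps 1 and 2 are ordinary OpenSSL usage.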
It shows the handshake process in a more or less human-readable form, and trust me, you will see that you are using hybrid algorithms for key establishment.

Well, s_client and s_server are sort of fun, but I don't recommend them for any sort of production use. You can, however, already use such a popular web server as nginx. But again — for now I'm speaking about Fedora 39 — you will have to load the oqs-provider in the global OpenSSL configuration, or in a local copy that you provide to nginx explicitly. For demo purposes I recommend the global one; it's just simpler. You load the provider, you activate it — that's done by adding a section dedicated to it — and then you configure nginx in the regular way and add a directive, ssl_ecdh_curve, which is more or less equivalent to the groups parameter I mentioned on the previous slide. Then, after restarting nginx, you have a web server that uses hybrid key exchange for those groups, and you can use curl, which is OpenSSL-based, at least in Fedora. Again, you will have to specify the curves, but you will get something over a quantum-protected channel.

Of course, it's worth mentioning that big companies also have their post-quantum stuff. Google Chrome allows enabling post-quantum algorithms — it requires switching on special flags — and you can check that your server, set up as on the previous slide, is able to communicate with a standard Google browser. You can also use curl to reach, for example, the Cloudflare demo site; they use the same algorithms and a compatible implementation.

Okay, future plans. First, we want to pack all our results into a container, because a do-it-yourself demo is fine, but for practical purposes a container is much more convenient. Then, as I mentioned before, we are going to provide the recent versions — that's work in progress in Fedora Rawhide — so that you can use post-quantum algorithms for digital signatures as well.
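The nginx setup the talk sketches might look roughly like this. The section names, the provider module name, and the group list are assumptions following oqs-provider's documented conventions, not the exact slide content:

```
# Global OpenSSL configuration (e.g. /etc/pki/tls/openssl.cnf on Fedora):
# load and activate the oqs-provider alongside the default provider.
[provider_sect]
default     = default_sect
oqsprovider = oqsprovider_sect

[default_sect]
activate = 1

[oqsprovider_sect]
activate = 1

# nginx: in the TLS server block, allow the hybrid key-exchange groups
# (the counterpart of the -groups option from the s_server demo):
#   ssl_ecdh_curve x25519_kyber768:p384_kyber768;
```

After restarting nginx, an OpenSSL-built curl should be able to negotiate the hybrid group, for example with `curl --curves x25519_kyber768 https://localhost/` (assuming the same group naming as above).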
It currently doesn't work in Fedora with OpenSSL 3.1. And of course, we are involved in upstream work — OpenSSL, NSS, GnuTLS; we have identified some deficiencies and are working on fixing them.

And as I promised, there is an opportunity to get involved in community work, so let me speak about SSH. For several years, OpenSSH has implemented a post-quantum algorithm for key exchange. Unfortunately, it is not one of the algorithms chosen by NIST, and there are no standards for it, neither from NIST nor from the IETF — there is work in progress at the IETF level to write a formal specification for that algorithm. And there is no specification in RFC form for using the NIST-chosen algorithms in the SSH handshake. The OQS project has a version of OpenSSH which is currently frozen for lack of contributors. So if anybody wants to speed up the transition of SSH to the quantum-safe world, I think it's worth organizing some activity, both on the development side, in cooperation with the OQS project, and in writing a draft specification for the IETF. Thank you very much. Feel free to ask questions.

Sure. Have you analyzed the performance difference between the classic implementation and the one with post-quantum? What is the performance impact?

I didn't analyze it myself, but I expect performance degradation, simply because we have been implementing classical algorithms for decades, and the first post-quantum implementations will be imperfect by definition. Sorry — the question was about the performance difference between classical algorithms and post-quantum algorithms.

Sure? So everybody nowadays is using X.509 for services, and you mentioned that it's difficult to trust the new algorithms, and also impossible to trust the old algorithms. So did you do any experiments with dual implementations in X.509, and the impact on that? Because the certificate will be huge.
Yes, the certificate will be — so, do I understand correctly? The question is how post-quantum algorithms affect X.509 if they are used in dual combination with the old, classical algorithms. There are several concurrent documents on combining classical and post-quantum algorithms, and yes, the certificate will inevitably be huge, no matter which combination is chosen. There are some efforts to reduce the impact — for example, adding intermediate certificates to the trust store instead of sending them on the wire — but that definitely has its downsides, because it increases the size of the root store. So yes, as I mentioned, network protocols will be seriously affected by huge certificates.

Just to add on to this question: does that mean we need more computing power for our applications?

No, it means we need to reinvent TCP and UDP.

Sure. To provide a user-friendly experience when communicating keys from one device to another, we sometimes use QR codes, NFC, and Bluetooth. Will that still be possible if we go to these sizes of certificates and keys? — So, will the user-friendly ways of transferring certificates, such as QR codes, Bluetooth, and so on and so forth, still be suitable for post-quantum keys, right? Okay: yes for QR codes, because that's just a link to a URL. About Bluetooth — I don't know, sorry.

How much time do I have? Four minutes? Sure, go ahead.

Do you have any expectations of when we will actually have to deal with post-quantum signatures in the wild — in our products, or because of a server we're interacting with, or as a client?

When do I expect it to appear in the real world, right? So, I have expectations, but don't trust me too much.
There is a promise that the algorithms will be finalized in Q1, right? Presuming this, the IETF process, even for near-finalized RFCs, takes about half a year. So I'd say that the first attempts to introduce post-quantum certificates into the real world will not happen before 2025, especially taking into account that a real-world CA needs hardware capable of keeping post-quantum keys inside, and it will take time to develop such hardware.

You showed the hybrid mode — the hybrid of the post-quantum and the classical algorithm, right? What is its security level, let's say? Is that hybrid mode also quantum safe, or is it not fully quantum safe?

It's quantum safe — at least the current evaluation of this hybrid mode is that it's quantum safe. As I mentioned, we have not studied the post-quantum algorithms enough yet. Go ahead.

And how do we evaluate quantum safety in general? What are the approaches that are presumed to be quantum safe?

Which approaches are presumed to be quantum safe? Sorry, I'm not a mathematician. I can say some words, such as lattice-based cryptography, hash-based cryptography, and so on and so forth, but please investigate what these words mean yourself, sorry.

Okay, the last question. Do I understand the question correctly — will the quantum-safe algorithms be resistant to all types of quantum computers? We hope so. Thank you very much. Thank you. May I take the question? Yes. Okay, thank you. Thank you very much.