Thanks for coming. Today I'm going to talk about Sequoia PGP, in particular rethinking OpenPGP tooling. First I want to introduce Sequoia PGP for those of you who don't know about it. I'll talk about its design and implementation and what makes that interesting. And then some day-to-day usage in order to illustrate what I mean by rethinking OpenPGP tooling.

So what is Sequoia PGP? It's an OpenPGP implementation. And you can see here, this is our GitLab site, and we have a number of projects. And what is this OpenPGP thing? Well, it's an IETF standard. Go on the internet, you can download it for free. It's derived from PGP, which was published in 1991. The first version of the standard was published in 1996, and work is still ongoing. The next version of the standard is expected this year; it's currently in working group last call. The standard defines the wire format: how messages look, how certificates look. It talks about algorithms, like how encryption and decryption and signing and verification work. And, importantly, it also defines a PKI, a public key infrastructure. But it's not an implementation. Sequoia is an implementation. And it's not just an implementation of the spec: it's a whole number of services and tooling and applications on top. And for those of you who've used OpenPGP, it's also a paradigm shift. We've looked at the way things have worked in the past, and we have some new ideas about how maybe they can work better, at least for some people.

Sequoia's technical goals are to have a library-first architecture. So not a command line tool where a library calls the command line tool, but really the library is the source of truth; it's the most powerful thing. We have unopinionated low-level interfaces that are safe by default. That means that you can do a lot of things, including a lot of stupid things, but what we really tried to do in our API is to make sure that the easy way to do something is the safe way. Of course, low-level interfaces are hard to use, and so we provide high-level interfaces. And these high-level interfaces are necessarily opinionated. And what do you do when the opinion doesn't match what you need? Well, you could completely switch to the low-level interface, which is inconvenient. So what we've tried to do is to make our interfaces gradual, so it's possible to mix low-level data structures with high-level data structures. And we've also designed Sequoia so that the services are optional. I'll get to what that means later.

But what was the motivation for building Sequoia PGP? I'm sure most of you have heard about GnuPG. It's existed for, I think, 23 years. And we've talked to people, and we heard some complaints from some users. I don't want to say all users, but certainly some users. And as we all know, the people who have something negative to say tend to be the loudest, so I don't want to present that as a representative sample. But what we heard was that the CLI is hard to use, and that the CLI-first approach that GnuPG takes, where you have GPGME, a library that calls out to the CLI binary, is brittle. We heard that the APIs are too opinionated: sometimes you want to do something and it almost matches what GnuPG expects, but not quite, and then you have to write a lot of code in order to work around it. People didn't like that the services are mandatory, and the scalability wasn't so good. And I'm not talking about internet scalability; I'm just talking about an individual user who has a few thousand certificates locally.
Operations just take too long. So that's sort of the negative motivation, but there was also positive motivation. If you go out onto YouTube and you look at the GnuPG channel, there are a number of interviews called the GnuPG Stories. And there are a lot of people from different projects: the EFF, the ACLU, OCCRP, newspapers and reporters, Reporters Without Borders, and activists. And there's a common theme that they all were repeating: we use a lot of different encryption technologies, but probably none more important than GPG. And the question that we had was, you know, can we do better? We were inspired by this.

So I want to take another step back and talk about Sequoia's prehistory. Sequoia was started in 2017, but before that, the people who started it, that was Justus, Kai, and I, we worked on GnuPG. And while we worked on GnuPG, of course, we worked with the code. We talked to the people who were using GnuPG, as in developers, and we also talked to end users. And we had ideas about how to change things, because we had these conversations where people were telling us things that they were unhappy with. And we had many technical conversations with Werner. Werner is the main author of GnuPG. And we couldn't converge on a vision. So we had this conflict in the room: Werner wanted to go in one direction, the three of us wanted to go in a different direction. Should we continue with the established approach? Should we pursue the Sequoia vision? What does a compromise look like? Sometimes a compromise just isn't possible. And what do you do in that case? Does one person win and dominate? Sometimes that's a solution. In this case, we chose to part ways. And I think that's a perfectly okay thing to do. Werner had a vision; we had a vision. We didn't demand that Werner change his vision. We left and we started a new project where we wanted to experiment and see if we could solve these problems that we had recognized.

And what happens when you have two projects? Do you split the users? Do we have a small or a big number of GnuPG users, and all of a sudden half of them stay with GnuPG and half of them go to Sequoia, or 90% stay with GnuPG and 10% go to Sequoia? It could happen. But I think that's a pessimistic view of the possibilities. I don't think it has to be that way, because it's not just about the GnuPG users. There are all of these non-users out there who are not using encryption technology. We wanted to offer more choice for users. We wanted to explore different options and see if the users out there, or the non-users, could be served by this new paradigm. There's a diversity of needs. We wanted in particular to win over non-users. And the great thing is that there are a lot of non-users, or maybe that's a sad thing, but there are a lot of non-users. And we have a protocol, OpenPGP. It's interoperable. Can the network effects help? More implementations, more users, more network effects? In this view of the world, the ecosystem wins: there is more privacy and there's more security.

And at this point I want to have an ode to Werner. Sequoia really owes its existence to Werner. He was our inspiration to make GnuPG better. He was our inspiration to work on cryptography and defend privacy. And if Justus, Kai, and I are Sequoia's parents, then it's fair to say that Werner is absolutely Sequoia's grandfather. And it turns out that it's not just two implementations. There are many implementations.
There's GopenPGP (OpenPGP for Go), OpenPGP.js, PGPainless, PGPy, RNP, rPGP, and Sequoia. And these are just the free software implementations that are relatively big. And if you have all of these implementations out there, how do they work together? Well, yes, we have this standard, but we have to ensure interoperability. And ensuring interoperability prevents vendor lock-in and improves the network effects for everyone. And for this, a standard is not enough. We need more: we need an OpenPGP interoperability test suite. And this was one of the first things that we actually worked on. It currently has 131 tests and over 1,500 test vectors. And here you can see a snapshot. You can see that most implementations are tested. Currently, there's one implementation that I mentioned that's not there, which is rPGP. But thank you to Heiko, a former Sequoia developer: he's currently adding support for rPGP.

All right, now I want to switch gears a bit and talk about the design and implementation of the library and the low-level components. So Sequoia's architecture, what does it look like? I mentioned before: a library-first approach. Applications are built on the library. And on top of the library, we have the CLI. The CLI is using the library, and that makes the CLI necessarily less powerful than the library. And we think that's okay. If you want to program using our CLI, it's possible. If you want to go further, then you're probably in a space where you should be using the library in a high-level language.

We have a bunch of high-level components. They're optional. We have services that run as daemons, for instance the key store. But it doesn't have to run as a daemon; it can be co-located. Now, the daemon has the advantage that you have process separation, which avoids things like Heartbleed, and it can multiplex resources and share state. But it's not always the right solution. And so it's possible to co-locate the service into the application binary, in the same address space. And that's good when you are in a restricted environment, or when you want to fall back in order to increase robustness.

Now, I mentioned that we have a whole bunch of components. So up top, we have sequoia-openpgp, which is our library. And next to it, we have pgp-cert-d, which is a certificate store. It's a standard, or not yet a standard, but there is a text that describes how it works; it looks like a maildir. And we have a library implementation, and that doesn't directly depend on our library. And then we have a whole bunch of libraries and services on top. We have the key store for private key operations. We have the cert store, which is the in-memory certificate store. We have the web of trust engine. We have our network library for accessing key servers, WKD, and DANE. We have our Autocrypt library for doing Autocrypt operations. And we have another library for configuring the cryptographic policy. And sq exposes all of this functionality, so it's using all of these things.

And RPM is one of the users of Sequoia. Since Fedora 38, the version of RPM that ships uses Sequoia to verify the packages. It doesn't use secret key material, it has its own certificate store, and it uses its own trust model. So all of these components aren't needed: RPM just links against sequoia-openpgp, the library, and the policy configuration library.
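To give a feel for how small such an integration can be, here is a rough sketch of a standalone detached-signature verifier built only on the sequoia-openpgp crate. This is not RPM's actual code, just an illustration assuming a recent sequoia-openpgp 1.x, where method names may differ slightly between versions; the file names are placeholders.

```rust
use sequoia_openpgp as openpgp;
use openpgp::{Cert, KeyHandle};
use openpgp::parse::Parse;
use openpgp::parse::stream::{
    DetachedVerifierBuilder, MessageLayer, MessageStructure, VerificationHelper,
};
use openpgp::policy::StandardPolicy;

/// Supplies the trusted certificate(s) and decides whether the
/// signature structure is acceptable.
struct Helper {
    cert: Cert,
}

impl VerificationHelper for Helper {
    fn get_certs(&mut self, _ids: &[KeyHandle]) -> openpgp::Result<Vec<Cert>> {
        // A real application would look certificates up by key handle
        // in its own certificate store.
        Ok(vec![self.cert.clone()])
    }

    fn check(&mut self, structure: MessageStructure) -> openpgp::Result<()> {
        // Accept the data if at least one signature verifies.
        for layer in structure.into_iter() {
            if let MessageLayer::SignatureGroup { results } = layer {
                if results.iter().any(|r| r.is_ok()) {
                    return Ok(());
                }
            }
        }
        Err(openpgp::Error::InvalidOperation("no valid signature".into()).into())
    }
}

fn main() -> openpgp::Result<()> {
    let policy = &StandardPolicy::new();

    // Placeholder paths: a trusted certificate, a detached signature,
    // and the signed data.
    let cert = Cert::from_file("trusted-cert.pgp")?;
    let mut verifier = DetachedVerifierBuilder::from_file("package.sig")?
        .with_policy(policy, None, Helper { cert })?;
    verifier.verify_file("package.rpm")?;

    println!("signature is good");
    Ok(())
}
```

The shape is the point: the helper supplies the candidate certificates and decides what counts as acceptable, so the trust model and the certificate storage stay entirely in the application's hands, which is exactly how RPM can bring its own.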
Now, I mentioned before our API design: unopinionated low-level interfaces, and opinionated high-level interfaces that are built on these low-level APIs. But what does that look like? Let's imagine that we have a certificate and we want to write it out to disk. So we have a method called serialize. You provide it with a buffer or a file or whatever, and it just writes the certificate out in OpenPGP format. What if there's secret key material in there? It would be a shame if you accidentally leaked that. Well, in Sequoia, we automatically strip it out by default. That's safe. If you really, really need to write out the secret key material, and sometimes you do, sometimes you want to, you have to opt in. And for this, we have as_tsk, which converts the data type. The new data type provides the same interface, and when you serialize it, you also get the secret key material.

And I mentioned that we have these progressive high-level APIs. What do they look like? Here, we see how to create a certificate. We have a certificate builder. You want to create a general-purpose certificate: you add a user ID, you generate it, and you're good. But what if you also want to add a decentralized social proof? You've probably heard of Keybase, where you can do these social proofs and link services. There's also a mechanism in OpenPGP, or an extension, that allows you to embed them directly into the certificate. And that's not really supported by the library, at least there are no dedicated APIs to do that. But you can use the signature builder and add on the appropriate notation. And then, in the cert builder, you override how it creates the signature by using this template, and the certificate that's created automatically has this decentralized proof embedded in it.

So that's the library. What does the command line interface look like? sq is our primary command line interface. There are other tools out there, of course, but sq is sort of the GPG equivalent, if you will. And we opted for a subcommand-style interface. So if you want to encrypt a message, you use the encrypt subcommand. And here, I'm encrypting a message to me: the recipient email is neal@sequoia-pgp.org. The next thing that's very important is that we have a very clear separation of options. There's another subcommand, sq sign, and you can sign a message. This command does not take the recipient email argument, because it doesn't make sense in this context. And so, if you try to provide it, you get an error. And another thing that we've really tried to do is ensure that there's consistency between the subcommands. So if you have, for instance, an email option, it doesn't matter what subcommand we're talking about, it has more or less the same semantics. And we've talked to people who've used Sequoia or sq, and the reactions have been very positive so far. So we're quite confident that the design, maybe it's not optimal, but it certainly is good.
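As a rough sketch, not the exact code from the slides, and again assuming a recent sequoia-openpgp 1.x where names may differ slightly, the library pieces just described look something like this. I'm leaving out the notation/signature-template part, and the user ID is of course made up.

```rust
use sequoia_openpgp as openpgp;
use openpgp::cert::CertBuilder;
use openpgp::serialize::Serialize;

fn main() -> openpgp::Result<()> {
    // Opinionated high-level path: build a certificate with one user ID
    // plus signing and encryption subkeys.
    let (cert, _revocation) = CertBuilder::new()
        .add_userid("Alice <alice@example.org>")
        .add_signing_subkey()
        .add_transport_encryption_subkey()
        .generate()?;

    // Writing a certificate out strips the secret key material by
    // default; this is the safe, easy path.
    let mut public = Vec::new();
    cert.serialize(&mut public)?;

    // Exporting the secret key material is an explicit opt-in: convert
    // to the TSK (transferable secret key) view first, then serialize.
    let mut secret = Vec::new();
    cert.as_tsk().serialize(&mut secret)?;

    assert!(secret.len() > public.len());
    Ok(())
}
```

The important bit is the default: serializing a Cert never writes secrets, and you have to reach for as_tsk() explicitly, which is the "easy way is the safe way" principle from earlier.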
But the really big paradigm shift in sq, and in Sequoia in general, is the way that we think about certificates. A certificate is this OpenPGP artifact that you use and that you throw out onto the internet someplace, on a key server, in a Web Key Directory; you publish it on your web page. And if I want to send you an encrypted message, then I download your certificate and I encrypt the message using your certificate. And then I send you that encrypted message and you're able to decrypt it with your keys. So certificates are really important, and how we use them is also important. And in sq, we're moving away from curated key rings.

A curated key ring means that the data that you have available locally has been checked. It's authentic. It's the right stuff. If you have a certificate for me, or one that claims to be for me, in your key ring, you're assuming that it's good. And it's sort of this say-yes-to-get-work-done mentality. So if you're using GPG and you have a curated key ring mentality, and that's not required in GPG, but it's how many people use it, we've observed, and you want to send a message to DKG and you address him by his email address, then GPG is going to warn you and say: do you want to use this key anyway? And it doesn't really provide you any options. The options are: get work done, or don't get work done. And certifying user IDs is not easy. The amount of energy required in order to certify a user ID means that hitting yes is just much easier.

And we want to move towards strong authentication. So, in sq, we treat the local certificate store as a cache. It's no better than the certificates that are stored on, for instance, the SKS key servers, where no authentication is done. By default, we'll just store anything there. What about these self-signed user IDs? Right, if you have a certificate, you create a user ID and you add it to that certificate. On my certificate, I have a self-signed user ID that says Neal, but anybody can create a certificate with a user ID that says Neal on it. We treat it at most as a hint. In sq, certificates can only be addressed by authenticated identifiers. And the way that we do this is we really, really embrace the web of trust. And now the question that you're probably asking is: is this going to be a usability nightmare? It's a question that we also asked ourselves, because we didn't know; we had to try it out. And I propose: let's take a look and see.

But we need to take a step back again and ask: what is authentication exactly? There are sort of two aspects to authentication. What we want to know is: what certificate should I use when I want to encrypt a message for Alice? Or alternatively, if I have a certificate, who does this certificate really belong to? Is it Alice's certificate? And really, self-signatures don't mean anything in this context, right? This certificate here that we see on the right, there are user IDs that say Alice, but did Alice create those user IDs? Or is it Mallory, who's trying to trick you? Or maybe somebody who's just trolling?

So what does authentication look like today? Well, we have centralized authentication, which is easy to use, but it's unsafe in the sense that you're relying on these central authorities. You're relying on hundreds of centralized CAs in X.509. These are controlled by governments, and not only your government; they're controlled by companies whose interest is to make money. And any one of them can trick you. Certificate Transparency helps, but you're still reliant on the centralized CAs. And they haven't done a good job historically. Up here at the top, we see a Google security blog post: Chrome planned to distrust Symantec certificates because they had made too many mistakes. This is not great. But Signal, Signal is great, right? In a certain sense, technically, Signal is even worse. In Signal, you have one key server, and it's on the same infrastructure as the message transport. The good news is, and I use Signal, you can trust the Signal Foundation, right? I believe in Moxie, but I don't think belief is enough. I think we want technical solutions. So what about peer-to-peer?
Here, we're talking about checking somebody's fingerprint, or checking the safety numbers in the context of Signal. You can do that, and that is really, really safe. And it's a really good thing to do if you're worried. But it has such high upfront costs that few people do it. We need something in between. And then there's a third model, which is the consistency model: do you have the same certificate every single time? This is called trust on first use, more or less. And it's really easy for users, until they have a problem. And then how do you resolve a conflict?

All right. So we have these different models, and maybe they're good enough, I don't know. Maybe they're even good enough for most. Pearl Buck, who was a Nobel Prize winner, said a hundred years ago that the test of a civilization is the way that it cares for its helpless members. So the weakest members, the people who need protection, the activists and the people who are being pursued. And so our goal is not to be good enough for most, but to be good enough for even more. We want to provide a progressive system that serves a range of needs. And the way that we're doing it is by providing different tools in order to increase confidence. The tools work to support the user. And then, based on the individual user's threat model, you can decide if the degree of confidence is high enough.

And for this, we use the web of trust, which is a powerful and flexible PKI. In the web of trust, everyone can act like a certification authority. That doesn't mean that everybody is your own personal certification authority; you have to opt in. But maybe you as an individual don't opt in by yourself; maybe it's your system administrator at work, or it's a family member whom you rely on. And the web of trust can use weak evidence. It's not a zero-or-one decision; it's possible to combine evidence in the web of trust. And the web of trust can work with all of the models that I presented before. It can be used in a centralized manner, it can be used in a federated manner, and it can be used in a peer-to-peer manner. Traditionally, people think of the web of trust as a peer-to-peer solution to authentication, where we go to key signing parties and we check fingerprints. But it doesn't have to be that. And so, if the web of trust is so good, why hasn't it succeeded? Why are we only using it in this very limited way? I think the reason is that we've been missing the tools that make it easy to automatically integrate evidence into a web of trust, and tools that make it easy to manage the web of trust. And I would say: until now. Because we've been working hard on improving the tooling.

So in order to illustrate the power of the tools, I want to do an example. I want to send an encrypted mail to DKG, or encrypt a message to DKG. So let's just try it out and see what happens. We do sq encrypt, we provide the email address, and we get an error. Well, that's not so great. Let's go to the key servers, let's go on the network, and see if we can find a certificate for DKG. In sq, this is the sq network fetch subcommand. And immediately we see something that doesn't give us confidence in the tools, I would suspect. We imported four certificates. Ouch. Which one do we use? Which one is the right one? Is one of the four even the right one? Maybe it's a fifth one that we didn't find. What should we do? The best thing that we could do would be to ask Daniel: what is your fingerprint? And then use that one. But what if we can't, or it's inconvenient?
We could ask somebody else whom we rely on. That's pretty good. Or a better solution is to ask multiple entities, combine the evidence, weigh the evidence according to the entities, and ideally do this in a completely automated way. And then you have a certain degree of confidence that a binding is correct. And maybe that's enough for you, maybe not; that depends on your threat model. And there's already a whole bunch of rudimentary evidence out there about what certificate we should use for DKG. There are a whole bunch of key servers, there's WKD, the Web Key Directory, and there's DANE for looking up certificates in DNS. And it turns out that keys.openpgp.org is a validating key server. That means that if you attempt to upload a certificate to keys.openpgp.org, the email addresses in the user IDs on the certificate get an email prompting them to follow a link to validate the user ID for that certificate. keys.mailvelope.com does something similar. Proton Mail does something similar, except you don't get an email: you log in and you say, this is my certificate. WKD is controlled by the user or their administrator, and the same goes for DANE. And sq network fetch already fetches from them all; you don't have to do it manually. And, by the way, it records the evidence in the web of trust. It's stored entirely as normal web of trust data structures, as defined by the IETF standard.

But how is it stored? The way it works, taking keys.openpgp.org as an example: it's more or less a de facto CA. So what we do locally is we create a shadow CA. We create a new certificate with the user ID "Downloaded from keys.openpgp.org". We have a local trust root, and the local trust root says this shadow CA is an intermediary CA. We don't create one for SKS, because SKS does not do any form of validation. And so, in the case of keys.openpgp.org, we download a certificate from that key server, we go through the user IDs, and then we create a certification for the returned user IDs using the keys.openpgp.org shadow CA certificate. And this evidence is automatically combined by the web of trust. So we have this trust root and shadow CAs, and they're created automatically. And by default, the shadow CAs are trusted minimally. Some users don't want to rely on keys.openpgp.org, and that's completely understandable. And as I mentioned, in the web of trust you can have a varying degree of confidence in a binding or in a CA. So we use the minimum: one out of 120. And what we also do is that the trust root, the shadow CAs, and the certifications that are created are all marked as non-exportable. We do this in order to protect the user's privacy.

So let's take a quick look here at how the evidence is recorded. We do sq pki list, we put down the email address, and sq helpfully shows us the three paths that it found. At the top, we see that there's the local trust root, followed by an intermediary CA called "Public Directories", followed by our shadow CA "Downloaded from keys.openpgp.org", and that in turn certifies the certificate from Daniel. And what that looks like graphically is shown here at the bottom. Some observations: the shadow CAs are partially trusted. keys.openpgp.org, we see, has a one on the edge leading to it; that's one out of 120. The same for WKD, the same for DANE. And we don't ever want to completely rely on all of the public directories out there. And so we insert in between a Public Directories shadow CA, and this acts as a sort of electrical resistor: at most 40 out of the 120 can flow through it. In this case, we see that the trust amount is three out of 120, and that's not enough to authenticate the certificate.
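To make the arithmetic concrete, here is a small, self-contained toy, not sequoia-wot's actual engine or data structures, that mimics the idea: each public directory contributes a small trust amount, the Public Directories node caps what can flow through it at 40, and full authentication requires 120.

```rust
/// Toy model of the trust amounts described above.
/// 120 means fully authenticated; each shadow CA contributes a little.
const FULLY_TRUSTED: u32 = 120;
const PUBLIC_DIRECTORIES_CAP: u32 = 40;

/// Evidence from one shadow CA (e.g. "Downloaded from keys.openpgp.org").
struct Evidence {
    source: &'static str,
    amount: u32,
}

/// Combine the evidence that flows through the Public Directories node:
/// sum the individual amounts, but never let more than the cap through.
fn public_directories_amount(evidence: &[Evidence]) -> u32 {
    let sum: u32 = evidence.iter().map(|e| e.amount).sum();
    sum.min(PUBLIC_DIRECTORIES_CAP)
}

fn main() {
    // One unit of trust each from keys.openpgp.org, WKD, and DANE.
    let evidence = [
        Evidence { source: "keys.openpgp.org", amount: 1 },
        Evidence { source: "WKD", amount: 1 },
        Evidence { source: "DANE", amount: 1 },
    ];

    let amount = public_directories_amount(&evidence);
    for e in &evidence {
        println!("{:>20}: {:3} of {}", e.source, e.amount, FULLY_TRUSTED);
    }
    println!("combined: {} of {} -> authenticated: {}",
             amount, FULLY_TRUSTED, amount >= FULLY_TRUSTED);
}
```

With only the public directories as evidence, you end up at 3 out of 120, which is exactly the "not enough to authenticate" situation in the demo; it takes a manual link or a more trusted CA to push a binding over the threshold.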
But what's interesting is that we have no evidence for other certificates either. So what do we do now? Are we done? Well, if we're sufficiently convinced, then we're done. If not, we need to get more evidence. Where can we get more evidence? Now we can think about the additional overhead of talking to Daniel, or finding people who know Daniel and his certificate. Whatever the case, once we're convinced, we have two options. We can create a public certification; this is what most people do when they go to a key signing party: they create a certification and they publish it on the key servers. Or we can create a private link, which is not exportable, and which can be either permanent or temporary. In this case, we're going to create a private link that is permanent. And we do sq pki link add. No password required; it uses the local trust root. It just works. Bam. So let's do sq pki list now: fully authenticated. And does it work? It does work: sq encrypt with DKG's email address as the recipient.

What if we decide we want to fully trust keys.openpgp.org? Also pretty easy. This is the general form for trusting any certificate as a CA: sq pki link add, saying that we want this certificate to be a CA for anything. I'll get to what that means in a minute. In this case, we're saying keys.openpgp.org should be fully trusted. So let's try another email address: sq pki list. It's fully authenticated, going from the local trust root, to "Downloaded from keys.openpgp.org", to the email address that we entered.

And there's more information that we can incorporate, and some of it we already do. We have usage information, for instance TOFU. If you download a certificate from a URL, like when you're downloading, say, Fedora or Tails, you can monitor the URL. We can use Autocrypt information. And we can even easily introduce CAs. So what are organizational CAs? You have an organization, say a company or a group of activists, and they are willing to delegate these authentication decisions to a trusted entity. Maybe it's the admin, or the nerd. And if I want to talk to somebody inside of that organization, then I don't have to authenticate every individual; I just have to authenticate the CA. And now I've bootstrapped trust into the organization. And by the way, we have a CA for our domain, sequoia-pgp.org, so if you want to contact us, you can use it.

How does it work? You first have to think about how much you want to trust it, from 1 to 120. Do you want to scope the trust? Because it's our CA, we might trick you. But you can rely on us, probably, to say what the correct certificates are for people in our organization. So here we can partially trust the CA: sq pki link add, and we're limiting it to sequoia-pgp.org, so it won't be used for other certificates. And here you can see Justus's email address. I do sq pki list, and it is fully authenticated using the CA. And by the way, if you want to run your own CA, there's tooling for that too. There's OpenPGP CA. It's a great way to bring up a CA, it's easy to use, and it's written by Heiko. I encourage you to check it out.
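And to illustrate what scoping a delegation means, here is another self-contained toy, again not sq's real types, and the addresses are placeholders: the CA's trust amount only counts for user IDs whose email address falls inside the scoped domain.

```rust
/// Toy model of a CA delegation as created by something like
/// "sq pki link add" with a domain restriction. Not sq's real types.
struct Delegation {
    ca: &'static str,
    amount: u32,                 // 120 = fully trusted
    scope: Option<&'static str>, // e.g. Some("sequoia-pgp.org")
}

impl Delegation {
    /// How much this delegation contributes towards authenticating
    /// the binding between a certificate and the given email address.
    fn amount_for(&self, email: &str) -> u32 {
        match self.scope {
            // Unscoped: applies to everything the CA certifies.
            None => self.amount,
            // Scoped: only applies to addresses in that domain.
            Some(domain) => {
                let matches = email
                    .rsplit_once('@')
                    .map(|(_, d)| d.eq_ignore_ascii_case(domain))
                    .unwrap_or(false);
                if matches { self.amount } else { 0 }
            }
        }
    }
}

fn main() {
    // Fully trust the organizational CA, but only for its own domain.
    let ca = Delegation {
        ca: "OpenPGP CA for sequoia-pgp.org",
        amount: 120,
        scope: Some("sequoia-pgp.org"),
    };

    for email in ["justus@sequoia-pgp.org", "mallory@example.org"] {
        println!("{} via {}: {} of 120", email, ca.ca, ca.amount_for(email));
    }
}
```

The real mechanism uses OpenPGP trust signatures with a regular-expression scope on the certification, but the effect is the same: the CA's word only counts inside its own domain.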
But there's more tooling out there where PKI can help, where you need PKI. Let's look at OpenSSH for a moment. In OpenSSH, the authentication keys are the identity keys. And if the authentication key is compromised, users have to update. Is that a problem? We have a great case study: just a few months ago, GitHub accidentally leaked their private key. The good news is it wasn't leaked for long. They immediately removed it, or seconds, minutes later, I don't know, and they rotated the key. The bad news is that every single GitHub user who used their RSA host key had to update their known_hosts file. Quite the pain in the butt. What if you could use OpenPGP's PKI and OpenPGP's certificates, where you have a separate identity key and you keep the identity key offline? When something like that happens, you can of course make an announcement, then you rotate the subkey, and that's it. There are two former Sequoia developers, Wiktor and David, who are currently working on that. The project is called ssh-openpgp-auth, and I encourage you to check that out as well.

What about commits? I'm sure many of you have signed a commit. What does it mean? I don't know; it doesn't mean anything if you don't have a policy. So we have a tool called Sequoia git, along with a document that describes how to define a signing policy for a project. You put the policy into a TOML file, the TOML file is directly embedded into your Git repository, and it evolves with the project. And then you're able to check whether or not commits are authentic according to the project's developers. So there's a whole bunch of tools that I think change the way that one could use OpenPGP and interact with the ecosystem.

So what if you want to use Sequoia today? Of course there's sq, which I presented. You can use it. It's packaged for Debian, it's packaged for Fedora, it's packaged for Arch, it's packaged for other distributions. But sq is not integrated into a lot of existing tools. So do you want to live in a sort of split-brain world, where some of your tooling uses one set of state and other tooling uses another? For that we have the GPG Chameleon, which is an implementation of GPG's de facto interface. You can just drop it in and use it, and here you see that gpg --version reports that it's actually the Chameleon. And it uses both GPG's state and sq's, which means that you immediately profit from sq's PKI tooling; it automatically uses that when doing web of trust calculations. So you don't actually have to do any migration. And if you're using Thunderbird, we have the Octopus, which again is a drop-in equivalent of the RNP interface that Thunderbird is currently using. And it includes web of trust support and GPG agent support.

Now, if you want to integrate OpenPGP, there's a standard. You can read the standard, but it gets complicated quickly. Recently, just two months ago, Heiko, Paul, Wiktor, David, and others published a book, OpenPGP for Application Developers. It's the book that should have existed 20 years ago. It didn't exist, and now it exists. It's a few hundred pages talking about the details of OpenPGP as they relate to the needs of application developers. And I think that this is really the game changer.

Who's been funding Sequoia? The project started in 2017. For six years, the pEp Foundation funded Sequoia. We received money from NLnet, and currently we are being funded by the Sovereign Tech Fund, at least until the end of the year. And post-2024? Well, that's an open question; maybe somebody here can help us with it. Thanks for listening. I hope that I've convinced you that users have different needs. There are different users, they have different needs, and I don't think that there is one universal solution.
There's not one implementation that is going to make everybody happy, necessarily. And if an implementation were to try, the fact that it tries to be everything to everyone means that it's going to make some people unhappy. Sequoia has a different architecture, it has different paradigms. Maybe it's the right one for you. Maybe it's the right one for some non-users, converting them into OpenPGP users rather than diverting existing ones. I firmly believe that diversity in an ecosystem is a strength. I believe that we are better together. And I believe that winning is not the dominance of a single implementation, but improving privacy and security for individuals. And as a small aside, by the way: implementing your own PKI is the new implementing your own crypto library. Please don't do that. Thank you very much.

Are there any questions? A question there.

Thanks for your talk. I have a question: I currently use GPG agent as an SSH agent. Is that possible with Sequoia too?

I can't hear anything, because the microphone or the speakers are pointed in this direction. Can you say it again?

I'm currently using GPG agent as an SSH agent. Can that be done with Sequoia too?

Can you use Sequoia as a GPG agent? As an SSH agent? Yeah. Okay.

Hi Neal. Thanks for the talk, lots of interesting points. Oh, sorry, I'll try to speak louder. Thanks for the talk and for the points raised. I'm wondering a bit about compatibility and interop. Could you speak on that topic a bit, because of, well, recent developments? Where do you see Sequoia in the future, especially this year and going forward, when it comes to interop with other implementations and newer versions of OpenPGP? I know this is a bit of a larger topic, but maybe you can share some of your thoughts on that.

Okay. So I understood the question as: what is the future compatibility with OpenPGP? Our intention is absolutely to implement whatever the IETF decides to standardize in the next revision of the OpenPGP protocol. And I believe what your question is really asking about is the LibrePGP thing. I mean, that's a whole can of worms, and I think it's an extremely unfortunate situation. My personal hope is that we're all going to implement the things that the standards bodies say are the standard, because that improves interoperability. One of the arguments around LibrePGP, which is the GnuPG and RNP alternative to the IETF standard, is that they say: we already shipped it. Well, they already shipped it. I think it absolutely makes sense to write down what it is that they shipped. But I hope that future developments are going to go in a direction where they also support the standard.

Hi. Do you integrate with hardware-backed private keys? So, for example, FIDO keys?

Right. So there are two ways that we do integration. The first one is: if you're currently using GnuPG and you're using the GPG agent, and then you decide, okay, I want to try out Sequoia, and you're using the Chameleon, then the Chameleon will automatically use the GPG agent. That means that there is zero configuration required; you automatically get access to all of the things that you had access to before. So that's sort of the easy thing.
The other half is what it looks like in terms of Sequoia's native support. For this, we have a private key store, which has a device-driver-style architecture, and there are different backends implemented behind it. Again, one of the backends is the GPG agent backend. But Heiko, for instance, did a lot of work in the smart card area. And so if you're using an OpenPGP smart card, then in the future you'll be able to use the private key store and it will be able to talk to your OpenPGP smart card. Likewise for PIV tokens, and we expect to add additional backends in the future.

Are there any concerns or ongoing work with regard to post-quantum?

Post-quantum is a good question. Right, of course, the whole IETF is very interested in addressing the question of how we deal with the post-quantum threat. And there, as I mentioned, the working group has submitted the document to the IETF for ratification, and it's currently in working group last call, or last call, I'm not entirely sure of the terminology. But we expect that it will be ratified within the next couple of months. And the working group has a new charter. The new charter has been accepted, and it includes post-quantum work. The post-quantum work has more or less already been done; it was a collaboration primarily between the BSI and Proton. The BSI a few years ago had a call, and MTG, which is a company in Germany, applied to do the post-quantum work in the OpenPGP space. Proton joined in, and there is an entire draft, or rather there have been multiple versions of a draft. Everybody is more or less happy with the draft. It is much less controversial, one might say, not that the crypto refresh is terribly controversial. And this is the direction that we're moving in, and I expect that it will also be ratified very quickly. The tricky part, of course, is the actual deployment in real life. That's not a very long time, but it seems that we do still have a couple of years.

Thank you very much. I think if there are further questions, his email was on the slide, so feel free to ask him, I'm assuming. It was a very enlightening talk, and a challenging talk too. And as Belgians, we'd like to give you a token of our appreciation for your effort. Thank you very much. Have a nice day.