Okay, hi. My name is Thomas. I'm an engineer at Arm, and I'm also a contributor to Project Veraison. This presentation is about Veraison: it wants to give you an idea of what the project is about and how it fits into the wider attestation and confidential computing picture. I hope that, after listening to this, you'll be motivated to at least have a look at the project, and maybe become an early adopter or even a contributor in the future. Who knows?

So, without further ado, let me go through what I've put together here, starting with a bit of trivia about the project.

We have a logo, and the colourful ovals there are supposed to represent grapes. In fact, "veraison" is a term used in winemaking: it is the moment when grapes start ripening, at which point they can be of different colours, and in fact the word means "change of colour". This is a sort of visual metaphor for the blurry nature of trust, which can be yes, can be no, green grape, red grape, or really anything in between.

There's some debate around how the word is pronounced: the French say it one way, the English another couple of ways, so you have a choice — multiple choices, in fact. Just please do not spell it like the American telecoms company, because that would be really confusing. It's also a backronym for VERificAtIon of atteStatiON.

The project started within Arm, in the Architecture and Technology Group, to which I belong, together with my colleagues on the team. It was adopted by the Confidential Computing Consortium in June 2022. It's currently at the incubation stage, meaning it's in the early phases of adoption, and we're looking at growing it across a few different metrics, the most obvious being code and documentation maturity; but we are also trying to grow the community a bit, and being here is part of that effort. The headline is that we've moved from being an Arm project to being under the Linux Foundation umbrella, therefore we are completely open governance and, of course, open source.

Before we dive into the core of the presentation, though, let me give you a bunch of useful pointers: the code base on GitHub, the chat, the mailing list hosted by the Confidential Computing Consortium, and this confusing URL, which is the one for our regular weekly calls. I'm not expecting you to memorize it; in fact, you can just follow the first link in the list, the GitHub org, and the splash page should have all of the other links.
In general, feel free to join any of these channels, pop into our calls, ask questions, whatever. You will always be very much welcome.

So now, a quick recap on remote attestation, starting with the problem. Suppose you have these two guys, an attester and a relying party, that need to engage in some sort of distributed computation. Suppose also that initially they don't trust each other — they are mutually distrusting — and that you can't make progress in the computation until a trust relationship is established between the two.

Instances of this kind of situation abound. For example, in confidential computing the attester is typically the party that executes the confidential workload, and the relying party needs to know that this party is trustworthy before it shares some input with it — say you need to ship an ML model, or some privacy-sensitive data, or both. And there is obviously the dual problem, where the relying party is the consumer of the output of the computation: say you are an actuator of some sort that acts upon signals coming from the attester, and you want to trust the attester before making a mess — think critical infrastructure, especially.

So, to break this impasse, the natural thing to do is for one party to convince the other that they can be trusted. And here's where attestation comes into the frame. What is attestation? It's a technique that uses a specialized component rooted in hardware, called the root of trust, that the attester can use to do basically two things: (a) sample the current state of its TCB, its trusted computing base, and put together a report, and (b) sign this report with a secret identity key that is securely stashed inside the root of trust, in hardware. All of that happens in a trustworthy way, such that no one, not even co-located software or firmware, can subvert this sequence of actions.

This signed report is called attestation evidence, or just evidence. The evidence, which is basically a very strong authentication signal, can be sent to the relying party, which, once it receives it, needs to verify it by checking that the signature over the report is correct and that the identity of the signer is known and trusted, and also checking that the reported TCB state is acceptable — "good", for some local, policy-defined interpretation of what good means. Verification of evidence is the process that covers at least these two checks we just discussed.
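To make those two checks concrete, here is a minimal, hypothetical Go sketch of an evidence appraisal routine. None of this is Veraison's actual API: the Evidence type, the trust-anchor lookup and the reference-value comparison are just illustrative stand-ins for the signature check and the TCB-state check described above.

```go
package appraisal

import (
	"bytes"
	"crypto/ecdsa"
	"crypto/sha256"
	"errors"
)

// Evidence is a hypothetical, already-decoded attestation report: a set of
// TCB measurements plus a signature produced by the attester's root of
// trust over those measurements.
type Evidence struct {
	AttesterID   string            // identifies which signing key was used
	Measurements map[string][]byte // component name -> digest
	Signed       []byte            // the serialized report that was signed
	Signature    []byte            // ASN.1 ECDSA signature over SHA-256(Signed)
}

// Appraise performs the two checks discussed above:
//  1. the signature verifies under a known, trusted identity key;
//  2. every reported measurement matches the expected ("good") value.
func Appraise(ev Evidence, trustAnchors map[string]*ecdsa.PublicKey, refValues map[string][]byte) error {
	// Check 1: the signer identity is known and the signature is valid.
	pk, ok := trustAnchors[ev.AttesterID]
	if !ok {
		return errors.New("unknown attester identity")
	}
	digest := sha256.Sum256(ev.Signed)
	if !ecdsa.VerifyASN1(pk, digest[:], ev.Signature) {
		return errors.New("evidence signature does not verify")
	}

	// Check 2: the reported TCB state matches the expected reference values.
	for name, want := range refValues {
		got, ok := ev.Measurements[name]
		if !ok || !bytes.Equal(got, want) {
			return errors.New("TCB measurement mismatch for " + name)
		}
	}
	return nil
}
```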
In the Remote Attestation Procedures (RATS) architecture, which is RFC 9334, freshly published, this process of verification of evidence is taken care of by the verifier role, which basically mediates between attesters and relying parties. Note that the lines in this picture are not real channels between real devices: they are logical channels between architectural roles, so they can be recomposed and put together in very different ways depending on the protocol. Logically, though, these are the relationships between the various architectural roles.

In order to perform this function, the verifier needs to know a few very important things. First, (a) how to verify the identity of the attester; typically that is done by knowing, in a reliable way, a public key associated with the secret signing key of the attester. And (b) what the expected state of the attester's TCB is.

This can become pretty messy, depending on how complex the attester is. At one end of the spectrum you have very simple attesters that can be described with a single measurement that does not change, or changes glacially. At the other end there are composite attesters that are made of more than one attester, each sporting its own TCB, each TCB made of multiple, separate, independently moving parts — software components, configuration and whatnot — with multiple different supply chain actors involved. Things can become really hairy.

If the complexity is very low, it makes sense to co-locate a simple verifier with the relying party; but if that's not the case, it might be a reasonable choice to design a system where the verification function is effectively offloaded to a separate architectural component. These options — aggregation and disaggregation — are not mutually exclusive. Take, for example, the case where the attester is composite and its evidence can be cleanly split along a platform/workload axis. In that case, one might want to call an external platform verifier and, if everything is okay, then move to a local verifier for the verification of the workload; a sketch of this split follows below. As I said, these things are logical; they can be reassembled and put together in very different ways.

But in general, whatever system architecture you come up with, the verifier role needs a bunch of trusted links to the supply chain that is involved in the verification, because the supply chain endorses the attester, so it's critical that the link between the supply chain and the verifier is a trusted one, and that endorsements are genuine.
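As an illustration of the platform/workload split just mentioned, here is a hypothetical sketch: the platform part of the evidence is offloaded to a remote verifier, and only if that succeeds is the workload part appraised locally. The interface and function names are invented for the example; they don't correspond to any Veraison or RATS-defined API.

```go
package composite

import (
	"context"
	"fmt"
)

// PlatformVerifier abstracts a remote verification service that appraises
// the platform portion of a composite attester's evidence.
type PlatformVerifier interface {
	VerifyPlatform(ctx context.Context, evidence []byte) (trusted bool, err error)
}

// verifyWorkload is a stand-in for a local check of the workload portion,
// e.g. comparing the workload measurement against a locally known value.
func verifyWorkload(evidence, expected []byte) bool {
	return string(evidence) == string(expected) // placeholder comparison
}

// AppraiseComposite first offloads the platform appraisal to the remote
// verifier and only then runs the local workload appraisal.
func AppraiseComposite(ctx context.Context, pv PlatformVerifier, platformEv, workloadEv, expectedWorkload []byte) error {
	ok, err := pv.VerifyPlatform(ctx, platformEv)
	if err != nil {
		return fmt.Errorf("platform verification failed: %w", err)
	}
	if !ok {
		return fmt.Errorf("platform is not trustworthy")
	}
	if !verifyWorkload(workloadEv, expectedWorkload) {
		return fmt.Errorf("workload measurement mismatch")
	}
	return nil
}
```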
Those are the green boxes in this picture. The verifier can also have an owner — it typically has an owner — that can provide the verification policies for evidence to it, say to customize the process of appraisal. And to complete the RATS architectural picture, the relying party owner can feed in the policy for how to act on the attestation results coming from the verifier, that is, how the relying party should make decisions based on the appraisal done by the verifier regarding the attester. So this finishes the RATS recap.

The question now is: where is Veraison in this picture? As you may have guessed, it is the blue box at the centre. But it's not just that: it's also all the lines that are attached to the blue box, that is, all the verifier interfaces, which are quite a few. Let's start from the bottom left and then move clockwise.

Evidence: we've built a bunch of libraries for manipulating various evidence formats, both from the point of view of the verifier, so decoding and verification, but also from the attester's point of view, which means encoding and signing. This way, if one needs to put together an end-to-end flow, say for integration testing or for a demo, it's quite easy to build an attester emulator. You don't need to deal with the real hardware, which can be tricky, especially if you're dealing with CI environments.

The current list of supported evidence includes two EAT profiles: PSA for Cortex-M, so IoT gizmos, and CCA for the new Armv9 confidential computing architecture. We also have a TPM profile, which came from an integration project we did with our friends at EnactTrust, who have a product for monitoring device health that can use Veraison as a backend. We have a bare-bones implementation of DICE — TCG, but I think Open DICE is okay too. And then we have AWS Nitro, contributed by our friends at Veracruz. Veracruz is another CCC project, so a confidential computing project, that uses Veraison as the backend for their proxy CA.

On the endorsement and reference-value front, we have an implementation of CoRIM, which is a format we co-designed with Intel and Fraunhofer in the DICE working group in TCG, and which has now been adopted by the IETF RATS working group on the standards track. It's basically a specialized format for describing what the attester looks like to the verifier, which aggregates different subformats for specifying a bunch of orthogonal but coexisting dimensions of the platform: software, firmware, trust anchors and other things.
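To give a feel for the "attester emulator" idea mentioned above, here is a toy sketch of an emulator that wraps a fake evidence blob and POSTs it to a verification service over HTTP. The endpoint, blob and headers are invented for illustration; the real Veraison verification API is a challenge-response protocol with its own session handling and format-specific media types.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// In a real flow this would be a signed evidence token (e.g. a PSA or
	// CCA EAT); here it is just an opaque placeholder blob.
	evidence := []byte{0xd2, 0x84, 0x43, 0xa1, 0x01, 0x26}

	// Hypothetical endpoint, for illustration only.
	req, err := http.NewRequest(http.MethodPost, "http://localhost:8080/verify", bytes.NewReader(evidence))
	if err != nil {
		panic(err)
	}
	// Real deployments would use a format-specific media type here.
	req.Header.Set("Content-Type", "application/octet-stream")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	result, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s result=%s\n", resp.Status, result)
}
```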
For policy, we have integrated OPA, the Open Policy Agent, which is an existing, successful, general-purpose policy engine. We found something that existed and was fit for purpose, so we didn't feel the need to reinvent this specific bit.

For attestation results, there's an information model being standardized in the IETF called AR4SI, which allows the appraisal output to be normalized so that relying party policies can be simplified greatly, because they are fully decoupled from the specifics of the attester. But because AR4SI is just an information model, we had to create a serialization for it, which we based on EAT. We call it EAR. EAR is a good candidate for standardization, because there's nothing else at this point in time, so we're trying to push it through the RATS working group in the IETF as well, and it's the only output format we support for now. As far as attestation result policies go, we do nothing, because that is completely out of scope: it is entirely on the relying party.

So now I want to map what we just discussed onto what exists in the Veraison GitHub organization. Note that nearly the entirety of what follows is Golang packages, plus command-line tools and services/daemons in the good old UNIX pattern, and this was a conscious choice. We chose to go with Golang because one of the main goals of the project is to provide a bunch of componentry to assemble systems that provide verification as a service, and Golang is a very good language in terms of native library support and ecosystem for that; also, the learning curve is very non-steep, I would say — it's trivial to learn — and therefore we hope that more contributors could come by, by lowering the barrier here.

Let's start with a couple of layer-zero packages: EAT and go-cose. EAT is the Entity Attestation Token, the main format for attestation messages — either evidence or attestation results — that has been put forward by the IETF. It's basically a framework that extends CWT and JWT by adding a number of claims with attestation-specific semantics, plus a way to instantiate the framework through so-called profiles, PSA and CCA being just two such examples. The EAT package is not tracking the most recent version of the draft: we decided to wait for the draft to become an RFC before doing a final alignment pass, because it already has all the claims that we need for the dependent packages, so for the moment it's good as is.
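Going back to the policy point above: here is a minimal sketch of how a Go service can embed OPA to evaluate an appraisal policy, using OPA's rego package. The policy and the input are invented for illustration and have nothing to do with Veraison's actual policy schema.

```go
package main

import (
	"context"
	"fmt"

	"github.com/open-policy-agent/opa/rego"
)

// An illustrative policy: accept the evidence only if the reported
// firmware version is the expected one.
const policy = `
package appraisal

default allow = false

allow {
	input.firmware_version == "1.2.3"
}
`

func main() {
	ctx := context.Background()

	// Compile the policy once and prepare it for repeated evaluation.
	query, err := rego.New(
		rego.Query("data.appraisal.allow"),
		rego.Module("appraisal.rego", policy),
	).PrepareForEval(ctx)
	if err != nil {
		panic(err)
	}

	// The input would normally be derived from the decoded evidence claims.
	input := map[string]interface{}{"firmware_version": "1.2.3"}

	results, err := query.Eval(ctx, rego.EvalInput(input))
	if err != nil {
		panic(err)
	}

	allowed := len(results) > 0 && results[0].Expressions[0].Value == true
	fmt.Println("policy allows:", allowed)
}
```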
But we will need to bring it up to date as soon as the document goes through the final stages of the standardization process.

EAT builds on CWT, which in turn uses COSE Sign and Sign1, so early on we realized that we needed a COSE implementation in Go, and the only thing we could find at that point in time was go-cose, which was originally developed by Mozilla for their Autograph service, but which supported only COSE Sign. So we forked it and extended it to support Sign1; but then, at the point in time when we wanted to contribute it back to the mainline, Mozilla discontinued the project, so we took responsibility for it alongside Microsoft. Together with Microsoft we did a fair amount of work, including improving the ergonomics of the interfaces and making the implementation generally more robust. In that respect, we went through a couple of external security reviews and addressed all the comments, so we're fairly confident that this package is production quality. We shipped 1.0.0 just a couple of weeks ago, and this is the first ever released bit of Veraison, so we're particularly excited by that. It's an interesting package among the lot, also because it's used not just by Veraison, but also by other quite big projects — Notary in CNCF and Sigstore in OpenSSF — so it has come out as a nice, useful building block that has relevance in the community.

Now moving on to the evidence bits — the red and orange things. We have the two EAT profiles, for CCA and PSA, the DICE thingy, and the sort of house-like shape, which is the command-line interface called evcli, an attester emulator that also talks to the client side of the Veraison verification API, so it can be used to play with the services quite easily on the command line.
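Since go-cose came up, here is a minimal COSE_Sign1 sign/verify round trip, roughly following the go-cose v1 API as I understand it; treat the exact calls as a sketch rather than authoritative documentation. Note that a real attester would keep the private key inside the root of trust rather than generating it in software.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"

	"github.com/veraison/go-cose"
)

func main() {
	// Ephemeral ES256 key pair; a real attester would use the identity key
	// held by its root of trust.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	signer, err := cose.NewSigner(cose.AlgorithmES256, key)
	if err != nil {
		panic(err)
	}

	// Build and sign a COSE_Sign1 message carrying an opaque payload.
	msg := &cose.Sign1Message{
		Headers: cose.Headers{
			Protected: cose.ProtectedHeader{
				cose.HeaderLabelAlgorithm: cose.AlgorithmES256,
			},
		},
		Payload: []byte(`{"example-claim": "example-value"}`),
	}
	if err := msg.Sign(rand.Reader, nil, signer); err != nil {
		panic(err)
	}
	wire, err := msg.MarshalCBOR()
	if err != nil {
		panic(err)
	}

	// Receiver side: decode and verify with the corresponding public key.
	verifier, err := cose.NewVerifier(cose.AlgorithmES256, &key.PublicKey)
	if err != nil {
		panic(err)
	}
	var rx cose.Sign1Message
	if err := rx.UnmarshalCBOR(wire); err != nil {
		panic(err)
	}
	if err := rx.Verify(nil, verifier); err != nil {
		panic(err)
	}
	fmt.Println("COSE_Sign1 signature verified")
}
```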
The yellow things are the attestation results packages. Here there's basically only one package, veraison/ear, and there's an associated CLI, arc, which can be used to verify and pretty-print results, so one can pipe evcli output into arc to see what happened during appraisal.

The green things are the bunch of endorsement and reference-value related packages, including the top-level manifest format, corim/corim, and the dependent bits for firmware, verification keys, software and so on — corim/comid, corim/cots and swid respectively — and also cocli, which is the command-line interface for assembling CoRIMs from a bunch of JSON templates: you sign them and then you can submit them to the Veraison services using the provisioning API, which is linked into cocli.

And finally, the blue box, which includes the microservices, which I will talk about in the next slide, and the client-side interface of the Veraison API, which also exists in Rust with C bindings; I think a pure C implementation has been contributed as well, though I don't know whether it's been fully merged or is still an open PR.

Anyway, let's have a quick look at the services architecture. Basically, there are two pipelines, one for verification (the bottom one) and one for provisioning (the top one), that converge onto the core service, VTS, which connects to the data and plugin stores. VTS stands for Veraison Trusted Services — Vitis is another winemaking term, by the way — and it is basically the Veraison services' trusted computing base: it has all the security-related computation in it. The interesting thing is that the processing in both pipelines happens in stages, with very precise hook points where plugin code can be supplied by the user in order to customize certain aspects of the processing. So basically, as a user who wants to add their own format, either of evidence or of endorsements and reference values, you write your plugin code so that it adapts your formats to the standardized processing pipeline.

Right, so, next steps. We have a few exciting new things in the pipeline: adding new formats — opportunistically, though, depending on contributions, integrations, et cetera. We want to improve documentation — not a very exciting bit, but a necessary step in making the project useful and usable. And this is a big one: we want to work on identity and access management to support a more generic multi-tenant service; this is the single missing bit that prevents Veraison from being used in a production context at this point in time. Finally, we want to allow a static, plugin-less build in order to reduce the potential attack surface of the services. That's what the next months will bring us.
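To illustrate the hook-point idea — not Veraison's actual plugin interfaces, which live in the services repository and differ in detail — here is a hypothetical sketch of what an evidence-decoding plugin contract could look like in Go.

```go
package plugin

// Claims is the normalized, format-independent view of evidence that the
// rest of this hypothetical pipeline operates on.
type Claims map[string]interface{}

// EvidenceDecoder is an invented contract for a scheme-specific plugin: it
// turns a format-specific evidence blob into normalized claims and tells
// the pipeline which trust anchor to use for signature verification.
type EvidenceDecoder interface {
	// Formats returns the media types this plugin can handle.
	Formats() []string
	// TrustAnchorID extracts an identifier used to look up the attester's
	// verification key in the trust-anchor store.
	TrustAnchorID(evidence []byte) (string, error)
	// Decode verifies the evidence against the given trust anchor and
	// returns the claims it carries.
	Decode(evidence, trustAnchor []byte) (Claims, error)
}

// registry is a toy hook point: the core pipeline dispatches on media type
// to whichever plugin registered for it.
var registry = map[string]EvidenceDecoder{}

// Register wires a decoder into the pipeline for all its media types.
func Register(d EvidenceDecoder) {
	for _, mt := range d.Formats() {
		registry[mt] = d
	}
}

// Lookup returns the decoder registered for a media type, if any.
func Lookup(mediaType string) (EvidenceDecoder, bool) {
	d, ok := registry[mediaType]
	return d, ok
}
```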
So, questions? Thank you very much, by the way, for listening.