[00:00.000 --> 00:28.400] We've been working on it for a couple of years, and its design points are focused on appliances [00:28.400 --> 00:39.360] for data centers: lights-out, hands-off environments. A few years ago, we presented at the [00:39.360 --> 00:44.840] Linux Security Summit, where we went into detail about securing secrets in TPMs for data center [00:44.840 --> 00:53.240] appliances, along with some information about how we are going to keep those secrets [00:53.240 --> 00:59.600] in these environments. The key pieces that we've incorporated from that [00:59.600 --> 01:07.040] discussion are machine trust, focused on the use of a secure platform and the TPM 2.0 devices we [01:07.040 --> 01:16.800] need to guard a unique identity and keys that are placed into a TPM, both used for identity [01:16.800 --> 01:25.840] and device authenticity. Further, the operating system will only be able to extract the TPM [01:25.840 --> 01:31.560] secrets if we have securely booted all the way up to kernel and user space and the [01:31.560 --> 01:40.080] chain of trust is still verified; this ensures that we are on the device we expect and we're running [01:40.080 --> 01:45.560] the software we expect before we can access these secrets. It also includes support for [01:45.560 --> 01:52.920] unattended storage unlock for data-at-rest protection, and it incorporates continuous updates. [01:52.920 --> 02:04.760] The root of trust for the machine is a certificate and an associated key pair, which we protect in [02:05.200 --> 02:13.080] hardware. The certificate holds a product ID and a serial number, which binds it to the physical [02:13.080 --> 02:22.120] system that people are using. The certificate and key pair are inserted into the TPM [02:22.120 --> 02:29.600] at manufacturing time, and the certificate gives us an immutable identity to verify that the device [02:30.360 --> 02:38.280] is authentic, that we're running the software the product expects, and that it [02:38.280 --> 02:49.120] hasn't been tampered with.

At runtime, following the boot flow from the last slide, we start with our bootloader [11:20.120 --> 11:30.040] loaded, and we execute a fairly normal sequence of bringing the kernel up. We transition into [11:30.040 --> 11:39.200] the initramfs startup, and then we get into the machine trust step. We're going to unlock [11:39.200 --> 11:45.880] the storage to pull out our EA policies, which are loaded into the TPM, and then we check [11:46.000 --> 11:53.240] the PCR registers to make sure that the values in those registers are what we expect. If they're not, [11:53.240 --> 11:59.640] then we halt the system right there, because something has been modified, and we don't have [11:59.640 --> 12:07.000] access to any of the secrets in the TPM that we've locked away. On success, though, we can recover our [12:07.160 --> 12:15.560] trust keys and certs, whose passphrases are protected in the TPM; they are loaded into the TPM and used [12:15.560 --> 12:22.640] for only a few minutes. Once we have extracted that information, we extend PCR 7 with an [12:22.640 --> 12:29.200] additional measurement, which blocks further access to those TPM secrets from any of the runtime services.
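To make the machine-trust step above more concrete, here is a minimal sketch of that boot-time flow in Go, shelling out to tpm2-tools. It is not the project's actual implementation: the persistent handle 0x81000001, the use of PCR 7 alone, the simple PCR-bound authorization (standing in for the signed EA policy described in the talk), and the all-zero expected measurement are placeholders assumed for illustration.

```go
// Boot-time machine-trust sketch: check PCR 7, unseal key material bound to
// it, then extend PCR 7 so later runtime services can no longer unseal it.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// expectedPCR7 is the PCR 7 value recorded for this product/software
// combination; a real build would carry it in the signed manifest.
const expectedPCR7 = "0000000000000000000000000000000000000000000000000000000000000000" // placeholder

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. Read PCR 7 and compare it against the expected measurement.
	out, err := run("tpm2_pcrread", "sha256:7")
	if err != nil {
		log.Fatalf("pcrread failed: %v\n%s", err, out)
	}
	if !strings.Contains(strings.ToLower(out), expectedPCR7) {
		// The real system halts the box here; nothing below is reachable.
		log.Fatal("PCR 7 does not match the expected measurement; halting")
	}

	// 2. Unseal the storage passphrase. The sealed object is assumed to sit
	//    at persistent handle 0x81000001 with a policy bound to PCR 7, so
	//    this fails anyway if the measurements are wrong.
	pass, err := run("tpm2_unseal", "-c", "0x81000001", "-p", "pcr:sha256:7")
	if err != nil {
		log.Fatalf("unseal failed: %v\n%s", err, pass)
	}
	fmt.Printf("unsealed %d bytes of key material\n", len(pass))

	// 3. Extend PCR 7 with random data so the policy no longer matches and
	//    runtime services cannot repeat the unseal.
	junk := make([]byte, 32)
	if _, err := rand.Read(junk); err != nil {
		log.Fatal(err)
	}
	if out, err := run("tpm2_pcrextend", "7:sha256="+hex.EncodeToString(junk)); err != nil {
		log.Fatalf("pcrextend failed: %v\n%s", err, out)
	}
}
```

The final PCR extend is the design point being described in the talk: once PCR 7 no longer matches the sealing policy, nothing started later in boot can repeat the unseal.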
[12:29.640 --> 12:41.680] Before we start the containers for the runtime services, we have to go through steps to ensure that [12:41.680 --> 12:50.560] the images that were included on the system match the product and its expectations and [12:51.320 --> 13:02.480] have been signed by the product certificates. We verify the certificates and signatures, and [13:02.480 --> 13:08.680] then each of the OCI images in the manifest is mounted and the services [13:08.680 --> 13:15.200] can be started. We can also optionally place these into a particular [13:15.960 --> 13:35.960] OCI layout. We want to ensure that the stored images haven't been modified, and the way we do that is that [13:35.960 --> 13:42.320] stacker builds a squashfs for all the layers in the OCI image, which is read-only, [13:42.480 --> 13:48.680] and the final filesystem is mounted from it. In addition to the squashfs layer, the OCI image has verity hashes [13:48.680 --> 13:53.840] that were calculated at build time. With that, we can build a dm-verity block device in Linux, [13:53.840 --> 14:01.800] and on access, the first time we read from that block device, the kernel will do the [14:01.800 --> 14:10.640] work of hashing and evaluating whether that block is valid or has been tampered with.

[14:12.320 --> 14:42.240] For our updates, we have continuous updates: we can sync from our [14:42.360 --> 14:47.440] image repository, and an update is much like what we do when we boot up. We have to go through the [14:47.440 --> 14:53.440] same sort of verification of signatures, such that we know that this is the software that's [14:53.440 --> 14:58.920] expected for the device we're running on and that all of the layers that we've built have been [14:58.920 --> 15:04.720] signed correctly. Once we've applied that, we can update our configuration database to point [15:04.720 --> 15:09.880] to those versions. For running containers, we can signal them to restart, but if we have [15:09.880 --> 15:16.760] changes to our bootloader or the UKI, we'll need to reboot the system so that we can execute [15:16.760 --> 15:27.600] the new versions. Lastly, for offline protection, or prevention of, say, rolling back down to something [15:27.600 --> 15:32.320] where there are zero days for an older kernel, we need to talk about verification.
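As a companion to the dm-verity discussion above, here is a small sketch of how a squashfs layer with a build-time verity root hash might be mounted, driving veritysetup from Go. The paths, device names, and root hash are illustrative placeholders rather than the product's real layout, and the project's own tooling may set up device-mapper directly instead of shelling out.

```go
// Mount sketch for a squashfs layer protected by dm-verity, with the root
// hash computed at image build time (e.g. via `veritysetup format ...`).
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mountVerifiedLayer creates a dm-verity mapping over the squashfs and
// mounts it read-only. Any block whose hash does not match the tree rooted
// at rootHash produces an I/O error the first time it is read.
func mountVerifiedLayer(dataDev, hashDev, rootHash, name, target string) error {
	// veritysetup open <data_device> <name> <hash_device> <root_hash>
	if out, err := exec.Command("veritysetup", "open", dataDev, name, hashDev, rootHash).CombinedOutput(); err != nil {
		return fmt.Errorf("veritysetup open: %v: %s", err, out)
	}
	// Mount the verified device-mapper node; squashfs is read-only anyway.
	if out, err := exec.Command("mount", "-t", "squashfs", "/dev/mapper/"+name, target).CombinedOutput(); err != nil {
		return fmt.Errorf("mount: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative values only; in the real system the paths and root hash
	// come from the OCI manifest verified against the product certificate.
	if err := mountVerifiedLayer(
		"/oci/blobs/layer0.squashfs", // data device (squashfs built by stacker)
		"/oci/blobs/layer0.verity",   // hash device written at build time
		"f0e1d2c3000000000000000000000000000000000000000000000000000000ff", // placeholder root hash
		"layer0",
		"/run/services/layer0",
	); err != nil {
		log.Fatal(err)
	}
}
```

Because dm-verity checks blocks lazily on first read, tampering surfaces as an I/O error at access time instead of requiring a full-image hash pass during boot.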