[00:00.000 --> 00:11.400] So today we will talk about the Trustworthy Platform Module (TwPM) project. [00:11.400 --> 00:15.920] So, hello, I am Maciej. [00:15.920 --> 00:19.120] I am currently an engineering manager at 3mdeb. [00:19.120 --> 00:22.520] I have been an open source contributor for several years. [00:22.520 --> 00:25.960] I'm interested in various topics, such as build systems. [00:25.960 --> 00:29.240] I especially enjoy build reliability. [00:29.240 --> 00:32.400] I like embedded systems in general. [00:32.400 --> 00:36.080] I have worked on several layers of embedded Linux projects. [00:36.080 --> 00:40.840] Now, I'm also working on some things with coreboot. [00:41.160 --> 00:46.400] Security aspects are also something I'm interested in. [00:46.400 --> 00:51.480] There is some contact information on the slide if you want to reach me. [00:52.000 --> 00:56.240] Some of you have already heard of us: we are 3mdeb, [00:56.240 --> 01:00.000] a Poland-based company that has been on the market for several years. [01:00.000 --> 01:05.040] We work mostly in the open source firmware and embedded Linux areas. [01:05.520 --> 01:10.920] We are a part of various organizations, [01:10.920 --> 01:14.320] various open source initiatives, like coreboot as a [01:14.320 --> 01:18.320] licensed service provider, or as a Yocto Project participant. [01:19.760 --> 01:22.760] So, on to the agenda. [01:22.760 --> 01:31.680] Let's start with explaining what the TwPM project is [01:31.680 --> 01:34.400] and why we decided to start it. [01:34.400 --> 01:38.760] We'll talk a bit about TPM modules. [01:38.760 --> 01:43.960] Then we'll explain how we started such a project, [01:43.960 --> 01:46.760] what challenges we expect, [01:46.760 --> 01:53.840] what roadmap we have, and what the current state of the project is. So, [01:53.840 --> 01:55.560] trustworthy versus trusted. [01:55.560 --> 02:00.080] You probably know Trusted Platform Modules. [02:00.080 --> 02:05.080] We came up with the name Trustworthy Platform Module [02:05.080 --> 02:10.480] to indicate that the goal is to make it a bit more [02:10.480 --> 02:15.200] trustworthy than it is today by providing open source [02:15.200 --> 02:18.640] firmware for it. [02:18.640 --> 02:22.000] Another goal is to be compliant with [02:22.000 --> 02:24.800] the TCG PC Client specification, [02:24.800 --> 02:28.560] which might in fact be quite difficult or maybe even impossible. [02:28.560 --> 02:31.360] We will see; we will also discuss that later. [02:31.360 --> 02:34.800] But, yeah, that's the goal. [02:34.800 --> 02:42.400] The project is funded by NLnet, by the NGI Assure fund. [02:42.400 --> 02:48.440] So, why did we come up with the idea? [02:48.440 --> 02:55.920] Traditional TPMs are dedicated microcontrollers, and they [02:55.920 --> 02:59.480] always, not just typically, have proprietary firmware, [02:59.480 --> 03:05.040] which cannot be easily audited, at least not by regular users, [03:05.040 --> 03:10.200] maybe by some governments. [03:10.200 --> 03:15.920] If there are bugs, they might not be fixed, [03:15.920 --> 03:19.800] depending on what the vendor is planning for [03:19.800 --> 03:24.040] a given TPM module or TPM chip. [03:24.040 --> 03:29.360] There are also capabilities which might be limited in some cases, [03:29.360 --> 03:31.800] and if there is no firmware update from the vendor, [03:31.800 --> 03:35.720] they cannot be modified by the user.
[03:35.720 --> 03:40.960] There are also several different interfaces: LPC, [03:40.960 --> 03:45.520] which is present mostly on older motherboards, [03:45.520 --> 03:47.640] but it's still commonly used. [03:47.640 --> 03:49.400] There is also SPI, [03:49.400 --> 03:54.040] which is typically present on newer PCs. [03:54.040 --> 03:56.520] And there is also I2C, [03:56.520 --> 04:02.200] which is present mostly on mobile devices and some other platforms. [04:02.200 --> 04:09.560] Another problem is that if you [04:09.560 --> 04:12.560] wanted to buy a TPM module at some point, [04:12.560 --> 04:14.520] you probably know that there is a bunch of [04:14.520 --> 04:15.960] different types of connectors, and [04:15.960 --> 04:21.000] you need to get one which is compatible with your mainboard. [04:21.000 --> 04:23.040] Even if they look the same, [04:23.040 --> 04:25.920] they really are not, and if you plug in [04:25.920 --> 04:28.520] an incompatible one, you may break [04:28.520 --> 04:31.800] your module or even your mainboard. [04:31.800 --> 04:40.760] For example, here we can see LPC connectors for MSI and ASUS. [04:40.760 --> 04:43.720] They look the same, the empty pin is in the same place, [04:43.720 --> 04:47.360] but ground and power are swapped. [04:47.360 --> 04:55.400] So, some smoke may happen if you plug an ASUS module into an MSI board or the other way around. [04:57.160 --> 05:00.320] It's a similar story for SPI. [05:00.320 --> 05:03.320] The connectors also look the same at first glance, [05:03.320 --> 05:07.360] but they are very, very different for different motherboards. [05:07.360 --> 05:10.280] Even for the same vendor, [05:10.280 --> 05:15.360] you can have different types of connectors, and there are many, many more. [05:16.920 --> 05:22.600] At first, we also wanted to address the hardware problem, [05:22.600 --> 05:27.640] but it looks too complex. [05:27.640 --> 05:31.520] We want to focus on just the firmware for a start. [05:32.040 --> 05:36.120] The variety of connectors is huge, [05:36.120 --> 05:38.680] bigger than anticipated. [05:41.360 --> 05:43.720] So, at the early stage of the project, [05:43.720 --> 05:47.640] we focus purely on the firmware, connecting it even by [05:47.640 --> 05:49.760] some jumper wires to the motherboard, [05:49.760 --> 05:52.120] just to have a proof of concept of the firmware; [05:52.120 --> 05:58.120] then we can worry about the other stuff. [05:59.120 --> 06:01.560] So, how do we start such a project? [06:01.560 --> 06:08.560] We need some code to process our TPM2 commands. [06:08.560 --> 06:13.960] There are a few command processors out there. [06:13.960 --> 06:16.000] We know of at least two of them: [06:16.000 --> 06:19.080] the Microsoft reference implementation, [06:19.080 --> 06:24.200] which provides simulator code for various OSes. [06:24.200 --> 06:28.600] There are also some interesting samples in that repository. [06:28.600 --> 06:31.560] There is an fTPM trusted application for [06:31.560 --> 06:37.920] Arm TrustZone, and there are also some samples for the STM32 [06:37.920 --> 06:45.760] Nucleo, which is what we will be most interested in. [06:45.760 --> 06:50.640] Then we also have a simulator from IBM. [06:50.640 --> 06:52.600] Maybe there are some more; [06:52.600 --> 06:57.360] I think these two are the most popular ones. [06:58.360 --> 07:02.520] So, we took a look at the Nucleo sample, [07:02.520 --> 07:09.040] which I mentioned just before, from the Microsoft TPM reference stack.
[07:09.040 --> 07:13.160] It was created like four years ago, [07:13.160 --> 07:18.040] contributed, and it's not really maintained or tested. [07:18.040 --> 07:24.480] Basically, there is no single person who knows if it ever worked. [07:25.960 --> 07:28.560] It looks like it was developed in [07:28.560 --> 07:31.960] Atollic TrueSTUDIO, which no longer exists as a product. [07:31.960 --> 07:40.680] It has since been replaced by another STM32 integrated development environment. [07:40.680 --> 07:47.560] We also asked about the general status of the sample code. [07:47.560 --> 07:50.800] The answer was that these were [07:50.800 --> 07:56.800] contributed at some point in time and may not work currently. [07:58.000 --> 08:02.840] So, we took a closer look at that piece of code. [08:02.840 --> 08:07.360] We converted the project to the newer environment. [08:07.360 --> 08:12.640] We were able to build it, at least to a certain point, and at least run it. [08:12.640 --> 08:15.360] On the Nucleo board, [08:15.360 --> 08:18.080] we noticed there is some VCOM application. [08:18.080 --> 08:21.320] It was presumably used for testing the solution. [08:21.320 --> 08:24.560] The application is only for Windows. [08:24.560 --> 08:28.040] We haven't even tried to bother with that. [08:28.040 --> 08:30.440] In fact, we know [08:30.440 --> 08:36.680] what a pain it can be to build some old Visual Studio stuff. [08:36.680 --> 08:39.680] But we looked at the code. [08:39.680 --> 08:42.040] We saw that the STM32 code can [08:42.040 --> 08:45.920] accept TPM commands via USB CDC. [08:45.920 --> 08:47.200] It can process them. [08:47.200 --> 08:54.920] It can return a response; some simple command-response handling was there. [08:55.280 --> 09:00.400] But what was important is that a custom protocol was involved here. [09:00.400 --> 09:03.640] So, there was no interoperability with existing tools, [09:03.640 --> 09:08.760] such as the tpm2-tools stack, or with existing TPM interfaces, [09:08.760 --> 09:10.800] because it was a very custom one, [09:10.800 --> 09:16.280] and interoperability is a major goal of this project as well. [09:17.280 --> 09:24.320] Also, the resources on the MCU were quite low. [09:24.320 --> 09:30.720] So, at this point, we had a good idea of what we would have to face. [09:30.720 --> 09:38.560] We also want to show how different the flow [09:38.560 --> 09:44.120] is for the TPM2 simulator versus actual TPM2 hardware. [09:44.120 --> 09:46.920] The block on the top is, [09:46.920 --> 09:52.520] we can say, what the Microsoft or IBM reference implementation provides. [09:52.520 --> 09:56.320] It just provides TPM2 command processing. [09:56.320 --> 09:58.320] For the simulator, that's fine. [09:58.320 --> 10:03.520] It uses sockets and communicates with [10:03.520 --> 10:07.320] the TPM2 software stack in the OS directly. [10:07.320 --> 10:09.240] But in the case of hardware, [10:09.240 --> 10:11.720] you need to plug the actual module into the motherboard. [10:11.720 --> 10:13.520] So, we need all of the plumbing. [10:13.520 --> 10:19.080] There are also dedicated TCG specifications for that. [10:19.080 --> 10:24.520] Those are the orange blocks that [10:24.520 --> 10:28.440] need to be implemented in the firmware.
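To make the "TPM2 command processing" block a bit more concrete: a TPM2 command is just a small binary blob with a tag, a size, a command code and parameters, and the command processor turns it into a response blob. Below is a minimal sketch in C, assuming the ExecuteCommand() entry point which, as far as I recall, the Microsoft reference implementation exposes; the TPM manufacturing and TPM2_Startup steps that a real run needs first are elided, so treat it as an illustration rather than a ready-to-run harness.

    #include <stdint.h>
    #include <stdio.h>

    /* Provided by the reference implementation's TPM library (assumption). */
    void ExecuteCommand(uint32_t requestSize, unsigned char *request,
                        uint32_t *responseSize, unsigned char **response);

    int main(void)
    {
        /* TPM2_GetRandom asking for 16 bytes:
         *   tag            = TPM_ST_NO_SESSIONS (0x8001)
         *   commandSize    = 12
         *   commandCode    = TPM_CC_GetRandom   (0x0000017B)
         *   bytesRequested = 16                 (0x0010)       */
        unsigned char cmd[] = {
            0x80, 0x01,              /* tag            */
            0x00, 0x00, 0x00, 0x0C,  /* commandSize    */
            0x00, 0x00, 0x01, 0x7B,  /* commandCode    */
            0x00, 0x10               /* bytesRequested */
        };
        unsigned char *rsp = NULL;
        uint32_t rspSize = 0;

        ExecuteCommand(sizeof(cmd), cmd, &rspSize, &rsp);

        /* Response header: tag (2 bytes), responseSize (4), responseCode (4),
         * followed by the random bytes on success. */
        printf("response size: %u bytes\n", (unsigned)rspSize);
        return 0;
    }

Everything else in the diagram, the orange part, is about moving exactly such blobs from the host to this entry point and the response back.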
[10:28.440 --> 10:32.400] So, we need all of the plumbing to pass the commands from [10:32.400 --> 10:39.200] the microcontroller through the motherboard connector to the OS drivers, [10:39.200 --> 10:45.800] and then at last to the TPM2 software stack in the operating system. [10:47.520 --> 10:52.560] What are the challenges, current and expected? [10:52.560 --> 10:57.560] One of the first was the global chip shortage, [10:57.560 --> 10:59.040] which happened in the meantime. [10:59.040 --> 11:03.320] Even if you wanted to use the STM32L4, [11:03.320 --> 11:07.560] it was no longer available to source. [11:07.560 --> 11:13.120] Other microcontrollers were also quite difficult to get. [11:13.120 --> 11:20.000] The project in the samples was developed using [11:20.000 --> 11:25.320] the hardware abstraction layer from ST. [11:25.560 --> 11:28.520] If you want to switch to other hardware, [11:28.520 --> 11:32.360] you would need to rewrite that, at least to some extent, [11:32.360 --> 11:34.040] against another hardware abstraction layer, [11:34.040 --> 11:40.160] and then those chips become unavailable, and that would go on forever. [11:40.160 --> 11:44.960] Also, we did not have the requirements [11:44.960 --> 11:47.600] clarified at that point. [11:47.600 --> 11:51.680] We believe that what must be done is to have [11:51.680 --> 11:55.120] some OS handling the TPM stack. [11:55.120 --> 11:58.960] So, we are looking into Zephyr RTOS to have [11:58.960 --> 12:03.560] better hardware coverage and [12:03.560 --> 12:08.000] just implement the TPM stack on top of Zephyr, [12:08.000 --> 12:11.840] so we can switch between boards more easily (a rough sketch of that idea follows at the end of this part). [12:11.840 --> 12:19.400] Other challenges are different types of timing problems. [12:19.400 --> 12:23.160] The TCG specification [12:23.160 --> 12:28.800] requires certain timings at the hardware level. [12:28.800 --> 12:36.080] For example, some registers must respond within some milliseconds, and so on. [12:36.080 --> 12:40.600] It might be difficult to achieve such responsiveness [12:40.600 --> 12:43.640] on just a microcontroller. [12:43.640 --> 12:47.960] Maybe we will need to fall back to an FPGA in some cases. [12:47.960 --> 12:52.200] We definitely need to fall back to an FPGA for the LPC interface, [12:52.200 --> 12:56.240] which is non-existent on general-purpose microcontrollers. [12:56.240 --> 13:02.040] So, we are developing a hardware block in the FPGA for that. [13:03.200 --> 13:08.760] There are also some improvements which can be made to the environment. [13:08.760 --> 13:13.520] As I said before, full compliance might be quite difficult or [13:13.520 --> 13:16.680] might even be impossible to achieve. [13:16.680 --> 13:21.520] For example, in the case of the strict initialization time and [13:21.520 --> 13:25.800] power requirements, because the power budget is quite low. [13:25.800 --> 13:28.760] I don't remember the exact value right now, [13:28.760 --> 13:35.800] but a typical microcontroller plus FPGA, and also the boot time, might be challenging here. [13:35.800 --> 13:37.760] It's a lot of moving parts. [13:37.760 --> 13:40.280] We are still looking into that. [13:42.400 --> 13:45.240] So, the roadmap. [13:46.120 --> 13:52.080] The first step on the project plan was the public documentation site. [13:52.080 --> 13:53.880] That one is already live, [13:53.880 --> 13:56.800] although there is not much content there yet. [13:56.800 --> 14:00.160] We are just starting, we can say.
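Here is that rough sketch of the Zephyr idea: the TPM stack itself stays portable and only the interface driver changes per board. The names used here (twpm_cmd_q, tpm_execute_command, twpm_send_response, TWPM_MAX_CMD_SIZE) are placeholders for illustration, not existing project code, and the glue around the command processor is assumed.

    #include <zephyr/kernel.h>
    #include <stdint.h>

    #define TWPM_MAX_CMD_SIZE 1024          /* hypothetical buffer size */

    struct twpm_cmd {
        uint32_t size;
        uint8_t buf[TWPM_MAX_CMD_SIZE];
    };

    /* Filled by the host-interface driver (SPI, I2C, or LPC via the FPGA),
     * drained by the TPM stack loop below. */
    K_MSGQ_DEFINE(twpm_cmd_q, sizeof(struct twpm_cmd), 2, 4);

    /* Hypothetical glue around the TPM2 command processor. */
    extern void tpm_execute_command(uint32_t req_size, uint8_t *req,
                                    uint32_t *rsp_size, uint8_t **rsp);
    /* Hypothetical hook handing the response back to the interface driver. */
    extern void twpm_send_response(const uint8_t *rsp, uint32_t rsp_size);

    int main(void)
    {
        struct twpm_cmd cmd;

        while (1) {
            /* Block until the host interface delivers a complete command. */
            k_msgq_get(&twpm_cmd_q, &cmd, K_FOREVER);

            uint32_t rsp_size = 0;
            uint8_t *rsp = NULL;

            tpm_execute_command(cmd.size, cmd.buf, &rsp_size, &rsp);
            twpm_send_response(rsp, rsp_size);
        }
        return 0;
    }

The point is that switching boards would only mean swapping the driver that fills the queue, not touching the command processing itself.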
[14:00.160 --> 14:02.960] The documentation site is already there, though; [14:02.960 --> 14:07.720] we already have some structure, and the roadmap is also public. [14:07.720 --> 14:13.360] Then the hardware: as I mentioned before, [14:13.360 --> 14:18.240] we started with the Nucleo, as it was supported by the Microsoft samples. [14:18.240 --> 14:20.760] We're still exploring that, [14:20.760 --> 14:26.640] but that's probably not enough hardware to accomplish the goals of this project. [14:26.640 --> 14:34.000] Another one we are exploring right now is based on the EOS S3 SoC. [14:34.000 --> 14:39.320] It combines a Cortex-M4 and an FPGA in a single chip. [14:39.320 --> 14:44.280] It might be interesting to have the Cortex-M4 for the TPM2 stack [14:44.280 --> 14:51.480] and the FPGA for fast responses and LPC communication. [14:54.200 --> 15:02.080] The third point, which we are working on right now, is the LPC implementation in the [15:02.080 --> 15:12.200] FPGA; that is required to target the many mainboards currently on the market [15:12.200 --> 15:17.160] which have only the LPC interface. [15:17.880 --> 15:24.680] Then we also need to implement all of the plumbing I showed before on the slides. [15:24.680 --> 15:29.040] So, the TPM registers as defined in the specification. [15:29.040 --> 15:42.800] There is also a specialized FIFO protocol to pass the commands between the TPM and the host (a rough sketch of that register layout follows at the end of this part). [15:44.920 --> 15:51.480] That's a screenshot of some testing of the LPC module. [15:51.480 --> 16:03.280] We are currently using Icarus Verilog, I believe, for creating the data and GTKWave for drawing it out. [16:05.920 --> 16:08.680] Then some testing is also necessary. [16:08.680 --> 16:12.160] We are already preparing some test cases. [16:12.160 --> 16:16.360] We can start preparing them based on regular TPM modules, [16:16.360 --> 16:22.520] and we expect that our module will have to meet the same criteria. [16:22.520 --> 16:27.440] Of course, we'll add some specific test cases later on as well. [16:28.800 --> 16:33.680] The SPI protocol is also tricky, even though it should be simpler in theory, [16:33.680 --> 16:38.400] because typical MCUs do have SPI, [16:38.400 --> 16:46.560] but they are typically tested, let's say, in master mode, not in slave mode. [16:46.560 --> 16:51.480] That's the most common case, and also the frequency is limited. [16:51.480 --> 16:59.240] The SPI frequency on motherboards can be 40 MHz or more, [16:59.240 --> 17:07.160] but a typical SPI peripheral, for example on the STM32, is advertised up to 24 MHz, [17:07.160 --> 17:11.360] and in reality we have not been able to achieve even that so far, [17:11.360 --> 17:15.040] so that might also be difficult, [17:15.040 --> 17:21.920] and maybe the FPGA can also help us achieve the timing requirements we need. [17:24.920 --> 17:32.080] The eighth point is exactly that: exploring the use of simpler hardware platforms. [17:32.080 --> 17:40.960] We were hoping that maybe we can achieve at least an SPI connection [17:40.960 --> 17:42.640] on a board without an FPGA. [17:42.640 --> 17:50.600] That would be simpler and would give more users the ability to test it out, [17:50.600 --> 17:54.680] even on a Raspberry Pi or with a motherboard, [17:54.680 --> 18:04.320] so they can push that forward and have a TPM module even on regular development boards. [18:10.320 --> 18:16.240] I already mentioned some TPM stack improvements, [18:16.240 --> 18:21.680] sorry, the flash driver improvements for the TPM stack.
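Here is that rough sketch of the register layout: the locality-relative FIFO (TIS-style) registers from the TCG PC Client Platform TPM Profile, written down from memory as a C header, so the offsets and bits should be double-checked against the specification before relying on them.

    #include <stdint.h>

    #define TPM_TIS_BASE        0xFED40000u  /* MMIO base on PC platforms            */
    #define TPM_LOCALITY_STRIDE 0x1000u      /* locality N lives at BASE + N*stride  */

    /* Offsets within a locality. */
    #define TPM_ACCESS          0x0000u      /* request/relinquish locality  */
    #define TPM_INT_ENABLE      0x0008u
    #define TPM_INT_STATUS      0x0010u
    #define TPM_INTF_CAPABILITY 0x0014u
    #define TPM_STS             0x0018u      /* status + burstCount          */
    #define TPM_DATA_FIFO       0x0024u      /* command/response bytes       */
    #define TPM_INTERFACE_ID    0x0030u
    #define TPM_DID_VID         0x0F00u      /* vendor/device ID             */
    #define TPM_RID             0x0F04u

    /* A few TPM_STS bits that drive the command/response FIFO flow. */
    #define TPM_STS_COMMAND_READY (1u << 6)
    #define TPM_STS_TPM_GO        (1u << 5)
    #define TPM_STS_DATA_AVAIL    (1u << 4)
    #define TPM_STS_EXPECT        (1u << 3)

    static inline uint32_t tpm_reg_addr(uint8_t locality, uint32_t reg)
    {
        return TPM_TIS_BASE + (uint32_t)locality * TPM_LOCALITY_STRIDE + reg;
    }

The host drives a command through these registers roughly like this: request a locality via TPM_ACCESS, set commandReady in TPM_STS, push the command bytes through TPM_DATA_FIFO, set tpmGo, then poll for dataAvail and read the response back; the firmware has to implement the other side of exactly that dance.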
[18:21.680 --> 18:28.520] Coming back to the flash driver: more test suites, for example; [18:28.520 --> 18:32.680] there are some problems we have already seen with what was there; [18:32.680 --> 18:34.920] there is no redundancy, for one, [18:34.920 --> 18:41.880] and maybe also encryption, because the SPI flash could be read out from the board. [18:41.880 --> 18:45.320] We may leave the encryption for later, [18:45.320 --> 18:48.200] but at least we would like to have the redundancy, [18:48.200 --> 18:53.560] because otherwise it would be quite easy to brick the TPM board, [18:53.560 --> 18:56.200] which is not what we want. [18:57.200 --> 19:02.920] We also need some unique identification and some source of entropy: [19:02.920 --> 19:08.920] we need a unique ID for each device, [19:08.920 --> 19:13.480] and we need entropy for cryptographic operations. [19:13.480 --> 19:15.720] This is left for later; [19:15.720 --> 19:19.280] we will see what kind of hardware we end up with. [19:19.280 --> 19:22.880] We might use some hardware-specific features, [19:22.880 --> 19:29.520] and if not, maybe we can finally use the FPGA. [19:31.720 --> 19:36.240] Manufacturing: as you might know, [19:36.240 --> 19:39.880] each TPM must contain a unique key, [19:39.880 --> 19:44.360] the endorsement key, and also a certificate related to it. [19:44.360 --> 19:50.840] So, we might provide a way for the user to generate [19:50.840 --> 19:55.440] the endorsement key for their TPM, [19:55.440 --> 19:58.520] just providing some scripts and procedures for [19:58.520 --> 20:03.480] how they can provision the TPM device. [20:03.480 --> 20:14.520] Then we also imagine that it would be nice to have a nice build system, [20:14.520 --> 20:18.840] so you can configure what kind of interface you want to target. [20:18.840 --> 20:22.040] I have a motherboard with SPI, [20:22.040 --> 20:24.040] so I just flip the SPI switch, [20:24.040 --> 20:25.360] rebuild the project, [20:25.360 --> 20:29.120] I check the hash algorithm I'm interested in, [20:29.120 --> 20:36.400] I choose what kind of hardware entropy source I'm interested in, [20:36.400 --> 20:40.440] and the goal here is to make the transition between boards easier. [20:40.440 --> 20:43.280] I can unplug my TPM, [20:43.280 --> 20:44.880] flash another firmware, [20:44.880 --> 20:46.560] and use it on my new motherboard, [20:46.560 --> 20:51.880] which now supports SPI where the previous one used LPC (a rough sketch of what such a configuration could look like follows at the end of this part). [20:54.480 --> 20:57.440] What's currently in progress? [20:57.440 --> 21:01.960] Yes, the LPC module I showed a screenshot of. [21:03.440 --> 21:09.520] Next, the TPM registers will be implemented. [21:09.520 --> 21:14.960] We want to do as much as we can for the FPGA in simulation [21:14.960 --> 21:18.200] before we stick to certain hardware, [21:18.200 --> 21:25.400] and we will also determine how big an FPGA would be required here. [21:25.400 --> 21:31.720] In parallel, we are also exploring the path of not using an FPGA and using [21:31.720 --> 21:37.360] SPI on the STM32 or another microcontroller. [21:37.360 --> 21:41.600] Even if we could only reduce the frequency and [21:41.600 --> 21:43.720] use it with a Raspberry Pi as a proof of [21:43.720 --> 21:47.800] concept, that would already be quite an achievement, I believe. [21:47.800 --> 21:53.520] Even if it wouldn't work with regular mainboards, [21:53.520 --> 21:55.800] maybe after some time it could be improved, [21:55.800 --> 21:59.880] but it would be a nice step forward.
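Here is that purely hypothetical configuration sketch; the option names are made up for illustration and are not the project's actual build options.

    #ifndef TWPM_CONFIG_H
    #define TWPM_CONFIG_H

    /* Host interface: pick exactly one, matching the target mainboard. */
    #define TWPM_IFACE_SPI   1
    #define TWPM_IFACE_LPC   0
    #define TWPM_IFACE_I2C   0

    #if (TWPM_IFACE_SPI + TWPM_IFACE_LPC + TWPM_IFACE_I2C) != 1
    #error "Select exactly one host interface"
    #endif

    /* Hash algorithms compiled into the TPM stack. */
    #define TWPM_ALG_SHA256  1
    #define TWPM_ALG_SHA384  0

    /* Entropy source: the MCU's hardware RNG or an FPGA-based source. */
    #define TWPM_ENTROPY_MCU_TRNG 1
    #define TWPM_ENTROPY_FPGA     0

    #endif /* TWPM_CONFIG_H */

Whether this ends up as a plain header, Kconfig options, or something else is an open question; the point is that switching the interface, hash algorithms, or entropy source should be a rebuild, not a rewrite.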
[22:00.400 --> 22:02.920] Docs: we are working on the docs; [22:02.920 --> 22:07.480] they will be progressively filled with [22:07.480 --> 22:10.720] the results of our current work. [22:11.480 --> 22:17.960] Yes, if you are interested in the progress and maybe want to contribute, [22:17.960 --> 22:22.480] or maybe want to discuss the project and related stuff, [22:22.480 --> 22:28.240] you can join our communication channels. [22:28.240 --> 22:35.600] On the other hand, if you want to join the team and work daily [22:35.600 --> 22:38.280] on similar open-source projects, [22:38.280 --> 22:41.040] you can approach me directly or use [22:41.040 --> 22:44.480] the contact information from the next slide. [22:44.480 --> 22:53.640] So that would be it, and we might have some questions. Yes? [22:53.640 --> 22:56.880] You mentioned the timing problems. [22:56.880 --> 23:02.840] Have you looked into how the system behaves if you delay, [23:02.840 --> 23:10.520] or will things not work if you are too late answering some things? [23:10.520 --> 23:19.120] I'm not sure if I followed exactly; you're asking about some timing issues we had? [23:19.120 --> 23:22.800] There are some timings defined by the TCG specification. [23:22.800 --> 23:26.720] So for example, they say a given command, [23:26.720 --> 23:31.640] a TPM2 command, must respond within a given time. [23:31.640 --> 23:33.320] So that's one. [23:33.320 --> 23:35.920] If you don't respond in that time, [23:35.920 --> 23:39.240] what happens, and did you look into that? [23:39.240 --> 23:42.480] We are not even at that stage currently. [23:42.480 --> 23:44.600] We are at the lower layers, [23:44.600 --> 23:50.080] so we are trying to get the plumbing done so we can even send one command. [23:50.080 --> 23:52.200] But on the lower level, yes, [23:52.200 --> 23:55.040] there are problems with timing too. [23:55.040 --> 23:57.800] At least on the SPI part, for example, [23:57.800 --> 24:01.320] we had a problem that the STM32 [24:01.320 --> 24:05.880] we've been using doesn't give us enough flexibility [24:05.880 --> 24:09.640] to follow the spec in some cases. [24:09.640 --> 24:12.560] There is an SPI IP block [24:12.560 --> 24:16.520] which can just send and receive data over SPI. [24:16.520 --> 24:21.320] But if you wanted to do [24:21.320 --> 24:26.800] something more at the start of the transmission, [24:26.800 --> 24:28.440] like pull a line down or up, [24:28.440 --> 24:31.240] that might be trickier to do as well. [24:31.240 --> 24:38.640] So maybe the FPGA would also help here. Yes? [24:38.640 --> 24:41.680] I have a couple of questions about the FPGA. [24:41.680 --> 24:44.160] Have you selected an FPGA? [24:44.560 --> 24:48.560] So the question was whether we have selected an FPGA: no. [24:48.560 --> 24:53.840] As I said, we are just exploring the hardware on the right. [24:53.840 --> 24:56.360] So the EOS S3 SoC, [24:56.360 --> 25:02.400] that one has a Cortex-M4 and an FPGA integrated into one SoC, [25:02.400 --> 25:07.720] and that's the one we are analyzing right now. [25:07.720 --> 25:11.080] Maybe it would be a fit for the project. [25:11.480 --> 25:14.800] If you wanted to use an FPGA only, [25:14.800 --> 25:18.960] what is also important here is that [25:18.960 --> 25:23.960] we need a CPU to process the TPM stack commands.
[25:23.960 --> 25:33.160] So either we can use a hard CPU like that, combined with an FPGA, [25:33.160 --> 25:39.160] or we need to use a full-blown FPGA with a soft core and so on, [25:39.160 --> 25:41.200] and that's another complexity. [25:41.200 --> 25:44.600] We are also not that experienced with that, [25:44.600 --> 25:54.400] because we need to run the TPM stack on the CPU. [25:54.400 --> 25:57.600] When you select an FPGA, do you need to use one with internal flash, [25:57.600 --> 26:02.360] so that it can't be tampered with? [26:02.360 --> 26:04.720] Yes. So the question is whether [26:04.720 --> 26:09.440] we need to use internal flash so that the flash cannot be tampered with; [26:09.440 --> 26:11.240] yes, that would of course be preferable, [26:11.240 --> 26:16.680] but at this point we are flexible enough to just have anything working, [26:16.680 --> 26:20.000] to prove that it's even feasible, with some limitations, [26:20.000 --> 26:23.320] even if the flash is separate and can be read out. [26:23.320 --> 26:25.240] We might not care right now. [26:25.240 --> 26:32.320] We focus on the most important parts, to prove whether it's feasible or not, [26:32.320 --> 26:43.000] and maybe the other stuff can be handled later, once we prove whether we can do it. [26:43.000 --> 27:03.440] The problem is that you're essentially trying to receive data over SPI; [27:03.440 --> 27:06.080] your device tries to be an SPI slave, [27:06.080 --> 27:05.160] but the SPI peripheral is not flexible enough, is that right? [27:05.160 --> 27:11.840] So yes, it's right that we want to be an SPI slave, because the mainboard is the master, yeah? [27:11.840 --> 27:20.440] So, are you aware of the chip line-up from Cypress, the PSoC? [27:20.440 --> 27:28.440] It's like a Cortex-M plus a little bit of, not really an FPGA, but programmable glue logic, [27:28.440 --> 27:31.480] which you can configure to some degree. [27:31.480 --> 27:32.600] Probably. [27:32.600 --> 27:48.720] Okay, so there is a suggestion from the audience about some Cypress series, the PSoC, yeah? [27:48.720 --> 27:53.480] I don't think we have considered that so far, so thank you for that. [27:53.480 --> 28:05.880] Among the hardware with a hard CPU plus FPGA, we have found so far the EOS S3, which we're looking into; [28:05.880 --> 28:09.840] it has some Zephyr support, which is a plus, of course. [28:09.840 --> 28:17.360] There is, I think, also one board from Gowin, the Tang Nano, I think it was called. [28:17.360 --> 28:22.600] Yeah, so thanks, we'll check that out, the Cypress PSoC. [28:22.600 --> 28:23.920] Yeah? [28:23.920 --> 28:30.520] Did you consider trying to get an emulator-based development environment working, [28:30.520 --> 28:35.920] mainly using something like QEMU, so that you are sort of decoupling your hardware debug [28:35.920 --> 28:45.600] from your software debug? I know that in QEMU you can implement virtual LPC and SPI devices, [28:45.600 --> 28:55.720] and it also has a Cortex-M emulator as well, so perhaps that might be a possibility. [28:55.720 --> 29:02.160] Okay, so the question is whether we have considered some emulator-based development environments.
[29:02.160 --> 29:11.800] As I showed, at least for the FPGA part, we start with simulation right now, [29:11.800 --> 29:17.600] and we develop the LPC slave and host, and we can test that out in simulation. [29:17.600 --> 29:23.480] Simulating the whole Cortex-M is something we have not done so far; [29:23.480 --> 29:25.880] maybe that's also something to consider. [29:33.080 --> 29:35.880] Any more questions? We can have one more, I believe. [29:38.360 --> 29:40.520] Okay, one sec, oh, okay. [29:40.520 --> 29:45.520] Isn't it a big risk to, let's say, move away from the hardware part first, [29:45.520 --> 29:52.520] because a big part of using a TPM is it actually being a trusted device? [29:52.520 --> 29:57.520] So if it's been somewhere and you come back, then you know it's what you want, [29:57.520 --> 30:04.520] and by now putting a lot of focus on the software and kind of ignoring the hardware part, [30:04.520 --> 30:07.520] what would happen if this software worked really nicely? [30:07.520 --> 30:10.520] Let's say an STM32 is in the field and people start testing that. [30:10.520 --> 30:17.520] Wouldn't that weaken everything a lot more than just trusting the CC-evaluated TPMs? [30:17.520 --> 30:25.520] So the question is, I believe: if we have, let's say, firmware running on the STM32, [30:25.520 --> 30:32.520] how can we make sure that the software was not tampered with, for example, yeah? [30:32.520 --> 30:35.520] Yeah, tampered with or maybe extracted or whatever. [30:35.520 --> 30:40.520] Okay, so we know there are existing mechanisms; [30:40.520 --> 30:47.520] for example, different vendors' microcontrollers already have different types of secure boot and so on, [30:47.520 --> 30:50.520] so we can verify whether it's our firmware, whether it's signed. [30:50.520 --> 30:56.520] The user can sign their own firmware, and otherwise the firmware wouldn't boot, for example. [30:56.520 --> 31:00.520] I believe once we have the functionality, there are already existing mechanisms [31:00.520 --> 31:07.520] that can prove that the firmware was not tampered with. [31:07.520 --> 31:11.520] That's my understanding, yeah. [31:11.520 --> 31:14.520] Okay, so we are out of time, thank you. [31:14.520 --> 31:31.520] Thank you.