So, welcome to my talk about Trixnity, one Matrix SDK for almost everything. I added "written in Kotlin" to the title a few days ago, so maybe there are some Kotlin fanboys here.

Let me first introduce myself. I am Benedict, and my friends often see me as a killjoy when it comes to data protection and data security, but I convinced them to come to Matrix anyhow. So I have 20 users, family and friends, on my own Matrix homeserver. My first contact with Matrix was four to five years ago, and I have gained a lot of experience with it since then. Out of that I founded connect2x, a company that is developing Timmy, a TI-Messenger for the medical healthcare sector in Germany.

Now let's start with Trixnity. Trixnity is a Matrix SDK for developing clients, bots, app services and servers. It is multiplatform capable — everyone thinks Kotlin is JVM only, but it is not: you can compile it to JS and you can compile it to native, which is important for iOS. So we cover all these targets with Trixnity. It is also developed test-driven, so we have a high test coverage, and it is licensed under the Apache 2 license.

You may wonder: why another SDK? Back in January 2020 there were only a few multiplatform SDKs to choose from. If I remember correctly, there was the Matrix Rust SDK, but it was at a very early stage, and there was the Matrix Dart SDK, but that very likely forces you to use Flutter for the UI. So there was no real free choice of UI framework, especially if you want to use native UI technologies, for example Compose on Android or Swift on iOS. Additionally, most SDKs didn't have very strict typing of events and the REST endpoints, and the extensibility was a bit limited. And even if the next point is not that important: SDKs were usually bound to one purpose — you had an SDK for a client, one for a server, one for a bot and so on. So you had to learn a new SDK for each kind of Matrix application.

Why Kotlin? I chose Kotlin because it is a statically typed language which compiles, as I mentioned, to JVM, JS and native. And you don't need bindings like with Rust: when you target JS you get JS, and you don't need to generate bindings via WASM or something. And on native you can just call it from your Swift or Objective-C code in Xcode and have access to Trixnity. Moreover, besides shared common code, it is possible to write platform-specific code: you just define a common interface, and depending on the platform, the actual implementation can be different.
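As a small, generic illustration (not Trixnity code): Kotlin's expect/actual mechanism lets the common source set declare the interface while each platform provides its own implementation — here with a hash function as a stand-in for the platform-specific crypto discussed next:

```kotlin
// commonMain: shared code only sees this declaration.
expect fun sha256(data: ByteArray): ByteArray

// jvmMain: the actual implementation delegates to the JDK.
actual fun sha256(data: ByteArray): ByteArray =
    java.security.MessageDigest.getInstance("SHA-256").digest(data)

// jsMain and the native source sets would provide their own `actual`
// implementations, e.g. via SubtleCrypto in the browser or CommonCrypto on iOS.
```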
This way you have access to platform-specific APIs and libraries, which can be very helpful when implementing encryption such as AES for attachments: on each platform you can use the native encryption primitives the platform already gives you. And last but not least, you can define your own domain-specific language (DSL) — you will see later what I did with that.

So let's start with the core of Trixnity. The core contains all basic data structures of the spec and their serialization logic. This includes events, identifiers like user IDs and event IDs, and other things like cross-signing keys and device keys.

One goal of developing Trixnity was the ability to add custom events which are strictly typed. This is achieved by mapping event types to a serializer. In this example, we add a new event type for a cat event with the Kotlin type CatEventContent. So you have access to all fields of this CatEventContent and don't have to mess around with raw JSON.

The next layer of Trixnity is the API layer. Each API has its model, which defines all endpoints of that API. The actual client and server implementations just use these endpoints, so there is no need to define things twice — they use the same Kotlin classes. So a Kotlin class represents an endpoint on the Matrix side.

The best way to show this is with an example: the leave-room endpoint. You just implement MatrixEndpoint and give it the types — in this case Unit as the response, so we don't get JSON back, just an HTTP OK or an empty JSON object — and you define the request, the URL, the HTTP method and all that. On the client side you can then use a Matrix client to call these endpoints: you create a LeaveRoom object as the request and you get the response. That's all on the client side. And the same thing works on the server side: you define which endpoint and request type you expect, and in the context object you have access to the request and can answer with a response.

To make it a bit easier for developers, there is some abstraction on top of that, so you can also just call leaveRoom. You don't have to know which endpoints exist — you just type a dot in your IDE and see: OK, there is leaveRoom, I can leave a room. And the same exists on the server side.
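As an illustration of the custom event mapping described above: the content class and its fields below are made up for this example, and the actual registration API is only hinted at in a comment.

```kotlin
import kotlinx.serialization.SerialName
import kotlinx.serialization.Serializable

// A strictly typed custom event content. kotlinx.serialization generates
// the serializer, so no manual JSON handling is needed.
@Serializable
data class CatEventContent(
    @SerialName("body") val body: String,
    @SerialName("purr_volume") val purrVolume: Int? = null,
)

// The custom event type string is then mapped to this class/serializer when
// the SDK is set up, so whenever an event of that type arrives you get a
// CatEventContent instead of raw JSON.
```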
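And a rough sketch of the endpoint idea from above. The MatrixEndpoint interface and the request call here are simplified stand-ins to show the shape of the approach, not Trixnity's exact API:

```kotlin
import kotlinx.serialization.Serializable

// Simplified stand-in for the SDK's endpoint abstraction.
interface MatrixEndpoint<REQUEST, RESPONSE> {
    val url: String
    val method: String
}

// One Kotlin class per Matrix endpoint. Leaving a room has no request body,
// and the response is Unit: just an HTTP OK with an empty JSON object.
@Serializable
data class LeaveRoom(val roomId: String) : MatrixEndpoint<Unit, Unit> {
    override val url get() = "/_matrix/client/v3/rooms/$roomId/leave"
    override val method get() = "POST"
}

// Client side (sketch): the same class is used to issue the request, so
// nothing is defined twice. A hypothetical api.request(...) resolves URL,
// method and (de)serialization from the endpoint object:
//
//   api.request(LeaveRoom(roomId = "!room:example.org"))
```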
On the server side, correspondingly, you just need to implement an interface, and you see all the endpoints you have to implement to be a fully featured Matrix server API.

Independent of the API layer, there are trixnity-olm and trixnity-crypto. trixnity-olm is just a wrapper around libolm. As mentioned, platform-independent code doesn't need to worry about the platform-specific implementations: when you use trixnity-olm, you don't need to know how libolm is accessed. On the JVM I use JNA, on JS this is done via WASM, and on native it is plain C interop from Kotlin. libolm is also packaged into trixnity-olm, so as a developer you don't need to ship the built C library yourself — it is loaded automatically. You don't need to initialize your encryption like in other libraries; it is just loaded.

My plan is to migrate this to vodozemac, but currently UniFFI — we heard about it in another talk — does not support WASM targets. So at the moment vodozemac could only be used from Kotlin on JVM and native, but I also want to use it from JavaScript. So this project is currently on ice.

trixnity-crypto currently implements the key management and allows you to decrypt and encrypt events. In the future it will do more, so you can reuse the complete crypto stack, for example in app services.

The most abstract layers are trixnity-client and trixnity-appservice. While trixnity-appservice is still very basic and does not have a persistence layer, trixnity-client lets you choose which database and which media store implementation you want to use.

On top of that, there is something that isn't released yet — we are not sure how to release it, because we have to make money with our company — and that is Trixnity Messenger. It is just the view-model representation of a messenger, so you only have to implement a thin UI layer: when the user clicks a button, the UI forwards that to the view model, and the view model says, OK, send a message, or go to this room, or whatever. With this approach we implemented an iOS client in a few weeks with one person.

So trixnity-client allows you to implement a fully featured Matrix client or bot. If you were at the Matrix Rust SDK talk: you can just take what they showed, and instead of Rust you write Kotlin. Everything the Matrix Rust SDK does, Trixnity does too.
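To give an idea of what that choice looks like in practice, here is a hypothetical setup snippet. The factory names (createRealmRepositoriesModule, createFileSystemMediaStore) and the exact login parameters are assumptions for illustration and may differ from the real trixnity-client API:

```kotlin
import io.ktor.http.Url

// Hypothetical client setup: the database and media store are passed in,
// so swapping Realm for Exposed or IndexedDB is a one-line change.
suspend fun startClient(): MatrixClient {
    val client = MatrixClient.login(
        baseUrl = Url("https://matrix.example.org"),
        identifier = IdentifierType.User("alice"),
        password = "secret",
        repositoriesModule = createRealmRepositoriesModule(),   // exchangeable database layer
        mediaStore = createFileSystemMediaStore("media"),       // exchangeable media store
    ).getOrThrow()

    client.startSync() // start processing syncs; all data is exposed reactively
    return client
}
```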
Some features like Sliding Sync aren't there, because we want to follow the stable Matrix spec, so we don't implement any MSCs. But we have all the end-to-end encryption features, exchangeable data stores and media stores, a reactive cache on top of that, notifications, thumbnail generation — all the stuff you need to implement a client.

There are already some media store wrappers that we implemented: on all targets except browsers we just use the file system, and in browsers we use IndexedDB.

Next I want to talk about how I accidentally created a cache. On the left side you see the relation between the UI, Trixnity and the storage layer. Because reactive UIs are really common, I wanted Trixnity to give the UI access to the data in a reactive way: if anything changes, the UI should know about it immediately. But the question is how. On the one hand, there are a few databases which support listeners to react to changes in the database; on the other hand, that would limit the range of supported databases, because finding a common interface for listeners would be hard.

So I started implementing an intermediate layer based on Kotlin flows. A flow in Kotlin is a reactive data structure: you have a producer on one side and consumers on the other side, and if the producer changes anything, the consumers know about it immediately.

And what does the intermediate layer do? It talks to a very thin database layer which only knows how to save, read and delete data. If someone wants data from this layer, it just reads it from the database; if someone changes something in this layer, it just writes it to the database. The values are kept in this layer as long as anyone is subscribed to them. This means that if someone else subscribes to a value, they immediately get the current value — no additional database call is needed, because the value is held in the intermediate layer. This goes so far that even when there are no subscribers anymore, I keep the value a bit longer in this layer. So if someone asks for a value, say, ten seconds later and it is still stored, they get it without any database call.

And you can guess what I implemented there: it's just a cache. As you can see, with this cache everything in Trixnity is reactive. These are just a few examples: you can get all users, or check whether a user can invite another one, and you are immediately notified if anything changes.
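The caching idea described here can be sketched with plain Kotlin coroutines and flows, independent of Trixnity's actual implementation. The sketch below keeps a MutableStateFlow per key, reads from a thin store on first access and writes changes back; the names and the delayed eviction are simplified for illustration:

```kotlin
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Thin persistence layer: it only knows how to save, read and delete.
interface ThinStore<K, V> {
    suspend fun read(key: K): V?
    suspend fun save(key: K, value: V?)
}

class FlowCache<K, V>(private val store: ThinStore<K, V>) {
    private val entries = mutableMapOf<K, MutableStateFlow<V?>>()
    private val mutex = Mutex()

    // Subscribers get a StateFlow; the first access loads from the database,
    // later subscribers reuse the cached value without any database call.
    suspend fun get(key: K): StateFlow<V?> = mutex.withLock {
        entries.getOrPut(key) { MutableStateFlow(store.read(key)) }
    }

    // Writes update the flow (so all subscribers see the change immediately)
    // and are forwarded to the thin database layer.
    suspend fun set(key: K, value: V?) {
        mutex.withLock { entries.getOrPut(key) { MutableStateFlow(value) } }.value = value
        store.save(key, value)
    }

    // In the real cache, entries are only dropped some time after the last
    // subscriber is gone; that eviction logic is omitted here.
}
```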
As mentioned, the database layer is really thin, so we implemented many database layers: an SQL-based one via Exposed for JVM-based targets, one with Realm that can also be used on native targets like iOS, and IndexedDB for browsers.

Most of the data changes while a sync is processed, so it is far more performant to put one large transaction around the sync. You don't open a transaction every time the cache writes something to the database; Trixnity just spans one large transaction around the sync, so you get thousands of writes in a single transaction.

So, everything fine? No. Then there was Realm. Realm is a really fast database, but it only allows one write transaction at a time. If someone else wants to write to the database, they have to wait until the first transaction has ended. The problem is that while the sync is running, we may have to wait for outdated keys to be updated in order to decrypt things. So if the outdated-keys part of Trixnity wants to write something to the database, it has to wait until the sync transaction has ended — but the sync is waiting for the keys to be updated. So we have a deadlock.

This is one of the reasons why I introduced async sync transactions. The other reason was that most of the sync processing time, as I found out with some benchmarks, was wasted on writing to the database. Processing a sync takes a long time because there are so many IO operations, and the user has to wait until all of them are done.

So what does an async transaction in Trixnity mean? All changes to the database are collected and processed in the background. Database operations are decoupled from the cache layer and are simply written in the background. If anything fails, the transaction is rolled back, but that is irrelevant in the normal use case. This way we can process more syncs in the same time than if we waited for each transaction to finish. And this gave Trixnity a huge performance boost. I released it last week, and I wrote an integration test which fails if it is not at least 50% faster. So far it is always green — I don't know.
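A very rough sketch of that async-transaction idea — collect the writes of one sync, flush them in the background, decoupled from the cache. The types and the `commit` callback are assumptions for illustration, not Trixnity's actual transaction handling:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch

// One queued transaction: all database writes collected during one sync.
class QueuedTransaction(val operations: List<suspend () -> Unit>)

class AsyncTransactionWriter(
    scope: CoroutineScope,
    private val commit: suspend (QueuedTransaction) -> Unit, // e.g. wrap the ops in one DB transaction
) {
    private val queue = Channel<QueuedTransaction>(capacity = Channel.UNLIMITED)

    init {
        // Transactions are written one after another in the background,
        // so sync processing never waits for the database.
        scope.launch {
            for (transaction in queue) {
                try {
                    commit(transaction)
                } catch (e: Exception) {
                    // On failure the batch is rolled back by the database;
                    // the client would redo the work from the old sync token.
                }
            }
        }
    }

    fun submit(transaction: QueuedTransaction) {
        queue.trySend(transaction)
    }
}
```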
The next thing I did completely differently in Trixnity are timelines. Normally, syncs are sent as fragments from the server to the client: one fragment contains a few timeline events, and if there is a gap, you get a token. So as a client you know: OK, there is a gap, I need to fetch more to fill it, and so on. These fragments are normally saved to the client's database as they are.

In Trixnity I use another approach: each timeline event points to its neighbouring events, and if there is a gap, the timeline event itself knows about it. This allows Trixnity to benefit from Kotlin flows again: there is a producer — the room, starting from one timeline event — and a subscriber who wants the next timeline event to fill its timeline. This lets us walk through the timeline really quickly and build it up, and it makes filling the gaps easier, because there is no extra layer of fragments — there are just timeline events.

This way it is also very easy to connect upgraded rooms; I released that yesterday, I think, or two days ago. The timeline event simply points to another timeline in another room, so timelines across room upgrades are invisible to users of Trixnity: you just get one endless timeline until you reach the oldest room and the very first timeline event.

And finally, a small example. If you want to write a bot, that is a good way to start with Trixnity and get a feeling for how it works. You can just call getTimelineEventsFromNowOn. What this does is subscribe to the flow I mentioned, which builds the timeline, and wait until each timeline event is decrypted — because a timeline event itself is also a flow. If anything changes — the event is redacted, or there is a reaction or a replacement — the timeline event flow changes. getTimelineEventsFromNowOn just unwraps that, so you get a timeline event that is decrypted. Then you can check which type it has, and once you have checked the type you have access to the body. And then there is sendMessage. When you call sendMessage, you don't have to worry about whether the room is encrypted or not. You can just use the DSL that I created to write text messages, image messages and so on, and you can also express relations with it — for example, as here, that this is a reply to the timeline event I just received. The DSL has extensible events in mind, so if other content blocks are added in the future, we can just extend it, and you can write your content into an extensible event very intuitively.

Here are some projects that I know of which are using Trixnity. There is a Spotify bot and a Mensa bot, and someone has created some extensions to make Trixnity easier to use for bots.
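Putting the pieces from this example together, a tiny ping-style bot could look roughly like the following. getTimelineEventsFromNowOn and the sendMessage DSL are the calls named in the talk, but the surrounding details (the content types, the reply helper) are simplified and may not match the real API exactly:

```kotlin
// Sketch of a minimal bot loop; assumes `client` is an already logged-in
// and syncing trixnity-client MatrixClient.
suspend fun runPingBot(client: MatrixClient) {
    client.room.getTimelineEventsFromNowOn().collect { timelineEvent ->
        // At this point the timeline event has already been decrypted.
        val content = timelineEvent.content?.getOrNull()
        if (content is RoomMessageEventContent && content.body == "ping") {
            client.room.sendMessage(timelineEvent.roomId) {
                text("pong")         // the DSL hides whether the room is encrypted
                reply(timelineEvent) // relation: reply to the event we just received
            }
        }
    }
}
```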
There is also trixnity-examples. That one is from me — it is just a ping bot, part of which you saw here. It is end-to-end encrypted, and you can run it in your browser, on your Linux machine, on your iOS device, or via the JVM on Android. And there is Timmy Messenger, the messenger from our company, but it is not open source yet. We plan to open it, but we don't know how yet, because of licensing.

So just try Trixnity out and come to me if you have questions — I am around for a bit. This is the Matrix room, and this is my Matrix ID.

If we have a bit of time, I can show you a small demo. I made a small performance comparison. It is not representative, because it ran just once on my machine, with no warm-up and no repeated runs. On the left side you see our Timmy client, which is basically just using Trixnity. On the right side there is Element, and in the middle is FluffyChat. Now you can place your bets on which one is the fastest. When the red room appears, the response from the server has reached the client — I looked into the Synapse logs to see when the response was sent. So we wait a few seconds, and then we see who is first. You can also watch the opening of rooms: because of the caching it is very fast in our client, but I have to say FluffyChat is also very fast at opening rooms. So — oh, Trixnity was the fastest. And we can open rooms, and you see opening rooms is also a lot faster than in Element. And there is FluffyChat, which is also very fast.

I also have a desktop demo, but there Nheko is the fastest. This is Nheko, this is Timmy, and this is Element on the web. The comparison is a bit unfair, because Element runs in the web and does not have the multi-threading the other clients have. So Nheko: just three seconds, and I can chat right away. Next is Timmy at the top left, also very fast at opening and switching rooms, because events are cached all the time. Then came Element, and I think FluffyChat as well.

OK, that was my talk, thank you. Questions?

How do you prevent data loss with your async transactions?

The transactions are run one after another. So if one transaction fails, the other transactions still run, and the next one starts from the old token.

What happens if, say, your battery runs out while a bunch of transactions are queued?
If your battery runs out while those transactions are queued, so they haven't been written to the database — yes, then they are gone. Your client has to do the work again, but this usually doesn't happen. If you close your client, all open transactions are still written; it depends on your platform whether the app is killed hard or Trixnity gets a bit of time to write the transactions to the database. Writing is still very fast — it is just a bit snappier on mobile devices, which are not that fast. Take my smartphone from 2016: I can't run Element on it, because it is too slow — I send a message, and ten seconds later: oh, OK, yes, now we send the message. I don't have this problem, because syncs are just faster than the slow IO of writing to the database that you have on old smartphones, for example.

Another question?

It's nice to write. I like DSLs. In Kotlin we have them all over the language, and it feels very intuitive, because your IDE gives you suggestions about which methods exist, and it is a lot easier to read, I think.

There is Rust and there is Kotlin — is there any way to reduce the amount of things a user has to learn to use all of this?

I didn't understand the question, acoustically.

There is a lot of language learning needed to make any progress. Is there any effort to unify this, towards Rust maybe?

To be honest, I don't like Rust. I just prefer a higher level of implementing things, so I haven't spoken with the Matrix Rust team.

I think we are out of time, and the last request from the audience is that we open the windows and the doors a bit to get more air in. Thank you very much.

Thank you very much. Yep. That's it.