[00:00.000 --> 00:09.640] Hello everyone, my name is Šimon and I've been working at Element as an intern for more [00:09.640 --> 00:11.560] than half a year now. [00:11.560 --> 00:15.560] Over that time I've been a part of several projects, but I think the most exciting of [00:15.560 --> 00:21.800] all has been working with the VoIP team on cascaded foci and selective forwarding units. [00:21.800 --> 00:26.000] So let me introduce you to all of that. [00:26.000 --> 00:30.520] Firstly, we'll see what Element Call is and how it works. [00:30.520 --> 00:35.520] We'll also look at Waterfall, the Matrix approach to foci and SFUs. [00:35.520 --> 00:42.040] We'll have a short demo of how all of this looks in action and see what the future holds [00:42.040 --> 00:45.040] at the end. [00:45.040 --> 00:47.880] So what is Element Call? [00:47.880 --> 00:51.680] It is the video calling app which we've been working on at Element. [00:51.680 --> 00:56.760] It has all of the features which you would expect, such as screen sharing, and also allows [00:56.760 --> 01:03.760] you to reorder all of the tiles to suit your needs, among other things. [01:03.760 --> 01:10.680] It uses two main building blocks, one of them being WebRTC for the media transmission and [01:10.680 --> 01:13.200] Matrix for the actual signaling. [01:13.200 --> 01:20.160] That means doing all of the things such as setting up a connection. [01:20.160 --> 01:23.560] We use what we call the full mesh connection model. [01:23.560 --> 01:28.360] That means that in order to create a conference, all of the clients have to connect to all [01:28.360 --> 01:31.120] of the other clients. [01:31.120 --> 01:36.760] While this has several benefits, such as not requiring a backend, being decentralized [01:36.760 --> 01:45.360] unlike Jitsi or Zoom, and even supporting encryption out of the box, it is harder to make it stable. [01:45.360 --> 01:54.760] And its main downside is that it requires a lot of upstream bandwidth on every user's side, [01:54.760 --> 02:02.240] since for you to share your video with other users, you have to upload it to each individual. [02:02.240 --> 02:06.440] So at about 8 users, things might get a little choppy, as the sketch at the end of this section illustrates. [02:06.440 --> 02:09.720] So the question is, can we do better? [02:09.720 --> 02:12.560] But first, let's look at why we want to do better. [02:12.560 --> 02:15.680] What are the use cases for large conferences? [02:15.680 --> 02:22.680] The obvious one is large calls, for us to be able to compete with Jitsi, Zoom and the like. [02:22.680 --> 02:27.320] Another use case, which is also quite interesting, is webinars. [02:27.320 --> 02:32.360] There you might only have a few presenters, but these presenters would need to publish [02:32.360 --> 02:38.880] their video to a bunch of listeners, and that would also require a lot of upstream bandwidth. [02:38.880 --> 02:44.840] The last use case, but possibly the most interesting one, is Third Room, the immersive virtual [02:44.840 --> 02:47.960] world tech that is built on Matrix. [02:47.960 --> 02:54.240] This would also greatly benefit from being able to host virtual worlds with hundreds [02:54.240 --> 02:58.760] and hundreds of users. [02:58.760 --> 03:02.120] But how do we actually do all of this? [03:02.120 --> 03:07.320] Well, the solution is to introduce a backend, or what is called in the video conferencing [03:07.320 --> 03:09.520] world a focus. [03:09.520 --> 03:13.920] This might either be an MCU or an SFU. [03:13.920 --> 03:17.960] So let's look at what these are.
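Before that, a quick back-of-the-envelope sketch of the full-mesh upstream problem just described. The 2.5 Mbps per-stream bitrate is an illustrative assumption, not a figure from the talk:

```typescript
// Back-of-the-envelope: per-user upstream in a full-mesh call vs. via an SFU.
// The bitrate below is an assumed, illustrative figure (roughly 720p video).
const STREAM_BITRATE_MBPS = 2.5;

function fullMeshUpstreamMbps(participants: number): number {
  // In full mesh, each client uploads its stream to every other client.
  return (participants - 1) * STREAM_BITRATE_MBPS;
}

function sfuUpstreamMbps(): number {
  // With an SFU, each client uploads exactly one copy, regardless of call size.
  return STREAM_BITRATE_MBPS;
}

for (const n of [2, 4, 8, 16]) {
  console.log(
    `${n} users: full mesh ${fullMeshUpstreamMbps(n)} Mbps up, SFU ${sfuUpstreamMbps()} Mbps up`
  );
}
// At 8 users, full mesh already needs 17.5 Mbps of upstream per person,
// while the SFU case stays flat at 2.5 Mbps: hence the choppiness.
```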
[03:17.960 --> 03:22.840] An MCU, or a multipoint control unit, is a server which transcodes and mixes all of [03:22.840 --> 03:28.720] the users' streams into one, which it then forwards to all the users. [03:28.720 --> 03:35.440] This has the great benefit of requiring only one upstream and one downstream per user, but [03:35.440 --> 03:37.000] it has many downsides. [03:37.000 --> 03:41.360] The streams are mixed together, so the clients have no control of how they are laid out [03:41.360 --> 03:43.440] on the screen. [03:43.440 --> 03:49.880] The server also breaks end-to-end encryption, since it has to transcode and mix the streams. [03:49.880 --> 03:57.600] The server also must be powerful to do all of this, and transcoding and mixing also takes [03:57.600 --> 04:02.520] some time, and so the server introduces delay. [04:02.520 --> 04:10.960] Also, it has the obvious downsides of requiring a backend and a bit more signaling. [04:10.960 --> 04:16.920] That is why we prefer the selective forwarding unit, or SFU, solution. An SFU is a server [04:16.920 --> 04:23.320] which takes a user's stream and then forwards it to all of the other users. [04:23.320 --> 04:29.200] This is quite a bit more reliable than the full mesh connection model and easier to set up. [04:29.200 --> 04:34.840] It requires much less upstream bandwidth, scales very well to large conferences and [04:34.840 --> 04:41.680] also does not break end-to-end encryption, if done with something like Insertable Streams. [04:41.680 --> 04:45.960] The streams are also sent separately, so the clients are in control of laying them out [04:45.960 --> 04:48.880] on the screen. [04:48.880 --> 04:56.920] The server can also work in a federated manner, which will be important for the next slide. [04:56.920 --> 05:01.040] So what is Waterfall and why is it special? [05:01.040 --> 05:07.560] It is the SFU that has been contributed to us by Sean, the creator of Pion. [05:07.560 --> 05:12.360] It uses the Matrix protocol for the actual signaling. [05:12.360 --> 05:20.240] That means that it is interoperable, so anyone implementing MSC3401 and MSC3898 can build [05:20.240 --> 05:24.800] their own SFU or connect to ours. [05:24.800 --> 05:27.720] We also support what we call cascading. [05:27.720 --> 05:33.800] That means that Waterfall, our SFU, can connect to a bunch of other SFUs. [05:33.800 --> 05:38.520] This is very similar to what we do with Matrix homeservers and creates federated Matrix [05:38.520 --> 05:41.160] RTC worlds. [05:41.160 --> 05:45.600] Conferences done in this manner can dynamically grow, since the load is balanced between all [05:45.600 --> 05:51.000] of the foci that are part of the conference. [05:51.000 --> 05:55.520] So if we assume that the foci interconnect at high quality and that they are placed near [05:55.520 --> 06:01.080] participants, it means that network hiccups are minimized and we have much less delay [06:01.080 --> 06:04.360] and packet loss. [06:04.360 --> 06:09.840] Also, our foci are a little dumb and the clients are actually the smart ones, so they are in [06:09.840 --> 06:15.880] control of which foci they connect to and can subscribe to tracks on their own. [06:21.400 --> 06:27.280] So how does a client actually connect to a focus and how does it choose one? [06:27.280 --> 06:31.760] On the left we can see an example where Alice and Bob are a part of a conference which is [06:31.760 --> 06:35.600] happening on the SFU in Brussels. [06:35.600 --> 06:39.360] And Charlie wants to join this conference.
[06:39.360 --> 06:44.480] While Charlie is located in London and has a London SFU, it is actually much easier for [06:44.480 --> 06:48.640] Charlie to connect directly to the Brussels one. [06:48.640 --> 06:51.040] But they will still prefer the London one. [06:51.040 --> 06:55.440] And this is all indicated in the m.call.member state event. [06:55.440 --> 07:01.360] In the m.foci_active field you can see that Charlie is actually connected to the Brussels [07:01.360 --> 07:07.760] SFU, but in the m.foci_preferred field we can see that they would prefer connecting [07:07.760 --> 07:10.360] to the London one. [07:10.360 --> 07:17.640] Then on the right we can see yet another user, Dan, joining the call, who is also located [07:17.640 --> 07:18.760] in London. [07:18.760 --> 07:26.040] And so now there is a benefit for both Charlie and Dan to use the London SFU and for the [07:26.040 --> 07:28.720] SFUs to federate with each other. [07:28.720 --> 07:35.040] And this is all again indicated in Charlie's member state event, because now they are connected [07:35.040 --> 07:39.440] to the London SFU instead of the Brussels one. [07:39.440 --> 07:45.080] You can notice that both m.foci_active and m.foci_preferred are arrays. [07:45.080 --> 07:52.040] So Charlie, or anyone else, can specify multiple SFUs they prefer, but also multiple SFUs they [07:52.040 --> 07:54.760] are publishing to. [07:54.760 --> 08:01.320] So perhaps, if you know that a connection to a certain SFU might be unreliable and you want a backup, [08:01.320 --> 08:07.440] you can do that by publishing to multiple SFUs. [08:07.440 --> 08:11.800] Now that we have chosen an SFU, how do we actually connect to it? [08:11.800 --> 08:19.240] This is done very similarly to what we do in full mesh calls, using to-device messages [08:19.240 --> 08:20.240] over Matrix. [08:20.240 --> 08:26.400] We exchange the SDP using the m.call.invite and m.call.answer events and the candidates using [08:26.400 --> 08:29.920] the m.call.candidates event. [08:29.920 --> 08:35.960] Once the connection is actually established, the client and the focus continue exchanging [08:35.960 --> 08:39.600] messages using the WebRTC data channel. [08:39.600 --> 08:44.960] This is quite useful, since it's super fast, and so where we need to negotiate quickly, [08:44.960 --> 08:54.000] it is much better than using Matrix to-device messages. [08:54.000 --> 08:58.800] So now that we are connected to the conference, how do we make it happen that the users actually [08:58.800 --> 09:01.520] see something? [09:01.520 --> 09:09.360] On the left we can see an SFU publishing a stream, described using the sdp_stream_metadata field. [09:09.360 --> 09:14.080] This field is present on multiple events, such as the m.call.sdp_stream_metadata_changed [09:14.080 --> 09:21.080] event or the m.call.invite, answer or negotiate events. [09:21.080 --> 09:28.960] In this specific case, we can see that Alice is sending stream ID 1, which has two tracks: [09:28.960 --> 09:36.640] track ID 2, which is audio, and track ID 1, which is video and has a resolution of 1280×720. [09:36.640 --> 09:41.440] On the right we can see Bob subscribing to Alice's stream. [09:41.440 --> 09:45.720] This is done using the m.call.track_subscription event. [09:45.720 --> 09:51.040] In the subscribe field we can see the two tracks we are subscribing to: track ID 2, which [09:51.040 --> 09:56.520] is Alice's audio track, and track ID 1, which is Alice's video track, as in the sketch below.
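A rough sketch of the two signalling shapes just described, based on MSC3401 and MSC3898 as presented in the talk; the exact field names, IDs and values here are illustrative assumptions rather than a normative spec reference:

```typescript
// Charlie's m.call.member state event content: actively connected to the
// Brussels SFU while preferring the London one. Both foci fields are arrays,
// so a client can list several SFUs it prefers or publishes to.
// (User IDs and the call ID are made up for illustration.)
const charlieMemberContent = {
  "m.calls": [
    {
      "m.call_id": "conference-1234",
      "m.foci_active": [{ user_id: "@sfu:brussels.example.org" }],
      "m.foci_preferred": [{ user_id: "@sfu:london.example.org" }],
    },
  ],
};

// Bob's track subscription, sent to the SFU over the WebRTC data channel:
// subscribe to Alice's audio and video tracks (requesting a specific
// resolution for the video), and unsubscribe from a track the SFU no
// longer publishes.
const trackSubscription = {
  subscribe: [
    { track_id: "2" },                          // Alice's audio
    { track_id: "1", width: 640, height: 360 }, // Alice's video, scaled down
  ],
  unsubscribe: [
    { track_id: "3" },                          // no longer published
  ],
};
```

The width and height in the subscription request are what make the resolution selection via simulcast, described next, possible.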
[09:56.520 --> 10:04.560] We can also notice that Bob is subscribing at a specific resolution, 640×360. [10:04.560 --> 10:07.680] This will be very important for the next slide. [10:07.680 --> 10:13.360] We can also notice that Bob is unsubscribing from track ID 3, since the SFU is no longer [10:13.360 --> 10:19.840] publishing it, as it is not present on the left. [10:19.840 --> 10:25.400] While SFUs do solve the issue of upstream bandwidth, there is still the issue of a lot of streams [10:25.400 --> 10:29.520] going down, and therefore downstream bandwidth. [10:29.520 --> 10:32.360] So how do we actually limit that, if we can? [10:32.360 --> 10:36.960] The solution to that problem is simulcast. [10:36.960 --> 10:43.080] Simulcast means that all of the clients publish multiple versions of their streams, each at [10:43.080 --> 10:49.040] a different resolution, for example 1080p, half of that and a quarter of that. [10:49.040 --> 10:55.800] The focus or SFU then forwards the ideal resolution to a given client. [10:55.800 --> 11:00.600] So on the right we can see two clients, the light green one and the dark green one. [11:00.600 --> 11:09.120] The light one is looking at one tile, one stream, and it is quite large on the user's screen, and [11:09.120 --> 11:14.000] therefore the SFU is forwarding the 1080p version of that stream. [11:14.000 --> 11:21.120] But at the bottom, the dark green client is actually subscribing to three streams, and [11:21.120 --> 11:28.160] they are displayed in quite small tiles on the user's screen, and therefore we only forward [11:28.160 --> 11:34.600] the 360p versions of those streams. [11:34.600 --> 11:44.000] The focus might also send a lower resolution depending on the available bandwidth. [11:44.000 --> 11:45.920] Now let's look at the demo. [11:45.920 --> 11:49.360] Hi, thanks very much, Šimon. [11:49.360 --> 11:54.000] So I'm here to run you through an actual demo of Element Call and just show you generally [11:54.000 --> 11:57.520] how the application works and what the features look like. [11:57.520 --> 12:02.280] You can see me here in this little box down on the bottom right, and in the rest of the [12:02.280 --> 12:04.880] boxes are some of the rest of the VoIP team. [12:04.880 --> 12:08.200] Obviously you've got Šimon, who has just been speaking to you. [12:08.200 --> 12:09.200] Florian. [12:09.200 --> 12:11.040] Hello, I'm Florian. [12:11.040 --> 12:13.280] I'm the engineering manager of the VoIP team. [12:13.280 --> 12:18.080] Yeah, and we're having a lot of fun developing Element Call, right? [12:18.080 --> 12:19.080] And Enrico. [12:19.080 --> 12:20.080] Hello there. [12:20.080 --> 12:21.080] I'm Enrico. [12:21.080 --> 12:22.080] Nice to be here. [12:22.080 --> 12:23.080] Excellent. [12:23.080 --> 12:32.440] So the first thing I'll show you is the two modes. We are in freedom mode at the moment, [12:32.440 --> 12:36.960] and what freedom mode means is that I can take these tiles and throw them around the [12:36.960 --> 12:40.120] screen in a rather fun way, like that. [12:40.120 --> 12:44.760] So if I want whoever on the top right, I can do that. [12:44.760 --> 12:48.840] The other thing I can do in freedom mode is make any of the tiles bigger if I want by [12:48.840 --> 12:52.080] just double clicking them. [12:52.080 --> 12:56.680] So if I want to focus on one tile, I can do that. [12:56.680 --> 13:02.840] So to show you the other mode, spotlight mode: now that has highlighted Enrico as the last [13:02.840 --> 13:03.840] person that spoke.
[13:03.840 --> 13:06.200] Florian, maybe if you speak? [13:06.200 --> 13:09.240] Yeah, so I should be in the focus right now. [13:09.240 --> 13:10.240] Excellent. [13:10.240 --> 13:14.880] So yeah, that's highlighting the person that's speaking at the moment. [13:14.880 --> 13:20.960] I'll put this back on freedom mode for now. [13:20.960 --> 13:25.480] And now, of course, the most interesting thing about this is that it's powered by the [13:25.480 --> 13:26.640] SFU. [13:26.640 --> 13:34.440] So when I make one of these screens larger, you should actually see that the quality will [13:34.440 --> 13:35.440] improve. [13:35.440 --> 13:43.720] I'll do this with... there we go, yeah. [13:43.720 --> 13:48.600] So I think it's quite subtle and you may not necessarily be able to see it on the video, [13:48.600 --> 13:53.960] but if you look at Šimon, at the books in the background, you can see that you can read [13:53.960 --> 13:57.880] the spines on the books better, when previously you couldn't. [13:57.880 --> 14:01.480] What might be a better demo of this is we've actually got some debugging tools here. [14:01.480 --> 14:07.720] I can go into Developer and, this is brand new, show call feed debug info, so we can [14:07.720 --> 14:12.880] actually tell the exact video sizes that I'm getting from everybody here. [14:12.880 --> 14:15.600] So I'm getting 640 by 360 from Šimon at the moment. [14:15.720 --> 14:19.920] And if I make him larger, after a couple of seconds, there we go. [14:19.920 --> 14:23.480] He's now in beautiful 720p, 1280 by 720. [14:25.000 --> 14:31.520] So that has automatically changed resolution pretty seamlessly; if you weren't [14:31.520 --> 14:35.080] looking at the spines on his books, trying to read the covers and trying to make out that [14:35.080 --> 14:40.680] he has his Mathematica book in the background, then you probably wouldn't even notice. [14:41.680 --> 14:47.240] But yeah, and then just back down to lower quality. [14:47.240 --> 14:50.280] You will see a slight pause there. [14:50.280 --> 14:53.400] That is something that we haven't finished yet. [14:53.400 --> 15:00.240] It actually waits for a keyframe to arrive before forwarding the lower-res stream. [15:00.240 --> 15:01.240] We don't need to do that. [15:01.240 --> 15:05.640] We could continue forwarding the higher-res stream and then only actually switch when [15:05.640 --> 15:08.520] there's a keyframe, and that will make it much smoother. [15:08.520 --> 15:10.520] We just haven't got around to that yet. [15:10.520 --> 15:16.960] That is basically the next thing that we will do in SFU development. [15:16.960 --> 15:22.720] Right, the next thing I'm going to show you, while I am on debug settings, [15:22.720 --> 15:28.760] back in Developer, is the other option, the call inspector. [15:28.760 --> 15:31.360] You can see this down here. [15:31.360 --> 15:33.200] Let's make it a bit bigger. [15:33.200 --> 15:35.000] There we go. [15:35.000 --> 15:38.760] So these are the automatically generated usernames. [15:38.760 --> 15:45.560] In this case, I think it's Peach Outdoor Earthworm, somebody. [15:45.560 --> 15:49.920] So here we've got, yeah, I can select one of these people and I can see all the signaling [15:49.920 --> 15:56.720] that's going backwards and forwards between the various parties in the call. [15:56.720 --> 15:59.240] One of these parties is the SFU.
[15:59.240 --> 16:03.240] So you can see all the signaling; in fact, I only have signaling with the SFU, [16:03.240 --> 16:07.080] because my only call is with the SFU, but you can see all the candidates and all the [16:07.080 --> 16:14.000] invites that we were talking about in the talk going backwards and forwards, which [16:14.000 --> 16:16.000] is super useful for debugging. [16:16.000 --> 16:19.000] There we go. [16:19.000 --> 16:21.840] Let's turn that off again. [16:21.840 --> 16:30.960] Yes, I think the last thing we want to demo is screen sharing, which is a really [16:31.040 --> 16:33.320] good demo of the quality switching as well. [16:33.320 --> 16:38.080] So maybe Florian would like to start sharing some interesting content. [16:38.080 --> 16:40.480] Oh, yes, I would do. [16:40.480 --> 16:43.720] So this is from the talk earlier in the day. [16:43.720 --> 16:44.720] There we go. [16:44.720 --> 16:45.720] Yeah. [16:45.720 --> 16:48.560] So yeah, that's actually how screen sharing works. [16:48.560 --> 16:54.720] Again, I can put the call feed debug info back on. [16:54.720 --> 16:58.840] You can see that's coming through at a fairly high resolution. [16:58.880 --> 17:00.720] I am still on freedom mode. [17:00.720 --> 17:07.560] So that's automatically switched into spotlight mode because screen [17:07.560 --> 17:08.560] sharing has started. [17:08.560 --> 17:12.880] That's usually what you want, but I can switch back into freedom mode, at which point you [17:12.880 --> 17:13.880] should see... [17:13.880 --> 17:14.880] Yes, there we go. [17:14.880 --> 17:18.640] The quality of that adjusts down again, just the same as any other feed. Or go back into [17:18.640 --> 17:19.640] spotlight, [17:19.640 --> 17:20.640] so you can see that. [17:20.640 --> 17:21.640] There you go. [17:21.640 --> 17:27.800] So yeah, it works just as well with screen sharing as it does with any other video feed. [17:27.800 --> 17:29.480] The text is all nice and readable. [17:29.480 --> 17:31.480] And there you go. [17:31.480 --> 17:32.480] Yeah. [17:32.480 --> 17:37.720] Thank you very much for watching, and I think that's about everything from us. [17:37.720 --> 17:38.720] Thank you. [17:38.720 --> 17:39.720] Bye-bye. [17:39.720 --> 17:42.560] Have a nice party. [17:42.560 --> 17:45.240] Let's look at what the future holds. [17:45.240 --> 17:51.760] Firstly, we'd like to look at actually implementing the focus selection logic and cascading. [17:51.760 --> 17:56.960] This has been partially specced out, but hasn't been implemented in the real world just yet. [17:56.960 --> 18:05.880] So we'd like to do that to see how it works in action and, if necessary, amend the specification. [18:05.880 --> 18:11.080] Another thing which is quite important to us is end-to-end encryption via Insertable Streams, [18:11.080 --> 18:18.240] to prevent potentially malicious foci admins from listening in on conferences. [18:18.240 --> 18:24.920] And the last big thing which we have on our minds now is rapid track switching by pre-negotiating [18:24.920 --> 18:27.840] transceivers for tracks. [18:27.840 --> 18:35.520] This allows us to avoid having to do the renegotiation dance and allows us to lower the delay when [18:35.520 --> 18:38.360] switching between different tracks. [18:38.360 --> 18:40.600] This is useful in two main scenarios.
[18:40.600 --> 18:45.240] One of them is running through a crowded area in Third Room, where you have to switch between [18:45.240 --> 18:50.680] multiple audio sources of all of the people in that area and you want to avoid any [18:50.680 --> 18:53.440] latency or delay. [18:53.440 --> 19:02.080] Another use case for that is quickly switching the active speaker, to avoid any delay in that situation. [19:02.080 --> 19:04.440] We need help. [19:04.440 --> 19:09.440] If you have time, you can check out Element Call, both the stable version and the develop [19:09.440 --> 19:18.000] version, or even the MSC3898 version, which supports SFUs. [19:18.120 --> 19:23.360] Also, if you have experience with Matrix, WebRTC or coding in general, you can check [19:23.360 --> 19:26.520] out the repositories, pull requests and MSCs. [19:26.520 --> 19:43.360] Thank you for listening and I hope you have a good time at FOSDEM. [19:43.360 --> 19:48.880] So the question is about using it with other Matrix homeservers, whether it's tied to [19:48.880 --> 19:53.800] a specific one or can't do it otherwise: no, it should work with any Matrix homeserver. [19:53.800 --> 20:00.720] It doesn't really matter which one you use. [20:30.720 --> 20:44.920] That's a good question. [20:44.920 --> 20:49.320] Šimon, will you go first? [20:49.320 --> 20:50.320] You can go first. [20:50.320 --> 20:51.320] Okay. [20:51.320 --> 20:57.280] Yeah, so the question is, what are you most looking forward to achieving with the SFU? [20:57.280 --> 21:04.120] So basically the idea is that the SFU is quite dumb in terms of application logic, [21:04.120 --> 21:10.720] and yeah, we want to implement the cascading bit of it and then test and see if we really [21:10.720 --> 21:14.400] can scale conferences to virtually any size. [21:14.400 --> 21:18.440] I think that would be the pretty cool long-term goal. [21:18.440 --> 21:26.400] The other one, in the nearer term, is to get end-to-end encryption in, not in the SFU but on [21:26.400 --> 21:32.240] the clients, such that you have end-to-end media encryption, and to have sane control [21:32.240 --> 21:37.240] of network bandwidth and media quality. [21:37.240 --> 21:43.680] So I would definitely agree with that, and among other things I'm also looking forward [21:43.680 --> 21:49.720] to just getting rid of Jitsi, so that all the Matrix clients can just rely on something [21:49.720 --> 21:55.520] based on Matrix, because the Jitsi experience isn't exactly great in all situations. [21:55.520 --> 21:56.520] Yeah. [22:00.520 --> 22:01.520] So I'm looking forward to this. [23:02.520 --> 23:12.520] Hi. [23:12.520 --> 23:19.520] So I can try to emulate Travis and what he used to say last year. [23:19.520 --> 23:39.520] So if you have any other questions, like what is your favorite color, then you can ask. [23:39.520 --> 23:55.520] Actually, an interesting question: can we share two things in Chrome and try that?
[23:55.520 --> 24:18.520] So today we are using the new large grid layout. As you can see, it's not completely [24:18.520 --> 24:27.520] perfect for screen sharing, since you can't make it full screen, but we are going to merge it with the layout, you know, from the current version of Element Call, [24:27.520 --> 24:43.520] to be able to see it in the regular spotlight view you have now. [24:43.520 --> 25:12.520] Hey, cool. You have the first person joining you. Hi. [26:36.520 --> 26:53.520] Okay, so the talk seems to be ending in about one minute. So thank you for watching. And I guess we'll be sent over to the other room that gets created, or the viewers will be sent over to the room in which we are. [26:53.520 --> 26:58.520] Yeah, thanks for attending and have fun with the remaining talks. [26:58.520 --> 27:03.520] Yeah, thanks everybody. [27:03.520 --> 27:06.520] Cool. [27:06.520 --> 27:11.520] The bot is quite eager for the talk to end, since it's sent the message. [28:00.520 --> 28:19.520] Yeah, that's true. So I think we can close the session, right? [28:19.520 --> 28:24.520] We'll be over in Element Call. So I'll leave here. See you. Bye-bye. [28:49.520 --> 28:54.520] Thank you.