I'll try my best. So I'm Nicolas Dufresne, I work for Collabora. I've been hacking on GStreamer for over 10 years now, and I also hack a bit on libcamera — I've earned a T-shirt this week. I'm going to give you an update of what has been happening in GStreamer since last FOSDEM, basically.

So, in numbers: at last FOSDEM we were releasing 1.22.0 of the GStreamer framework, and since then we did nine bugfix releases. So the Rust components — oh, that's off. I know why, let me fix that quickly. Sorry about that. There we go — just a different version of the tool. As we mentioned last year, the Rust bindings and Rust plugins actually live in separate repositories with their own release cycle: there were 13 releases of the Rust bindings and 18 releases of the Rust plugins. There have been 13 security fixes — basically security issues reported by researchers, all in C code — and over 600 backports on the C side and 600 backports on the Rust side.

As for development, we haven't released 1.24 yet, even though Tim actually mentioned that it might have been released. It should have been released; we're working on it, and I think we're super close. But understand that we had 1,400 merge requests and about 5,000 commits on the C side, plus 750 commits on the Rust side. That has been quite a lot of work, and it also introduced a bit of instability in the development tree that we need to take care of.

On the community side, one of the big changes is that we're slowly — so we didn't kill it — moving from IRC to Matrix. You can join us on the Matrix community, which basically brings different topic channels, so you can have a channel dedicated to a topic, which makes the channels a bit less noisy. We also introduced this year, for support, a Discourse, which is a forum. It's something that had been requested for a long time: people always wanted to use our issue tracker as a forum, and we didn't want that. Now we give the users a forum to do it. And it literally killed the mailing list — I don't really get anything there anymore.

Now, the rest is going to be quite fast. I might skip some of your contributions if you're a GStreamer contributor, because there's too much. I'm going to give you an overview of what changed, starting with things that didn't fit anywhere else. At the very core, in GstMeta, we've introduced the ability to serialize and deserialize metas for IPC purposes. We introduced the GstAnalyticsRelationMeta, which is an attempt to standardize analytics data exchange between different ML systems, AI systems, or standard OpenCV kinds of analysis. Small but there: sortbin is now in the registry, so it's an element that you can discover and that you can start learning about — four years later. ONNX has gained inference elements and has been refactored for zero-copy; it's the most active machine-learning set of elements in GStreamer. Orc has gained AVX2 support, which speeds up software processing a lot on recent laptops. You no longer need a muxer in encodebin, which makes encodebin a lot more useful in a lot more situations. The WPE web source has been updated to the latest API. And finally — I think this feature is kind of cool and it didn't fit anywhere — QML6 has a mixer, mixing straight to the display, which is a lot more efficient than doing tons of render passes to get a single buffer and then rendering it.
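As a small aside: once an element is in the registry, discovering it at runtime is just a factory lookup. Here's a minimal C sketch — the element name is just the example from above, so substitute whatever element you want to inspect:

```c
/* Minimal sketch: look up an element factory in the registry and print
 * its description. "sortbin" is just the example mentioned above. */
#include <gst/gst.h>

int main(int argc, char **argv) {
  gst_init(&argc, &argv);

  const gchar *name = argc > 1 ? argv[1] : "sortbin";
  GstElementFactory *factory = gst_element_factory_find(name);
  if (!factory) {
    g_printerr("element '%s' not found in the registry\n", name);
    return 1;
  }

  const gchar *desc =
      gst_element_factory_get_metadata(factory, GST_ELEMENT_METADATA_DESCRIPTION);
  g_print("%s: %s\n", name, desc ? desc : "(no description)");

  gst_object_unref(factory);
  return 0;
}
```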
On the codec and parser side, we made quite an important change to the H.264 parser to make our frame splitting actually spec-compliant. It's not yet fully stable — we still have some regressions to look at — but at least it means that you can do fun stuff with your bitstream now and GStreamer won't think it's multiple frames. A small addition, codec2json, is a set of plugins that takes your bitstream, dumps the stream headers, slice headers, and frame headers, and makes JSON out of them so you can actually read them; it's very useful if you're developing an encoder. There was officially no JPEG parser in GStreamer; now there is one, so you no longer have to implement a parser in every single JPEG decoder. mpg123 became our primary MP3 decoder, replacing mad. On Vulkan Video, we were actually early adopters of the standard, and an H.264 decoder has now been merged, with more work coming. Then we added a couple of codecs: LC3, the audio codec, via Google's library, and PHQ, which I'm less familiar with. We've enabled support for the SVT-AV1 software encoder. On the NVIDIA codec side: HDR and stream sharing. On the AMD proprietary codec side, we got HDR, HEVC, and AV1.

On the streaming side, I put a little Rust logo next to what is actually written in Rust, and I'll explain why a bit later. You can see that webrtcsink can now take pre-encoded streams as input, and it has D3D11 and QSV encoder support. There's a base class for the WebRTC sinks, and out of that there's the Janus VR WebRTC sink — VR for VideoRoom, not virtual reality — and an AWS sink as well. These make your life easier: they basically handle the signaling for you, the encoding for you, the bitrate adaptation for you, so it's much easier than going straight to webrtcbin (there's a small sketch of that at the end of this part). We also have some enhancements on the WebRTC source, including TURN support. We now have a WHIP server source to go with the WHIP client elements. There's also a JavaScript API — I must admit I didn't dig very deep into what that thing does, but it sits on top of all this; it's a kind of separate project inside the project. And one more mention: there's basically a new feature where you can do synchronized playback in the jitterbuffer using your system clock, if your system clock is synchronized with your media clock. That's more of an embedded use case, but yeah.

More about streaming. The NDI source gained zero-copy. There's a new element called awss3putobjectsink, which is not very good for throughput, but gives much lower latency for the small chunks of data that you want to upload to the AWS cloud. And the HLS CMAF sink, finally: CMAF is the common fragmented format shared by all the fragmented implementations, whether DASH or HLS, and you can now serve HLS using CMAF in GStreamer. The fragmented muxer has gained VP8 and AV1 support. And another one, which is a bit niche: we now expose a W3C Media Source Extensions API that mimics the web API and that you can bind to whatever you want. The intention is basically to be able to share your JavaScript between something that runs in your browser and something that runs in a kind of standalone player.
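Here's that webrtcsink sketch — a minimal C example, assuming the default signaller from gst-plugins-rs (which expects its signalling server to be running locally). The point is that there's no manual SDP, ICE, or encoder setup, unlike with webrtcbin:

```c
/* Minimal sketch of streaming with webrtcsink (from gst-plugins-rs).
 * Assumes the default signaller, which talks to the gst-plugins-rs
 * signalling server; encoding and bitrate adaptation are handled by
 * the element itself. */
#include <gst/gst.h>

int main(int argc, char **argv) {
  gst_init(&argc, &argv);

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch(
      "videotestsrc is-live=true ! videoconvert ! webrtcsink", &err);
  if (!pipeline) {
    g_printerr("failed to build pipeline: %s\n", err->message);
    g_clear_error(&err);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  /* Stream until an error occurs (live source, so no EOS in practice). */
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  gst_message_unref(msg);
  gst_object_unref(bus);

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}
```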
Specifically on the bindings themselves — I probably missed a couple of things here — there are GES (GStreamer Editing Services) bindings, bindings for the VBI parser and encoder, and accessors for the pad probe info, which make using pad probes a lot less verbose. And there are bindings for some metadata, like the video SEI user-data-unregistered meta, the RTP source meta, and a lot more. The bindings are extremely active and well maintained.

An interesting one — it's a bit uncommon to have such an amount of activity on video formats, especially pixel formats: we've seen 20 new pixel formats on the software conversion side and 27 new pixel formats in GL and D3D11, and I'm actually skipping D3D12 and CUDA here, which probably have similar numbers. We introduced quite something: basically the ability to pass through Linux DRM pixel formats and their modifiers, which is what enables GPU compression, on Linux mostly, and that should make video playback on your laptops a lot faster. It comes with helpers to do caps negotiation, and the design for the negotiation is published on the website. It's implemented in VA, MSDK, and Wayland, and there's more support coming. Small, but notable because it came from a new contributor: someone added 10-bit support to the WebM alpha support. We also introduced 10-, 12-, 14-, and 16-bit software debayering — I think nobody had touched the debayering code for 10 years, so it was a bit of a surprise, but now we're back to the modern days. I think it's mostly because of libcamera.

A lot of activity this year has been on the Windows side, mostly done by a single person, so let's get to the Windows updates. On the D3D11 side, an IPC source and sink have been created, so you can basically share D3D textures across processes. You now have an overlay element, a bit like the Cairo one: you get a draw callback and you get to use the system context to draw — that's about it; it was a question from last year's FOSDEM. We now have D3D11 support for Qt6. Lots of improvements, patterns and such in the D3D11 test source. You can now pre-compile your HLSL shaders and cache them on disk. The NVIDIA decoder can now output to D3D textures. And there's DirectWrite support to optimize the rendering of overlays, which is basically a companion to D3D11. On D3D12 — that's a much newer subsystem, but it's equivalent, and it's now getting almost all the same features as D3D11 — we added MPEG-2, H.264, HEVC, and VP9 decoders (I'm actually surprised not to see AV1 there; it's probably there and I just missed it), and H.264 and HEVC stateless encoders, which are much harder encoders to implement because you have to do more of the work. Also a compositor, overlay, screen capture, a color-space converter, and the same shader pre-compilation support. It didn't do zero-copy until this year, so: zero-copy, threaded decoders to improve throughput, and interoperability with the D3D11 framework. So that's most of the Windows update.

On the OpenGL side, well, most of the work has been to be able to negotiate the DRM modifiers, so we can get compressed video frames into GL, zero-copy, and render them. Just recently we also added the ability to pass through the DMA-buf formats so that we can hand buffers to GTK4, which passes the rendering on to your compositor. So that's coming: you'll basically be able to skip your GPU completely, so it can shut down while you watch your video. We added surfaceless display support, and the GTK4 paintable sink in Rust now has GL support on Windows. More specifically for UNIX-like systems: VA, Wayland, and MSDK, as I mentioned already, have the DRM modifier support, and we now have a VA AV1 encoder implementation, mostly for the newest generation of Intel laptops.
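To make the DRM modifier negotiation a bit more concrete: as I understand the published design, caps carry the generic DMA_DRM format, and a separate drm-format field combines the DRM fourcc with the modifier. A minimal sketch, assuming the 1.24 dma-drm caps layout — the modifier value below is made up for illustration:

```c
/* Minimal sketch of the dma-drm caps introduced for modifier-aware
 * negotiation (as published in the GStreamer negotiation design):
 * buffers are DMABuf memory with the generic DMA_DRM format, and the
 * actual fourcc + modifier live in the drm-format field. The modifier
 * 0x0100000000000002 is illustrative only. */
#include <gst/gst.h>

int main(int argc, char **argv) {
  gst_init(&argc, &argv);

  GstCaps *caps = gst_caps_from_string(
      "video/x-raw(memory:DMABuf), format=(string)DMA_DRM, "
      "drm-format=(string)NV12:0x0100000000000002, "
      "width=(int)1920, height=(int)1080, framerate=(fraction)30/1");

  gchar *s = gst_caps_to_string(caps);
  g_print("example dma-drm caps: %s\n", s);

  g_free(s);
  gst_caps_unref(caps);
  return 0;
}
```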
The Android media codec plugin has been ported from calling into Java entirely from C to being implemented with the NDK, so it's in C except for, I think, one callback. That reduces a lot of the overhead for codecs on Android, and we added AV1 support.

There's a new set of elements for UNIX: unixfdsrc and unixfdsink. They kind of replace shmsrc and shmsink. The main difference is that they actually pass through the caps, serialize the GstMetas to the other side, and also pass memfd or dmabuf or any file descriptor that you would be streaming. So it's zero-copy, unlike the shmsrc and shmsink elements — I'll show a tiny sketch at the end of this part. OSS — that's the Open Sound System, still used on the BSDs these days — now you can enumerate your audio devices, which was quite a missing feature. The Wayland sink has gained support for a DRM dumb-buffer allocator on Linux. It's not exactly automatic yet — that's coming — but what it does is that when you do software decoding, you can decode into a GPU-importable buffer, which means your Wayland compositor is going to be able to use a hardware layer. That removes some copies, and since you're already software decoding, whatever gain you get is a gain — it makes software decoding better.

Now, embedded Linux. We added the V4L2 AV1 stateless decoder; for me it's two years of work, because we also did the kernel side of it. People from Pengutronix added the uvcsink element, a companion to the Linux UVC gadget framework, which basically lets you use GStreamer to create a webcam out of your little chip, provided it supports the appropriate USB protocols. In V4L2, stateless codecs are now CI-tested using QEMU and a virtual driver called visl. It does not really decode; it produces an image that is modified based on the parameters, so it's basically good for catching changes, which are usually just regressions. And it actually works, so we're pretty happy — it runs in 10 seconds, our fastest decoder out there. v4l2src now cares about frame rates, so when you use it you may get something better than the two to five frames per second you'd get by default; it's pretty nice. The allocators library gained dmabuf and DRM dumb-buffer allocators — hopefully we're going to get a GBM allocator too in the future — and we also have Bayer support integrated into v4l2src. And we've improved the V4L2 stateful decoders a bit, to try to make the Raspberry Pi 3 and 4 experience better; that included the new dynamic resolution change method from the specification, and HDR10.

There was quite a lot of work on closed captioning. I kept just a few lines, but basically: there's a muxer for CEA-608, the CC combiner has been greatly improved, there's a CEA-608 renderer overlay, and it looks like the development of that is moving toward Rust now. And there's a converter from the older closed-caption standard to the newer one in the Rust plugins repository.

We've actually retired some stuff this year. gst-omx is now gone: it was no longer used by Raspberry Pi, we never implemented any Android support because that was not exactly a real OMX, and basically there had been no contributions for years, so we decided to remove it.
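And here's that unixfd sketch — a minimal C example with a sender and a receiver half. I'm assuming a socket-path property on the elements here, so double-check the real property names with gst-inspect-1.0:

```c
/* Minimal sketch of zero-copy IPC with the new unixfd elements (from
 * gst-plugins-rs). Run one process with "send" and another without
 * arguments. The socket-path property name is an assumption; verify
 * with: gst-inspect-1.0 unixfdsink */
#include <gst/gst.h>

int main(int argc, char **argv) {
  gst_init(&argc, &argv);

  const gchar *desc;
  if (argc > 1 && g_str_equal(argv[1], "send"))
    desc = "videotestsrc is-live=true "
           "! video/x-raw,format=I420,width=640,height=480 "
           "! unixfdsink socket-path=/tmp/gst-demo.sock";
  else
    desc = "unixfdsrc socket-path=/tmp/gst-demo.sock "
           "! videoconvert ! autovideosink";

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch(desc, &err);
  if (!pipeline) {
    g_printerr("failed to build pipeline: %s\n", err->message);
    g_clear_error(&err);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  /* Run until error or end of stream. */
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  gst_message_unref(msg);
  gst_object_unref(bus);

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}
```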
Kate, which is based on libtiger: it's basically becoming difficult to even get libtiger, so we were like, yeah, let's stop supporting that. And finally, we removed the use of GSlice all over the code base, simply because GSlice has been outperformed by the system heaps now — so we use the fastest one, which is also the simplest one: malloc is faster than GSlice.

We have future plans. I have no intention to actually spoil anything, but I'd like to underline that lately there's an effort I've started to rewrite the RTP stack of GStreamer, mostly for security reasons, because it deals with a lot of network input. But it's also an opportunity to fix some of the intrinsic performance issues of the current implementation, so we really hope that it's going to be better. It's definitely multi-year work ahead of us before it replaces the existing stack. I also personally strongly consider contributing a replacement for our parsers. Our bitstream parsers have been responsible for 50% of the security issues this year. A lot of that is just the AV1 parser — I think it's the fourth in a row, so we've had four bugfix releases made quickly because of the AV1 parser, and the first one actually came with a nice code-execution example from the researcher, so it's very concerning.

If you want to learn more about these things — I've really only been scratching the surface — all our GStreamer Conferences across the years are recorded by Ubicast, and you can go watch them to get more detail. We had one in October, which is all fresh and covers a lot of this. And please, if you want to get more in touch, there's going to be a GStreamer Conference again this year in October, and it's most likely going to be in Montreal, actually — first time out of Europe. Thank you, and we have three minutes for questions, I think. Five? Oh, okay. Thank you.

Sure. Can you say a bit more about the W3C Media Source Extensions — why you chose that, and whether you're seeing any activity around it? So the question is, can I give more context around the W3C MSE work. Our use case — basically my team did this — was that we had some people who wanted to use Shaka Player, which is a JavaScript player, but without the browser, because their device is not strong enough to host a browser: there's not enough RAM, there's not enough CPU. But they really wanted to share the same player, and the idea was that the first step was to make an API that is basically MSE inside GStreamer. We didn't create it from scratch — we ported it from WebKit — and with those objects it is then really easy to expose a JavaScript API that you can use in Node or something like that.

How hard is it for someone who knows the C API quite well to get started with writing Rust plugins? So the question is, how hard is it for someone who knows the GStreamer C API well to write plugins in Rust. Do you have a better answer than me on that? I don't know — I've personally been in kernel development for the last year, so I haven't really been into the Rust thing. But from my experience, from my tutorial trials: if you really know GStreamer, because we have such great bindings and you have so many examples now, it's not that hard.
The hard part is basically the discussion you have with the compiler, which is telling you that you don't really understand what you're doing initially, that you don't really understand ownership — and that's part of Rust. You really get intimate with the compiler.

But could you say: if you want to develop a new plugin, should you go to Rust, or should you still stick with C for the time being? I think that's the best recommendation to give. So, if you were to develop a new plugin, should you go to Rust? I think my answer is yes. Yes, it's great to have new plugins in Rust.

Are there any plans for rewriting any of the existing work in Rust? You'll have to repeat the question for me. Are there any plans for writing some of the core of GStreamer in Rust? So the question is whether there are plans to rewrite some of the core in Rust. For now, no. I mean, we don't consider the RTP stack as core, because it's actually a plugin. So there's no current effort — nobody has actually shouted out that they were doing that. There are a few things that could be in Rust: like, Sebastian actually made the PTP helper in Rust, which was kind of autonomous and independent, so it fits. But yeah, in the short term, no.

I think we're good. Well, thank you very much. Thank you.