[00:00.000 --> 00:15.000] Okay, so we start again. Our next talk is about Liquidsoap. Please welcome Romain. [00:15.000 --> 00:20.960] Hello and good morning. I'm really happy to be here after not two but three years, actually. [00:20.960 --> 00:27.400] We gave a talk here before the whole thing happened that I don't want to talk about. [00:27.400 --> 00:32.880] A lot of things have happened, and I want to go back and talk about what we did, both in the code [00:32.880 --> 00:38.560] and with the community, and the kind of things that happened. So yeah, first of all, for those [00:38.560 --> 00:43.040] who weren't there at the first talk: what is Liquidsoap? It's a programming language. [00:43.040 --> 00:48.680] Technically, it's a scripting language. It's statically typed with inferred [00:48.680 --> 00:54.520] types. So if you're familiar with TypeScript, it's like TypeScript, but you don't have to write the [00:54.520 --> 01:01.080] types; everything here is inferred to be a string, an integer, and so on. What does the language do? It allows [01:01.080 --> 01:10.240] you to create online streams, and it's not a low-level tool. We delegate that part, and I'll talk a [01:10.240 --> 01:16.040] lot about it. What we want is to empower the user to reason about the tool. That's what the language [01:16.040 --> 01:21.520] really does well: programming logic, business logic. I want to play that song; I want to switch [01:21.520 --> 01:28.480] to a live source. So that's an example of a full script that we can use to run two outputs to [01:28.480 --> 01:35.440] Icecast, based on playlists, a file, a queue of requests, an input over HTTP, all sorts of stuff.
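A minimal sketch of such a script (the host, password, and paths here are made up; mksafe plays silence if the playlist ever fails):

```liquidsoap
# One playlist of files, streamed to a local Icecast mount.
radio = mksafe(playlist("/srv/music"))
output.icecast(%mp3, host="localhost", port=8000,
               password="hackme", mount="radio.mp3", radio)
```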
[01:35.440 --> 01:43.520] Yeah. So what has happened with Liquidsoap since that first talk in 2020? We worked with Radio France. [01:43.520 --> 01:49.840] That's the reason we came in 2020, and it was the starting point of a lot of new work, because it [01:49.840 --> 02:00.320] really created a new cycle of interest. We had a lot of community growth, which I'll reflect [02:00.320 --> 02:04.880] on in the first part of this talk. Then we did a lot of new features because, guess what, we also [02:04.880 --> 02:11.320] had a lot of time during that period, and we worked a lot on it. And I want to finish the talk by [02:11.320 --> 02:16.160] talking about future development and the challenges we foresee. So first of all, [02:16.160 --> 02:21.520] what happened with the community? I started looking back at the stats over the past three [02:21.520 --> 02:28.280] years, and I looked at the GitHub stars. It's nice, but it was growing pretty steadily, except for [02:28.280 --> 02:36.720] that little bump here when we did the 2.0 release. And I looked at the Slack channel, [02:36.720 --> 02:40.280] which I want to move off that platform, but anyway, it was also growing pretty linearly. [02:40.280 --> 02:44.760] I was like, it doesn't really seem like anything happened over the past three years. Then I looked at the [02:44.760 --> 02:50.400] issues, and that's where I was like, yeah, I remember that. What happened is, when the whole [02:50.400 --> 02:54.640] shutdown happened, because we are a project that enables people to communicate online, [02:54.640 --> 03:00.880] well, it was one of the places people wanted to go to communicate, to try to [03:00.880 --> 03:07.640] reach out, create links, maintain links. So we suddenly had a lot of people who were like, [03:07.640 --> 03:11.680] I want to do an online stream. I want to put music up for my friends to listen to. I want to do all [03:11.680 --> 03:16.200] these things. And the second effect: we had a lot of people with time. We're kind of one of [03:16.200 --> 03:19.760] those projects where you're like, oh, I like it, I'd like to look at it at some point, [03:19.760 --> 03:24.880] but I'm too busy. And suddenly people had time. So people started using it, submitting a lot of [03:24.880 --> 03:31.240] issues, and, well, we got busy. One of the things we did, because we couldn't have [03:31.240 --> 03:37.360] that kind of in-person meeting, is that Sam, the other core developer of the project, decided [03:37.360 --> 03:41.880] that we should do an online workshop, which was the thing to do. And it was really good, because [03:41.880 --> 03:46.360] what it really allowed us to do is get to meet our community. And I think one of the things that's [03:46.360 --> 03:51.000] really nice with this project is that it's a technical project, but a lot of our users are [03:51.000 --> 03:56.960] actually not technical people. They're people who run, say, a Burning Man radio at the Burning Man [03:56.960 --> 04:03.800] event. Some of the cute projects we had: the top one was a network of community radios [04:03.800 --> 04:08.920] in Barcelona, where they literally have trucks that they bring to different neighborhoods to [04:08.920 --> 04:17.440] do online reporting of what happens in the city. And the lower one was a Hungarian [04:17.440 --> 04:21.560] community radio. So it was really good to get to meet them, and I think we valued that a lot. [04:21.560 --> 04:27.920] Of course we also have industrial users, but this is one of the core motivations for [04:27.920 --> 04:34.520] the project for us. And then, eventually, another thing we did, again a byproduct of having a lot [04:34.520 --> 04:39.840] of time, and it was mainly Sam working on it, was to write a programming book specialized in [04:39.840 --> 04:45.760] audio and media streaming, and how to use Liquidsoap for it. And it was very useful on many [04:45.760 --> 04:50.560] levels. One of them is that it forced us to rethink our API and reorganize it. You say: [04:50.560 --> 04:55.080] can we do that? What's a good example of doing that? Well, actually, it's not clear; let's make a nice [04:55.080 --> 05:00.080] module that can do it easily, and document it. And it also helped users be more [05:00.080 --> 05:05.760] confident because, as I said, most of our users are not programmers. So they come to this, [05:05.760 --> 05:11.440] and they're like, whoa, I've never touched a line of code, how do I do this? So that book was a really good [05:11.440 --> 05:20.920] starting point to get people interested and more confident with the project. So what did we do [05:20.920 --> 05:27.640] in terms of new features? That's the gist of the work we did. First of all, [05:27.640 --> 05:34.680] we did a lot of language changes, because for a long time our focus was more on features: we [05:34.680 --> 05:39.040] did a lot of outputs, a lot of inputs, a lot of different interesting operators. But when you [05:39.040 --> 05:47.600] start wanting to implement things that are more complex, you also need a powerful language [05:47.600 --> 05:52.920] and toolbox. And we also did a lot of FFmpeg integration. So those are the two things I'm [05:52.920 --> 05:58.360] going to talk about next. And first, the language changes.
So yeah, more expressivity. You want to enable [05:58.360 --> 06:03.760] your users to write code that is nice and readable, that they can understand, that is powerful, [06:03.760 --> 06:08.240] but that doesn't need a million lines to do a simple thing. So, simple things like this. [06:08.240 --> 06:15.440] These are spread patterns, you know, to split up a list. You have a list, you want to get the [06:15.440 --> 06:20.960] first element, the last element, and whatever else is left as a remainder. We could do that before, [06:20.960 --> 06:27.760] but we had to use functions, and it was complicated. This really helps users just visualize the code, [06:27.760 --> 06:32.720] understand what it is doing, and write far fewer lines. [06:32.720 --> 06:42.000] We also implemented a lot of values that were missing. For instance, null values. A lot of this [06:42.000 --> 06:47.760] will be reminiscent, of course, of dynamic scripting languages like JavaScript and others. We [06:47.760 --> 06:52.840] didn't have a null value. We added one, along with a nice operator that says: if you have a [06:52.840 --> 06:59.240] nullable x, and x is not null, it's a string, then you get x; if it's null, you get the string "foo". [06:59.240 --> 07:06.160] Stuff like that. When you start writing actual usable code, you need these things, and they're [07:06.160 --> 07:13.520] very useful. You can also write functions with optional arguments and tell exactly whether the [07:13.520 --> 07:23.040] user passed an argument. These are just basic programming language features. The other thing [07:23.040 --> 07:30.800] we did that was very useful is the new method syntax. It basically means that any value in the [07:30.800 --> 07:37.280] language can have methods, or attributes, attached to it. So here you have a string, and that string [07:37.280 --> 07:45.200] has two methods, well, attributes here: a duration, a floating-point number, and a BPM. It's basically an object- [07:45.200 --> 07:52.000] oriented idea, but not really objects. More like a JavaScript hash, I guess. And that means you [07:52.000 --> 07:58.560] can query song.duration and get the floating-point duration, or you can print the song title [07:58.560 --> 08:06.160] as a string using the underlying value. That was really useful, because when you want to create [08:06.160 --> 08:11.520] something that's easy for users, you need to structure your language. You need to have modules, [08:11.520 --> 08:16.160] and you need to have functions in those modules. Another typical case is that now you can [08:16.160 --> 08:22.080] specialize things. So sources have different functions. All sources have a skip [08:22.080 --> 08:28.720] function, which means you can go to the next track in your stream, but you may have sources [08:28.720 --> 08:35.680] with specific functions. For instance, a source that can have metadata inserted, [08:35.680 --> 08:41.360] created with the insert_metadata operator, gets a new method, also called insert_metadata, [08:41.920 --> 08:49.280] and now you can use it. So instead of a billion generic API calls, [08:49.280 --> 08:56.960] you can really start attaching specific uses to specific values, and it makes things more compact, [08:57.680 --> 09:05.040] more specialized, and very useful for cleaning up your API.
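A small sketch of those features side by side, assuming a recent 2.x syntax; every name and value here is made up:

```liquidsoap
# Spread pattern: take the head of a list, keep the rest.
let [first, ...rest] = ["intro.mp3", "a.mp3", "b.mp3"]
print("first track: #{first}")
ignore(rest)

# Null values: ?? provides the fallback when no argument was passed.
def greet(~name=null()) =
  "Hello, " ^ (name ?? "stranger")
end
print(greet())
print(greet(name="Romain"))

# Attributes attached to a plain string: song still prints as its title.
song = "My Song".{duration=252.3, bpm=128.}
print("#{song} lasts #{song.duration} seconds")

# Source specialization: insert_metadata returns a source carrying
# an extra insert_metadata method.
s = insert_metadata(single("/srv/jingle.mp3"))
s.insert_metadata([("title", "Hello FOSDEM")])
```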
Another thing we did after that, [09:05.040 --> 09:09.680] with the methods, is that now we were able to describe high-level things. So one of those [09:09.680 --> 09:15.680] things that's really painful in a statically typed language is parsing JSON. Why? Because JSON can [09:15.680 --> 09:21.760] be anything, and we really want to know that name is a string; we really want to know that version is [09:21.760 --> 09:26.400] a version. In a language like OCaml, you have to parse an object, iterate through the keys, [09:26.400 --> 09:30.160] validate that a value is a string, and in every branch you need to think about what you're doing. [09:31.280 --> 09:36.160] And our users, again, are not programmers. So what do we do? We try to find a primitive that's [09:37.120 --> 09:43.280] readable and easy to use. So what are you reading here? You're reading a parsing statement that [09:43.280 --> 09:49.120] says: I want to parse this, and I want to get a record that's going to have a name, a version, and another [09:49.120 --> 09:54.560] script attribute that is itself a record with a test field. It's what you get in a [09:54.560 --> 10:01.600] package.json for Node modules. And this is a type annotation that says I want the name to be a [10:01.600 --> 10:06.640] string, the version to be a string, and the script to be a record with a test that's a string. [10:07.440 --> 10:11.520] At runtime, we're going to take all this information, we're going to try to parse the JSON, [10:11.520 --> 10:16.240] and we're going to do one of two things. If we have what we need in the JSON, you can go on with your [10:16.240 --> 10:21.600] script. If we don't, we raise an error; you can catch the error and reason about it. But that [10:21.600 --> 10:27.200] really makes parsing JSON much easier on the user, which is important because, again, [10:27.200 --> 10:32.640] when you want to connect a lot of interconnected systems for streaming, you want to be able to [10:32.640 --> 10:38.960] talk to JSON APIs. So, pretty useful. And we did the same for YAML recently. So yeah, [10:38.960 --> 10:45.920] you can do settings now.
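A sketch of that parsing, assuming the 2.x let json.parse syntax; the file and fields mirror the package.json example above:

```liquidsoap
try
  # Require exactly the fields (and types) the script relies on.
  let json.parse (package : {
    name: string,
    version: string,
    script: {test: string}
  }) = file.contents("package.json")
  print("#{package.name} #{package.version}, test: #{package.script.test}")
catch err do
  # The parse failed: the JSON is missing something we asked for.
  print("invalid package.json: #{err}")
end
```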
Yeah, those are the new features. There's more, but I don't want to spend [10:45.920 --> 10:50.400] too much time on it, because the other part I want to talk about that's exciting is the FFmpeg [10:50.400 --> 10:54.560] integration. That really started after Radio France, because they had a strong need for it, [10:54.560 --> 11:00.160] and we started looking at the API of FFmpeg. And the thing with the API of FFmpeg is that [11:00.960 --> 11:08.560] it's really good. It's amazingly good. It's all very low-level, in C. For us, because I didn't [11:08.560 --> 11:15.280] mention it, Liquidsoap is implemented in OCaml, so we need to talk to a low-level implementation [11:15.280 --> 11:22.160] to really be efficient. We could talk to Perl or whatever, Python, but [11:22.160 --> 11:27.600] C is really simple for us. And it's also extremely abstract, and that really helps us because, again, we want [11:27.600 --> 11:33.440] to do what we do well, which is the programming language side: the logic, the typing, the functions. [11:33.440 --> 11:39.280] We're not specialists in multimedia implementation. We don't want to do that. We want [11:39.280 --> 11:45.440] to find people who do it better than us and interface with them. So that's the API for an FFmpeg [11:45.440 --> 11:50.640] packet, which is a tiny little bit of encoded data. It contains, I don't know, an MP3 frame, [11:50.640 --> 11:56.640] a video I- or B-frame, all the abstract things I don't want to know about. But it's going to [11:56.640 --> 12:04.320] tell me two things. It's going to tell me: this is your data, and this is your presentation [12:04.320 --> 12:10.160] timestamp. That thing there tells me the time at which you want to insert this packet [12:10.160 --> 12:15.120] into your stream, and this is the data. That's all I need to know, because then we can build a stream [12:15.120 --> 12:19.120] with it. We can pass the packets around to our different operators without even knowing what they [12:19.120 --> 12:25.360] contain. So that's why we started [12:25.360 --> 12:32.080] implementing a new encoder that basically reflects everything you see as a parameter [12:32.080 --> 12:39.600] on the FFmpeg command line; we think we support all of it as options in those encoders. [12:40.720 --> 12:45.360] And the reason we think that is that another thing FFmpeg does really well [12:45.360 --> 12:53.360] is describing its API. I'm sorry, that's not very readable, but that's a C structure that has [12:53.360 --> 13:03.520] all the parameter names for the H.264 encoder, with descriptions, types, and minimum and [13:03.520 --> 13:09.920] maximum values: everything we need to basically write an interface to it. It also does it for filters. [13:10.880 --> 13:16.160] Again, not very readable, sorry about that, but it's basically a programmatic interface [13:16.160 --> 13:22.640] to everything you need to know about parameters in FFmpeg. That's also not readable. Great. [13:22.640 --> 13:29.520] So then, what we can do: this is an FFmpeg filter exposed in Liquidsoap. And if you could [13:29.520 --> 13:34.960] read it, you would see that every parameter of the filter, like speed, is there: a floating-point number, [13:34.960 --> 13:40.800] optional, with no default value. Some have a default value; sometimes they don't. [13:41.360 --> 13:47.760] We get this information from the FFmpeg C API, and we can plug into it immediately and be very [13:47.760 --> 13:53.760] confident that we're using it correctly. And one of the things it allows us to do is to actually [13:53.760 --> 14:00.240] have scripted manipulation of FFmpeg primitives like filters. Say we take a source and we want to [14:00.240 --> 14:06.880] define a filter that is a flanger followed by a high-pass, and then output it. You need a graph; [14:06.880 --> 14:11.280] that's just part of the C API. You need a source; you create an audio input from it on the [14:11.280 --> 14:15.920] FFmpeg side, that's what they call an audio input. You pass it to the filters with their parameters. [14:15.920 --> 14:20.880] Everything is typed here, so we can check that each value is right. And then you create an output, [14:21.760 --> 14:28.560] run that, and you have a high-level description of your filter. I don't know if you have all manipulated [14:28.560 --> 14:35.120] FFmpeg filters, but when you want to do complex filters, their filtergraph description syntax is [14:35.120 --> 14:39.920] pretty hard to read. I'm used to that, so I'm biased, I can read those things more easily, but also [14:39.920 --> 14:46.480] this is kind of more descriptive. And it's typed, too.
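That first filter, sketched out following the shape of the ffmpeg.filter API; the parameter values are made up:

```liquidsoap
s = mksafe(playlist("/srv/music"))

def flanger_highpass(s) =
  def mkfilter(graph) =
    # FFmpeg side: an audio input node, two filters, one output node.
    a = ffmpeg.filter.audio.input(graph, s)
    a = ffmpeg.filter.flanger(graph, a, delay=10.)
    a = ffmpeg.filter.highpass(graph, a, frequency=4000.)
    ffmpeg.filter.audio.output(graph, a)
  end
  ffmpeg.filter.create(mkfilter)
end

s = flanger_highpass(s)
```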
So this is another filter. It takes audio and splits [14:46.480 --> 14:52.000] it into two sources: one of them is going to go through a flanger, the other one through a high-pass. [14:52.640 --> 14:55.600] This is some conversion that was required, I don't know why. And then you merge them back. [14:56.320 --> 15:00.880] So we're now describing a graph that branches out, does two filterings, and comes back together. [15:01.920 --> 15:04.000] This is a simple one, but you could use this to do, I don't know, [15:04.000 --> 15:12.400] a multi-band compressor, for instance. You can do that in FFmpeg too; this is just a little bit more [15:12.400 --> 15:18.960] structured and readable, and also type-safe. So next, I want to talk about how we implemented that. [15:19.840 --> 15:26.800] Yeah, I still have some time. This is the timeline for us: infinite time. We start here, [15:26.800 --> 15:32.320] and we go all the way there. If you use your imagination, all these little horizontal dots [15:32.320 --> 15:39.200] are audio packets that came from FFmpeg; the vertical ones are video frames. This is the stream of data [15:40.080 --> 15:46.160] that you're sending to an Icecast server, or anything else. What we do is find the lowest common [15:46.160 --> 15:53.280] denominator between the video rate and the audio rate: we need to find a little chunk of time that [15:53.280 --> 15:58.400] will contain a whole number of both audio samples and video frames. Most of the time, with the parameters we have [15:58.400 --> 16:05.200] internally, it is 0.04 seconds. That's what we call our frame. And then the idea of the [16:05.200 --> 16:11.920] streaming loop, once you've parsed the script and prepared all your outputs, is to just recreate that frame [16:11.920 --> 16:17.040] every 0.04 seconds, infinitely many times. That just creates your stream. And so, [16:18.720 --> 16:23.280] here's an example. We have a simple script. It has two outputs: you want to save to a file and [16:23.280 --> 16:28.960] send to an Icecast server. fallback is an operator that takes the first available source. [16:28.960 --> 16:34.480] The first one is an input.harbor: it's one of the operators we have that can [16:34.480 --> 16:40.960] receive Icecast source clients. So, let's say you want a DJ to connect to your radio. You can direct them [16:40.960 --> 16:46.880] to this input, and when it starts streaming, the fallback will stream that data immediately. If it's not [16:46.880 --> 16:51.680] available, we have a queue of requests that you can push to. Let's say you want to send a track to [16:51.680 --> 16:56.080] be played right after the current one: every now and then, you can send a request here. [16:56.080 --> 17:01.440] Otherwise, we have a playlist of files in a directory. And if that is not available, [17:01.440 --> 17:05.200] we have one last fallback, just in case everything fails: something is going to say, [17:05.200 --> 17:10.400] I don't know, we're having technical issues, please come back.
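That script, sketched out (paths, mount names, and passwords are made up):

```liquidsoap
live = input.harbor("live")              # DJs connect here as Icecast sources
requests = request.queue(id="requests")  # tracks pushed on demand
music = playlist("/srv/music")           # files from a directory
emergency = single("/srv/technical_difficulties.mp3")  # can never fail

radio = fallback(track_sensitive=false,
                 [live, requests, music, emergency])

output.file(%mp3, "/srv/archive/radio.mp3", radio)
output.icecast(%mp3, host="localhost", port=8000,
               password="hackme", mount="radio.mp3", radio)
```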
So now we're going to run the [17:11.600 --> 17:18.160] streaming algorithm and see how we do that. output.file starts. It always goes from [17:18.160 --> 17:23.840] the output down to the inputs, and the reason is that all of this is dynamic. There's no [17:23.840 --> 17:28.720] point asking the sources at the bottom to prepare data: they might not be used, because, [17:29.360 --> 17:34.480] up in the graph, the fallback might choose to use just one of them. So we have to start from the [17:34.480 --> 17:42.800] output and bring the frame down. So I have an empty frame for this streaming cycle, and I want [17:42.800 --> 17:48.800] to fill it up with 0.04 seconds of data. I go to the fallback and say, hey, can you fill up this [17:48.800 --> 17:55.680] frame? The fallback is like, sure, let me ask first: input.harbor? Not available. But request.queue [17:55.680 --> 18:00.800] has something in the queue, actually. Let me pass it down, pass it down to this operator. [18:00.800 --> 18:06.080] request.queue partially fills the frame: it added a little bit of audio and one video frame. [18:06.960 --> 18:12.720] The way you can think about it is that it was just finishing a track. Remember, request.queue takes [18:12.720 --> 18:17.200] files when you want to play them. So, I don't know, it has just finished playing the jingle or the [18:17.200 --> 18:23.520] commercial that you wanted, and that's it, it's done. So it's a partial fill of the frame, [18:23.520 --> 18:31.600] in which case control comes back to the fallback, which says: I need more. The playlist is not available; [18:31.600 --> 18:36.000] for some reason, the directory is empty. So we go back to the single and say, hey, single, [18:36.000 --> 18:41.760] can you fill this frame? One of the things we do when we start the script is that we actually [18:41.760 --> 18:46.240] double-check that we have at least one operator here that's always going to be available. So [18:46.240 --> 18:51.600] that's what happens here. The fallback is being used, and single is like, sure, I've got a file, I've prepared [18:51.600 --> 18:57.120] it, I can decode it. Boom. It finishes filling up the frame, and then the frame goes all the way up the tree. [18:57.840 --> 19:03.120] We're ready to encode it and save it to the file. Now comes the second one, [19:03.920 --> 19:08.880] which is the Icecast output. Same thing. It's like, hey, I need to send data to my Icecast server. [19:08.880 --> 19:14.400] It goes back to the fallback. But then, what we do here, of course, is cache things. [19:14.400 --> 19:20.000] We know that the fallback has already produced a whole frame in this cycle. We have saved it. We can just fill the frame [19:20.000 --> 19:28.400] from the cache, encode it if needed, and send it to Icecast. So the thing that's really [19:28.400 --> 19:34.000] nice with this algorithm is that, again, we don't really care about what's in the frame. We just [19:34.000 --> 19:38.720] care that we know we can fill it up and we know how much is filled. And then we can pass it [19:38.720 --> 19:46.960] around. So these things can actually be FFmpeg encoded packets, no problem. Or they can be raw PCM; [19:46.960 --> 19:51.200] then we have to encode them. So again, we are just looking at things from a [19:51.200 --> 19:58.560] high-level perspective. Now, if the time for that whole cycle was t, we have two possibilities. [19:58.560 --> 20:06.000] If we generated the 0.04 seconds in less than 0.04 seconds of computer time, it means that we run [20:06.000 --> 20:10.960] faster than real time. We can sleep a little bit, because we want to generate things in real time. [20:11.920 --> 20:16.480] If not, we have a problem. We need to catch up, so we need to run the loop again as fast as [20:16.480 --> 20:20.800] possible. Basically, if you're in the red, you have a problem. It means that, I don't know, [20:20.800 --> 20:25.280] encoding takes too long, something happened in your script, you cannot produce in real time. What [20:25.280 --> 20:31.440] we want is to be in the green all the time, so that we know we can deliver the content at a [20:31.440 --> 20:40.640] real-time rate all the time. So that's how it works.
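For instance, since frames can carry FFmpeg packets, you can encode once and let both outputs copy the same packets. A sketch assuming the %ffmpeg encoder and the ffmpeg.encode helpers from 2.x; host, mount, and paths are made up:

```liquidsoap
radio = mksafe(playlist("/srv/music"))

# Encode to AAC once, up front...
radio = ffmpeg.encode.audio(%ffmpeg(%audio(codec="aac", b="128k")), radio)

# ...so both outputs only copy packets; no second encoding happens.
enc = %ffmpeg(format="adts", %audio.copy)
output.file(enc, "/srv/archive/radio.aac", radio)
output.icecast(enc, host="localhost", port=8000,
               password="hackme", mount="radio.aac", radio)
```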
Now, because, as I told you, we can now [20:40.640 --> 20:47.280] use encoded content, we're having a few problems that I won't have time to fully describe [20:47.280 --> 20:54.000] here. But essentially, sometimes we need a lower-level understanding of what's [20:54.000 --> 20:58.800] happening in the bitstream. Basically, in FFmpeg, you have something called extradata. [20:58.800 --> 21:06.480] If you take an MP4 file, everything that is needed to decode, like the Huffman tables, is in the header: [21:06.480 --> 21:10.640] it's communicated first, and then all the packets, the frames after that, don't carry it. But if you [21:10.640 --> 21:16.640] take MPEG-TS, because MPEG-TS doesn't have a global header, every frame is going to carry that data. [21:16.640 --> 21:22.000] And that was a problem, because now we do things that FFmpeg doesn't do, which is a live switch [21:22.000 --> 21:29.280] between two different bitstreams. This is an RTMP input; this is a file, an MP4. And when we live- [21:29.280 --> 21:35.280] switch, if we started with this one, for instance, the muxer from FFmpeg might say, oh, you know what, [21:36.400 --> 21:41.840] I already have the... no: if we started with this one, the muxer will say, I know that all the frames [21:41.840 --> 21:46.480] are going to carry the private data that I need, I can go on with it. And then we live-switch to the other one, [21:46.480 --> 21:50.480] and suddenly we start receiving frames that don't have it. And the muxer is like, whoops, [21:50.480 --> 21:54.400] I cannot do this. So we have to insert bitstream filters here to make sure the data is always present. [21:54.400 --> 21:59.600] Which is a problem for us, because it means that the user has to think about low-level stuff. [21:59.600 --> 22:04.000] And that's one of the questions I keep wondering about: whether FFmpeg might find [22:04.000 --> 22:07.760] one of those beautiful abstractions they have to alleviate that kind of problem.
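A sketch of that workaround, assuming FFmpeg's bitstream filters are exposed under ffmpeg.filter.bitstream as in recent 2.x; the input setup and paths are made up, and this only applies when the sources carry copied (still encoded) H.264 packets:

```liquidsoap
# A live RTMP input and an MP4 file we may live-switch between.
live = input.ffmpeg(format="live_flv", "rtmp://127.0.0.1/live/stream")
movie = single("/srv/show.mp4")

# Rewrite the file's packets to Annex B so the parameter sets travel
# inside the bitstream, keeping the MPEG-TS muxer happy after a switch.
movie = ffmpeg.filter.bitstream.h264_mp4toannexb(movie)

radio = fallback(track_sensitive=false, [live, movie])
```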
[22:09.120 --> 22:15.040] All right, almost done; I'm going to finish real quick. We have a 2.x release coming with tracks, so you [22:15.040 --> 22:20.640] can manipulate and remix different tracks. Let's say you take an MKV with French, English, and Italian [22:20.640 --> 22:26.720] soundtracks and encode them in different formats. We want to rewrite the whole clocking [22:26.720 --> 22:31.360] and streaming system, because OCaml 5.0 does concurrency; it's pretty exciting. We want to think about [22:31.360 --> 22:38.080] JavaScript and Wasm, because we can, and why not. And I'm very interested to see what [22:38.080 --> 22:45.360] VLC does, because that's coming up next, and how they do it. Long-term development, though, [22:45.360 --> 22:51.440] is what we are wondering about, because we have grown the community a lot, but most of our users [22:51.440 --> 22:56.160] are not programmers, and our implementation language is OCaml, which is still a niche language. [22:56.160 --> 22:59.840] And we're two developers, Sam and myself. So one of the questions: you know, the project is [22:59.840 --> 23:06.480] 20 years old, we're 40 years old; at some point, we need to think about moving on. So what we have [23:06.480 --> 23:12.000] done so far is a lot of automation, which is very powerful. It allows us to focus on the code [23:12.000 --> 23:16.240] and have fewer chores. We have automated releases, and CI runs all the tests [23:16.240 --> 23:20.800] every time. We have increased the number of automated tests we have, so that we catch everything [23:20.800 --> 23:26.160] very quickly instead of relying on a lot of manual testing. But it remains a challenge to [23:26.160 --> 23:32.720] think about how we can bring in more developers: people who like OCaml and like radio are a pretty small intersection [23:32.720 --> 23:38.720] of two niche things. And that's it for me. Thank you very much for your time, and maybe there are questions. [23:38.720 --> 24:04.720] (Moderator) So we have time for a couple of questions, yes? [24:04.720 --> 24:12.720] (Audience) Yeah, so once you go from the demuxer to the decoder, it's expected to reformat the extradata. [24:12.720 --> 24:18.720] Then the same thing on the muxer side: if the expected output format is something else, [24:18.720 --> 24:26.720] it will actually tell you that you're expected to insert a bitstream filter to convert it [24:26.720 --> 24:41.720] forward. (Romain) Yeah, that was actually an answer. There are APIs in FFmpeg to inform the user of the [24:41.720 --> 24:45.720] C API whether or not they need to insert those filters, and I see that there is automatic [24:45.720 --> 24:49.720] insertion of those filters in the code. I guess I have to take another pass at making [24:49.720 --> 24:54.720] sure I understand when and how. Regretfully, for this presentation I didn't dive in again, [24:54.720 --> 24:57.720] and I remember that sometimes it becomes a little bit intricate. But thank you very much. [24:57.720 --> 25:00.720] I will definitely have a good look. [25:00.720 --> 25:10.720] (Audience) Is it possible to control the running script using external automation, especially for things like editing running playlists? [25:10.720 --> 25:18.720] (Romain) Yeah, absolutely. So, the question is: are there any tools to control a running script from the outside? [25:18.720 --> 25:24.720] We have a lot of different options. The traditional one, the old one, was a telnet connection. [25:24.720 --> 25:29.720] But we have a fully featured HTTP server, so you can write your own API endpoints. [25:29.720 --> 25:34.720] And basically, because every source now has its own methods attached to it, [25:34.720 --> 25:37.720] and you can run scripts, you can basically take a source and say, [25:37.720 --> 25:42.720] okay, there's a function that inserts metadata, and plug that into everything else that's running. [25:42.720 --> 25:47.720] We're a programming language. So yeah, we have a lot of different options for that, for sure.
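A sketch of that kind of control, combining a server command with a source method; the command name and metadata are made up:

```liquidsoap
radio = mksafe(playlist("/srv/music"))
radio = insert_metadata(radio)

# "radio.title <something>" on the telnet/server interface
# pushes new metadata into the running stream.
def set_title(arg) =
  radio.insert_metadata([("title", arg)])
  "Done!"
end
server.register(namespace="radio",
                description="Update the current title.",
                usage="title <new title>",
                "title", set_title)

output.icecast(%mp3, host="localhost", port=8000,
               password="hackme", mount="radio.mp3", radio)
```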
[25:47.720 --> 25:55.720] (Audience) Can you use Liquidsoap to do static operations, not to compare files, [25:55.720 --> 26:03.720] and not with the goal of doing streaming, just to apply operations on many... [26:03.720 --> 26:06.720] (Romain) Yes. The answer is yes, because the clock doesn't have to be... [26:06.720 --> 26:11.720] Oh yeah: can you use Liquidsoap to do offline processing faster than real time, and not just in real time? [26:11.720 --> 26:14.720] The answer is yes, because you can run a clock that says, [26:14.720 --> 26:17.720] I don't want you to be real-time, I want you to run as fast as possible. [26:17.720 --> 26:21.720] But the sub-answer is that it's not a common case; it's not commonly tested. [26:21.720 --> 26:25.720] And that's part of the thing I want to bring in more when we rewrite the streaming engine. [26:25.720 --> 26:30.720] Because, for instance, a request needs to resolve a file, which needs to make a network request, [26:30.720 --> 26:32.720] download the file, and test that you can decode it. [26:32.720 --> 26:34.720] Most of the time we expect that to take a while, [26:34.720 --> 26:37.720] but it hasn't been tested a lot when you want to run very, very fast, [26:37.720 --> 26:40.720] and you run into race conditions that are very different. [26:40.720 --> 26:42.720] (Moderator) Okay. Thank you very much. [26:42.720 --> 26:44.720] (Romain) Thank you.
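A sketch of that offline use, following the documented clock API (paths made up): detached from real time, the script transcodes a playlist as fast as the machine allows and shuts down when the playlist ends.

```liquidsoap
s = playlist(loop=false, "/srv/music")
clock.assign_new(sync="none", [s])  # don't pace this pipeline to wall time
output.file(fallible=true, on_stop=shutdown, %mp3, "/tmp/out.mp3", s)
```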