[00:00.000 --> 00:07.000] So next up is Julien Pivotto. [00:07.000 --> 00:11.000] Hello, hello. [00:11.000 --> 00:19.000] So I am Julien Pivotto, I am one of the maintainers of Prometheus and I have been doing monitoring [00:19.000 --> 00:28.000] for more than 10 years now, and I am now working at O11y, where we are basically doing Prometheus [00:28.000 --> 00:29.000] support. [00:29.000 --> 00:34.000] We are covering the Prometheus ecosystem, so any open source tool around Prometheus, [00:34.000 --> 00:40.000] and we are also, like many other companies in the area, contributing upstream to the open [00:40.000 --> 00:44.000] source Prometheus development and the ecosystem. [00:44.000 --> 00:46.000] But let's go into the toolkit. [00:46.000 --> 00:51.000] So I have been working with Prometheus for about 5 years now. [00:51.000 --> 00:56.000] I have seen many people using it in good and less good ways. [00:56.000 --> 00:59.000] Well, anyone is free to use it the way they want. [00:59.000 --> 01:04.000] But I also know what is missing in the tools that Prometheus is offering. [01:04.000 --> 01:10.000] And I have seen some people struggling to use Prometheus or having very specific issues, [01:10.000 --> 01:16.000] and it's not always super easy to debug your setup or to get more information about it. [01:16.000 --> 01:23.000] And I know that no one wants more tools, no one wants more sidecars. [01:23.000 --> 01:28.000] Part of my work in Prometheus upstream has been to include more service discoveries so you [01:28.000 --> 01:31.000] don't have to run that many sidecars. [01:31.000 --> 01:37.000] We have also removed the need for sidecars that write to the local file system of Prometheus by [01:37.000 --> 01:40.000] enabling HTTP service discovery. [01:40.000 --> 01:45.000] So upstream we are working to reduce the number of tools that you need to run Prometheus in [01:45.000 --> 01:47.000] your own environment. [01:47.000 --> 01:50.000] But still, some things don't fit upstream. [01:50.000 --> 01:53.000] Some tools are very specific. [01:53.000 --> 01:56.000] You might only need them in one place, and you know Prometheus is big. [01:56.000 --> 02:01.000] It's used by a lot of people, and once you put something in Prometheus itself, [02:01.000 --> 02:06.000] well, you have to maintain it forever and you cannot change it the way you like. [02:06.000 --> 02:13.000] So some features that we have used only at one or two different customers, [02:13.000 --> 02:17.000] we wanted to open source them and make them available for everyone, [02:17.000 --> 02:24.000] so people could see them, and if other people could use them, [02:24.000 --> 02:27.000] it would be just great to have a place to do that. [02:27.000 --> 02:33.000] And instead of releasing each tool individually, which makes them very difficult to discover [02:33.000 --> 02:36.000] because you need to go to 10 different websites to get your tools, [02:36.000 --> 02:40.000] we branded that as the o11y toolkit. [02:40.000 --> 02:45.000] So the goal of the toolkit is to provide an open source toolkit [02:45.000 --> 02:49.000] that you can just get the way you want. [02:49.000 --> 02:54.000] You can just download it directly from our website and it will help you with your Prometheus stack.
[02:54.000 --> 02:59.000] We will probably extend that to other tools in the ecosystem as we see the need, [02:59.000 --> 03:05.000] but the goal is that you can debug your system and also enhance it in some way, [03:05.000 --> 03:11.000] so that more people can use it and you can find solutions for some problems that you might have. [03:11.000 --> 03:17.000] It's licensed under the Apache 2.0 license, just like Prometheus is. [03:17.000 --> 03:23.000] The principle of the toolkit is that every tool is available individually, [03:23.000 --> 03:26.000] so you don't need to download all of them. [03:26.000 --> 03:28.000] You can just get the one that you need. [03:28.000 --> 03:33.000] We have both command line tools and tools that run directly in the browser. [03:33.000 --> 03:36.000] We'll get to that later. [03:36.000 --> 03:41.000] We tried to use the HTTP API rather than local files when it's possible, [03:41.000 --> 03:44.000] because when you run in Kubernetes or in cloud environments, [03:44.000 --> 03:48.000] well, you don't always have access to the local file system. [03:48.000 --> 03:52.000] And also all the tools have a common look and feel. [03:52.000 --> 03:56.000] So if you provide a configuration to connect to your Prometheus server [03:56.000 --> 04:05.000] with a username and a password, you can reuse the same file for the other tools in the toolkit. [04:05.000 --> 04:07.000] So let's go to the toolkit. [04:07.000 --> 04:09.000] The toolkit is available at o11y.tools. [04:09.000 --> 04:15.000] We currently have six tools at this moment and I will demo all of them now [04:15.000 --> 04:18.000] so you can have an idea about what they are doing. [04:18.000 --> 04:21.000] The first tool is called csv-to-targets. [04:21.000 --> 04:27.000] I was at a customer and the network team, they wanted to start using Prometheus, [04:27.000 --> 04:33.000] but really, keeping an up-to-date list of the switches that they had [04:33.000 --> 04:38.000] is still a very difficult challenge in infrastructure, and also CMDBs [04:38.000 --> 04:41.000] and all that kind of tools, when you work in a really big corporation, [04:41.000 --> 04:44.000] are a lot more difficult to interact with. [04:44.000 --> 04:47.000] But somehow they can produce CSVs. [04:47.000 --> 04:51.000] So we developed a tool, csv-to-targets. [04:51.000 --> 04:56.000] Basically, Prometheus can work with file service discovery. [04:56.000 --> 05:00.000] So you can have a JSON file with all your targets, [05:00.000 --> 05:06.000] and then Prometheus can use that JSON file and scrape the targets in the JSON file. [05:06.000 --> 05:13.000] But when you talk to those people, they are like, what is JSON, how do I use it? [05:13.000 --> 05:19.000] And then when you start explaining that in the file SD, [05:19.000 --> 05:24.000] if you want different labels, you need to duplicate the labels section [05:24.000 --> 05:26.000] multiple times, they are completely lost. [05:26.000 --> 05:31.000] So we made that tool, so you just create a CSV file. [05:38.000 --> 05:41.000] So the first column is going to be the address. [05:41.000 --> 05:46.000] We can just leave it empty, and then you can have labels [05:46.000 --> 05:52.000] like the datacenter and the rack and whatever you want. [05:52.000 --> 06:00.000] And then you can just put in all the switches that you have, [06:00.000 --> 06:07.000] completely making something up. [06:07.000 --> 06:14.000] London 1, and the rack, with the rack S5, I don't know.
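A minimal sketch of the kind of CSV being typed in this demo, assuming the first column holds the target address (with its header cell left empty, as mentioned above) and the remaining header cells name the labels; the hostnames, ports and values below are made up for illustration:

```
,datacenter,rack
switch-lon1-01:9116,London-1,S5
switch-lon1-02:9116,London-1,S5
switch-par1-01:9116,Paris-1,B2
```

Each row would then become one target with those column-based labels in the generated file SD JSON.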
[06:14.000 --> 06:19.000] And then they can just duplicate that. [06:19.000 --> 06:26.000] And maybe they have one in Europe as well, in Paris, also changing the rack. [06:26.000 --> 06:36.000] And then you call the csv-to-targets tool. [06:37.000 --> 06:55.000] So you point it at the gateway.csv and you have your file that gets written. [06:55.000 --> 07:00.000] So you have all the targets with all the labels, which are column-based. [07:00.000 --> 07:04.000] So it's quite easy to get a file that Prometheus can use, [07:04.000 --> 07:13.000] and then you just need to change your Prometheus config. [07:13.000 --> 07:39.000] So you add the job and the file SD configs, with files set to gateway.json. [07:39.000 --> 07:57.000] Ah, targets.json, I called it. [07:57.000 --> 08:03.000] And it works, it works the first time, so it's great. [08:03.000 --> 08:09.000] I have my Prometheus server, and on my targets page [08:09.000 --> 08:13.000] I have my gateways with the correct labels from the CSV file. [08:13.000 --> 08:22.000] And because Prometheus is using inotify, when you change the CSV and you run the tool again, [08:22.000 --> 08:24.000] it will just pick up the new labels. [08:24.000 --> 08:29.000] So this was a very easy way for them to add and update their targets. [08:29.000 --> 08:32.000] They can even choose the labels that they want to use. [08:32.000 --> 08:38.000] So it's quite easy for them to add and change targets. [08:38.000 --> 08:40.000] It's still a very specific tool. [08:40.000 --> 08:43.000] And we also plan to add HTTP support to that tool, [08:43.000 --> 08:46.000] so you don't actually need to be on the local file system of Prometheus. [08:46.000 --> 08:50.000] But those people, they know everything about their systems, [08:50.000 --> 08:56.000] they know CSV, but JSON is still one step further for them to learn. [08:59.000 --> 09:02.000] So just one word about how you can get those tools. [09:02.000 --> 09:08.000] At the bottom of each page, you will see that you can download them, [09:08.000 --> 09:13.000] and we also provide deb files, RPMs, Docker images. [09:13.000 --> 09:19.000] And you can also use Nix if you want to, except that this will probably download the world. [09:19.000 --> 09:21.000] But it works. [09:21.000 --> 09:26.000] So you can just get the tools easily, each one individually. [09:26.000 --> 09:35.000] If you are on a Debian or a CentOS machine, it's just as easy to get the package as well. [09:35.000 --> 09:39.000] So it's easy to get and to install. [09:39.000 --> 09:42.000] The second tool is oy-expose. [09:42.000 --> 09:47.000] That tool is just taking a metrics file and exposing it for Prometheus to consume. [09:47.000 --> 09:52.000] The idea is that sometimes you have scripts and you don't want to use the push gateway, [09:52.000 --> 09:54.000] because then you need to secure the push gateway. [09:54.000 --> 10:01.000] So this is basically the text collector feature of the node exporter, [10:01.000 --> 10:05.000] except that the issue with the text collector feature of the node exporter [10:05.000 --> 10:09.000] is that the node exporter needs to be able to read the file. [10:09.000 --> 10:12.000] It means that you need to run it with the correct user.
[10:12.000 --> 10:15.000] And if you have three different applications running as their own user [10:15.000 --> 10:20.000] and writing their own files, you don't really want to run the node exporter [10:20.000 --> 10:22.000] as well just to read all those files. [10:22.000 --> 10:27.000] So this is just a small HTTP server to expose your metrics. [10:27.000 --> 10:35.000] In addition to just the python -m http.server that you might run to expose your metrics, [10:35.000 --> 10:37.000] which would also work, [10:37.000 --> 10:42.000] this also adds the node-exporter-specific metrics about the modification time of the file. [10:42.000 --> 10:49.000] So you can still monitor whether a file has changed, just like you can do it in the node exporter. [10:49.000 --> 10:57.000] So again, let's start it. Thank you. [10:57.000 --> 10:58.000] Someone is following. [10:58.000 --> 11:07.000] Now I know that you can read the screen. [11:07.000 --> 11:16.000] So I have a file with a very small metric which I will call fosdem_talk_running 1, [11:16.000 --> 11:19.000] because my talk is running, right? [11:19.000 --> 11:28.000] And now I can just expose this using the oy-expose tool. [11:28.000 --> 11:33.000] The defaults are that it will listen on port 9099. [11:33.000 --> 11:41.000] And the file it will take is, okay, I will just set it. [11:41.000 --> 11:46.000] As you can see, you will see the same log messages as you see in Prometheus, [11:46.000 --> 11:50.000] because we are able to use the same configuration file as Prometheus. [11:50.000 --> 11:55.000] So if you have your TLS config file for Prometheus and you know the format, [11:55.000 --> 12:00.000] you can also just use that and protect this with a password or anything like that. [12:00.000 --> 12:07.000] So, port 9099. [12:07.000 --> 12:12.000] Not found, because I need the /metrics path. [12:12.000 --> 12:17.000] So you see that I have my fosdem_talk_running 1 and then I have the Go metrics [12:17.000 --> 12:18.000] as well. [12:18.000 --> 12:25.000] I also have the node-exporter-specific metrics like the textfile mtime [12:25.000 --> 12:33.000] seconds and all the specific items of the node exporter. [12:33.000 --> 12:37.000] We also have the same flags as in the node exporter. [12:37.000 --> 12:42.000] So you can disable the exporter metrics. [12:42.000 --> 12:47.000] Like this, and now I just have the fosdem_talk_running and the modification time [12:47.000 --> 12:49.000] and whether or not there is an error. [12:49.000 --> 12:55.000] So you don't have the Go garbage collection metrics for those very small components. [12:55.000 --> 13:00.000] This is a feature that is also available on some of the exporters. [13:00.000 --> 13:09.000] I think at least one other exporter has it, I don't remember which one, but it's becoming a thing now. [13:09.000 --> 13:14.000] So that tool was also used by a customer. [13:14.000 --> 13:18.000] And the last tool is a bit more technical. [13:18.000 --> 13:21.000] It is oy-scrape-jitter. [13:21.000 --> 13:28.000] Basically, the story is that one of the customers had a pair of HA Prometheus servers doing the [13:28.000 --> 13:31.000] exact same work, like everything was the same. [13:31.000 --> 13:36.000] But on one of the servers, the blocks took twice the size, more or less. [13:36.000 --> 13:39.000] It was like, yeah, why is that server running out of disk space?
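As a side note on the expose demo above, before the scrape-jitter story continues: a sketch of the kind of metrics file involved, in the Prometheus text exposition format. The HELP and TYPE comments are an addition here (the demo file omits them, which is exactly what the metrics linter flags later in the talk), and only the default port 9099 and the /metrics path come from the demo itself:

```
# Write a tiny metrics file in the Prometheus text exposition format.
cat > fosdem.prom <<'EOF'
# HELP fosdem_talk_running Whether the FOSDEM talk is currently running.
# TYPE fosdem_talk_running gauge
fosdem_talk_running 1
EOF

# With oy-expose serving that file, the metrics are reachable on the
# default port mentioned in the demo:
curl http://localhost:9099/metrics
```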
[13:39.000 --> 13:47.000] And it was really slowly increasing until it reached the maximum at the full retention time. [13:47.000 --> 13:52.000] So it did not grow that much after that, but still we had blocks that were twice as big. [13:52.000 --> 14:01.000] And after investigating, we noticed that basically that Prometheus server was not compressing the [14:02.000 --> 14:06.000] chunks properly, because there was a difference: [14:06.000 --> 14:15.000] the scrapes were not aligned, which means that instead of taking the metrics every 30 seconds, [14:15.000 --> 14:22.000] it was taking the metrics every 30 seconds and 10 milliseconds or so. [14:22.000 --> 14:27.000] That is a big issue in Prometheus when it happens all the time, [14:27.000 --> 14:30.000] because it means that we are not compressing the data. [14:30.000 --> 14:36.000] And instead of taking a really, really negligible amount of bytes to store the timestamp, [14:36.000 --> 14:41.000] you need to use a lot more storage to do that. [14:41.000 --> 14:44.000] And it was really noticeable. [14:44.000 --> 14:46.000] So we have oy-scrape-jitter. [14:46.000 --> 14:47.000] Oh, is it working? [14:47.000 --> 14:51.000] It will look at the timestamps of all the up metrics in your Prometheus server, [14:51.000 --> 14:56.000] and it will tell you, OK, are they aligned or not in the Prometheus TSDB. [14:56.000 --> 15:00.000] So we do that by querying Prometheus directly. [15:00.000 --> 15:02.000] You don't need to have access to the chunks, [15:02.000 --> 15:08.000] but we can tell you, just by running the tool, if the scrapes are aligned. [15:08.000 --> 15:19.000] So this is what it tells me now on my laptop, [15:19.000 --> 15:23.000] and hopefully my laptop has five aligned targets. [15:23.000 --> 15:29.000] So it did look at all the metrics that I have, [15:29.000 --> 15:32.000] and it is just happy with that. [15:32.000 --> 15:36.000] You have a lot of options there to better understand the output, [15:36.000 --> 15:39.000] so you can decide to only log the unaligned targets. [15:39.000 --> 15:44.000] So you can see whether maybe one of the targets you have is more problematic than the others, [15:44.000 --> 15:47.000] and you can also plot the targets. [15:47.000 --> 16:03.000] So if I run the plot, I can see the... [16:03.000 --> 16:06.000] I picked a really good name today. [16:06.000 --> 16:08.000] file.png. [16:08.000 --> 16:11.000] So you can see, anyway, all the targets are aligned, [16:11.000 --> 16:13.000] so you don't see anything. [16:13.000 --> 16:22.000] Let me show you what we had at that specific customer... [16:22.000 --> 16:28.000] So that specific customer had 150,000 scrapes that were not aligned, [16:28.000 --> 16:30.000] with a delay of a few milliseconds. [16:30.000 --> 16:32.000] And using the tools... [16:32.000 --> 16:38.000] So the first thing that we did when we noticed that is we implemented [16:39.000 --> 16:43.000] a feature so that by default, if you are a few milliseconds off, [16:43.000 --> 16:46.000] we will just say, okay, we will just align your scrapes, [16:46.000 --> 16:49.000] so you don't lose bytes just for nothing. [16:49.000 --> 16:50.000] You can disable that. [16:50.000 --> 16:52.000] You can change the jitter tolerance. [16:52.000 --> 16:58.000] And for that customer, we actually went and increased that tolerance a bit more, [16:58.000 --> 17:01.000] because the tool told us, okay, that server is really overloaded.
[17:01.000 --> 17:07.000] And if you increase the tolerance, then you can gain a lot of disk space. [17:07.000 --> 17:12.000] So by default now, Prometheus is doing that with a smaller number of milliseconds. [17:12.000 --> 17:17.000] So if you really have a bit of jitter that is not expected, [17:17.000 --> 17:21.000] as long as it's small enough, you will not see the difference. [17:21.000 --> 17:28.000] The alignment will still be done correctly. [17:28.000 --> 17:31.000] So the outcome for that customer, when we implemented that, [17:31.000 --> 17:36.000] is that we had a 30% disk usage reduction. [17:36.000 --> 17:39.000] Now, increasing the jitter tolerance is not always the right solution. [17:39.000 --> 17:41.000] So if you are able to just... [17:41.000 --> 17:45.000] It very often means that your Prometheus is running out of CPU. [17:45.000 --> 17:48.000] So it will look like it's working really fine. [17:48.000 --> 17:50.000] You can still run queries on your Prometheus server, [17:50.000 --> 17:55.000] but the ingestion path is getting somehow stuck [17:55.000 --> 18:00.000] and it cannot handle the scraping on time. [18:00.000 --> 18:05.000] So it is very often a sign that your Prometheus server [18:05.000 --> 18:13.000] might need a bit more CPU power, even if it still works fine. [18:13.000 --> 18:16.000] So, I did speak about the configuration. [18:16.000 --> 18:20.000] We support everything that you might need to configure [18:20.000 --> 18:22.000] to access your Prometheus server. [18:22.000 --> 18:26.000] So basic authentication, authorization, [18:26.000 --> 18:30.000] basically every HTTP mechanism, [18:30.000 --> 18:32.000] and it is using the Prometheus library. [18:32.000 --> 18:37.000] So we did not invent a new configuration format. [18:37.000 --> 18:40.000] So if Prometheus can scrape it... [18:40.000 --> 18:43.000] Well, if Prometheus can scrape it, [18:43.000 --> 18:45.000] yeah, you can just use the same configuration [18:45.000 --> 18:52.000] to access the APIs that you are using. [18:52.000 --> 18:54.000] And the file... [18:54.000 --> 18:56.000] So we are following the Prometheus security guidelines, [18:56.000 --> 19:01.000] so we don't let you put a username and password on the command line [19:01.000 --> 19:04.000] or in environment variables, we just follow the Prometheus way, [19:04.000 --> 19:08.000] so you have the config file for connecting to Prometheus, [19:08.000 --> 19:14.000] except the URL, which you can just pass as a command line argument. [19:14.000 --> 19:16.000] So if we get back to oy-scrape-jitter, [19:16.000 --> 19:18.000] you can just put any URL there. [19:18.000 --> 19:27.000] So if I go to prometheus.demo.do.prometheus.io, [19:27.000 --> 19:36.000] I think that's the one. [19:36.000 --> 19:39.000] Then I was able to run that against the demo Prometheus server, [19:39.000 --> 19:45.000] which tells me that I have 26 milliseconds maximum. [19:45.000 --> 19:48.000] I have four aligned targets and six unaligned targets, [19:48.000 --> 19:50.000] so if that becomes an issue, [19:50.000 --> 19:56.000] we might have to look at the number of actually not aligned targets [19:56.000 --> 20:00.000] to see if that's a small issue or a big issue that we need to figure out, [20:00.000 --> 20:03.000] because sometimes it's only a few of your scrapes and it's fine, [20:03.000 --> 20:06.000] but sometimes you have only 10% of the scrapes [20:06.000 --> 20:11.000] which are efficiently stored on the disk.
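Since the tools reuse Prometheus's own HTTP client configuration, a connection file for oy-scrape-jitter (or any other tool in the kit) could look roughly like the sketch below; the file name and the credentials are illustrative, only the general format comes from the Prometheus library mentioned above, and the server URL itself is passed on the command line as described:

```
# http-config.yml -- Prometheus-style HTTP client settings (illustrative values)
basic_auth:
  username: fosdem
  password: demo
tls_config:
  insecure_skip_verify: false
```

Against a public endpoint like the demo server above, no credentials are needed and the URL alone is enough.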
[20:11.000 --> 20:17.000] Okay, so that was it for the initial tools that we have on the command line, [20:17.000 --> 20:22.000] but we have a few more tools that run directly in the browser. [20:22.000 --> 20:25.000] So we made a few modifications upstream [20:25.000 --> 20:29.000] so that you can compile Prometheus to run in the browser, [20:29.000 --> 20:31.000] so it's compiled to Wasm. [20:31.000 --> 20:37.000] So basically we don't have any API server behind the web tools, [20:37.000 --> 20:42.000] we directly run the Prometheus engine in the browser. [20:42.000 --> 20:45.000] So the first tool is the metrics linter. [20:45.000 --> 20:48.000] It is using the client_golang metrics linter, [20:48.000 --> 20:51.000] so you can also do that with promtool, I think, [20:51.000 --> 20:55.000] but if you are quickly developing and you don't have access to Prometheus, [20:55.000 --> 20:58.000] you can just enter your metrics on that page, [20:58.000 --> 21:04.000] and in the browser itself, it will just validate your metrics. [21:05.000 --> 21:13.000] If I go again with my metric fosdem_talk_running 1, [21:13.000 --> 21:21.000] I can lint it and I will see, okay, that my fosdem_talk_running has no help text. [21:21.000 --> 21:25.000] If I have a syntax issue, [21:25.000 --> 21:29.000] well, my metrics cannot be parsed and I will also see an error. [21:29.000 --> 21:31.000] So that way, if you are developing a script [21:31.000 --> 21:34.000] and you want a quick feedback loop, [21:34.000 --> 21:38.000] you can just go to that page, hit the lint button, [21:38.000 --> 21:41.000] and it will give you some linting information about your metrics. [21:41.000 --> 21:45.000] This is running the same code as your Prometheus server is running, [21:45.000 --> 21:50.000] so you will get the same output, and it's really nice to have that [21:50.000 --> 21:56.000] right at hand: you just go to o11y.tools and you can lint your metrics. [21:56.000 --> 21:58.000] The second tool is the password generator. [21:58.000 --> 22:03.000] If you want to secure the connection to your Prometheus server, [22:03.000 --> 22:10.000] you probably want to have a web configuration file with passwords for Prometheus, [22:10.000 --> 22:13.000] but the issue is that this is using bcrypt, [22:13.000 --> 22:16.000] and bcrypt is not always easy to generate, [22:16.000 --> 22:19.000] so this is the hashing algorithm for the passwords, [22:19.000 --> 22:24.000] so we made that tool so you can enter your usernames and your password [22:24.000 --> 22:27.000] and it will generate your bcrypt hash.
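The output of such a generator follows the Prometheus web configuration (web.yml) format; a rough sketch with the username used in the demo that follows, where the hash is a truncated placeholder rather than a real bcrypt value:

```
# web.yml -- basic auth users for Prometheus or an exporter
basic_auth_users:
  fosdem: "$2y$10$..."  # bcrypt hash of the password "demo" (placeholder)
```

Prometheus and the exporters built on the exporter-toolkit then load this file through the --web.config.file flag, which is what the demo below does.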
[22:27.000 --> 22:35.000] What we see in most organizations is that they will use more sophisticated SSO, [22:35.000 --> 22:39.000] so they will use another proxy in front of Prometheus, [22:39.000 --> 22:43.000] but you can still secure your Prometheus server, or secure the communication [22:43.000 --> 22:49.000] between that proxy and the Prometheus server using TLS, for example, [22:49.000 --> 22:51.000] so you have a lot of possibilities. [22:51.000 --> 22:58.000] But smaller organizations can quickly and easily add the web.yml to the exporters themselves, [22:58.000 --> 23:03.000] so if you just want a password-protected node exporter, it's also possible. [23:03.000 --> 23:09.000] So if I put the username fosdem and the password demo, [23:09.000 --> 23:14.000] and I generate the file, I have the web configuration file. [23:14.000 --> 23:23.000] So I take this web.yml and I paste it into a file, [23:23.000 --> 23:30.000] and I will now launch Prometheus with that file. [23:33.000 --> 23:38.000] If I find the option, OK, web config file, [23:38.000 --> 23:44.000] and now if I go to my Prometheus server, [23:44.000 --> 23:50.000] it asks me for a username and a password, fosdem and demo, [23:50.000 --> 23:56.000] so it is very easy to generate usernames and passwords. [23:56.000 --> 24:02.000] As it is fully open source, the deployment procedure is also open source, [24:02.000 --> 24:07.000] and if you open your browser's network tab, you will see that there is no connection to the server, [24:07.000 --> 24:11.000] because it's all generated by JavaScript and, well, [24:11.000 --> 24:13.000] Go compiled to run in your own browser, [24:13.000 --> 24:16.000] so you are not sending us your password whatsoever, [24:16.000 --> 24:19.000] which is an issue with most of the bcrypt generators [24:19.000 --> 24:27.000] you will find online: you enter your password, it gets sent off, and it comes back by magic, but... [24:27.000 --> 24:30.000] And the last tool is the PromQL parser. [24:30.000 --> 24:36.000] It's the same, it's running completely in the browser, and you can just run your query through the PromQL parser. [24:36.000 --> 24:45.000] So if I take a PromQL query, [24:45.000 --> 24:50.000] let's see what we have. [24:50.000 --> 25:01.000] So we take a PromQL query and put it in the PromQL parser, [25:01.000 --> 25:04.000] and this is running the actual PromQL parser from Prometheus, [25:04.000 --> 25:07.000] and it also returns you the prettified PromQL expression. [25:07.000 --> 25:12.000] So if your PromQL query is a bit messy, you parse it [25:12.000 --> 25:14.000] and then you will see the prettified PromQL. [25:14.000 --> 25:19.000] If you have a more complex query, you will see the prettified PromQL over multiple lines. [25:19.000 --> 25:23.000] This is actually a feature that has now also been implemented upstream, [25:23.000 --> 25:26.000] so let me show it to you because you might not know about it, [25:26.000 --> 25:30.000] but you don't need that tool if you have a Prometheus server at hand, [25:31.000 --> 25:36.000] because, let me close this menu.
[25:36.000 --> 25:38.000] Next to the expression browser in Prometheus, [25:38.000 --> 25:43.000] you now have that button which basically formats the expression, [25:43.000 --> 25:48.000] so it will also tell you if your expression is not correctly written or very strange to read. [25:48.000 --> 25:51.000] So if I click this, it does not execute the query, [25:51.000 --> 25:54.000] it just gives you a nice formatting of the query. [25:54.000 --> 25:59.000] So we implemented it in the tool just a few weeks before it landed upstream, [25:59.000 --> 26:06.000] so that's why both versions exist, but still it's nice to have it in the browser. [26:06.000 --> 26:09.000] It was just fun to make that in the browser anyway. [26:15.000 --> 26:19.000] We are working on more tools, so we are looking at a tool that can tell you [26:19.000 --> 26:22.000] which alert can affect which target. [26:22.000 --> 26:26.000] So it's looking at all the metrics in one alert expression [26:26.000 --> 26:31.000] and it will try to tell you, [26:31.000 --> 26:34.000] that Prometheus target can be affected by that expression, [26:34.000 --> 26:38.000] or that expression is not affecting a single target, which is fine. [26:38.000 --> 26:44.000] It's a kind of dashboard for people who actually want a Nagios-style dashboard [26:44.000 --> 26:49.000] where they see, okay, if I have up == 0 with that selector, [26:49.000 --> 26:53.000] which targets will be affected. So that's something that we are working on, [26:53.000 --> 26:57.000] and as we see the support requests [26:57.000 --> 27:00.000] and what we are doing in the field, we might add more tools there, [27:00.000 --> 27:02.000] but as you have seen with the scrape jitter, [27:02.000 --> 27:07.000] we are also open to working directly upstream when it makes sense. [27:07.000 --> 27:12.000] So that's it. You can get the toolkit on GitHub or on o11y.tools, [27:12.000 --> 27:19.000] O-one-one-Y dot tools. So just have fun with the toolkit. If you need more tools, [27:19.000 --> 27:25.000] if you have built some things that maybe cannot be upstreamed, [27:25.000 --> 27:28.000] maybe it makes sense to have them in the toolkit, [27:28.000 --> 27:34.000] to play around with them and to have them widely available. [27:34.000 --> 27:35.000] Thank you. [27:35.000 --> 27:56.000] I'm wondering what the challenges were with the Wasm compilation. [27:56.000 --> 27:57.000] I think I tried something similar. [27:57.000 --> 28:02.000] I also can't remember where I failed, but it didn't work out for me. [28:03.000 --> 28:05.000] So, was that a question? [28:05.000 --> 28:09.000] Yeah, what were the challenges with the Wasm compilation? [28:09.000 --> 28:10.000] I think you said you had to modify things. [28:10.000 --> 28:14.000] Oh, yeah, so the challenge was to make the TSDB compile, [28:14.000 --> 28:17.000] because you cannot run the TSDB in the browser, [28:17.000 --> 28:21.000] so we are faking some of the file APIs [28:21.000 --> 28:29.000] when you compile to Wasm. [28:29.000 --> 28:32.000] Okay, thank you.
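On the Wasm question at the end: the generic way to build a Go program for the browser is sketched below. This is only the standard Go toolchain recipe, not the toolkit's actual build setup, and the package path is illustrative; the extra work mentioned in the answer (stubbing out the TSDB's file APIs) comes on top of this.

```
# Cross-compile a Go package to WebAssembly for the browser
# (the package path is illustrative).
GOOS=js GOARCH=wasm go build -o main.wasm ./cmd/mytool

# Copy the JavaScript shim shipped with the Go toolchain; it is needed
# to load and run main.wasm from an HTML page (path as in Go <= 1.23,
# newer releases ship it under lib/wasm).
cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .
```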