Hello, my name is Peter. I work for the Wikimedia Foundation on the quality and test team. I have about three minutes, so I'm going to show you a couple of things. In the team, we want to make sure that we find regressions, right? And the cool thing about it is that we keep all our performance metrics in the open, so you can go to our Grafana instance, grafana.wikimedia.org, and see our metrics. Now I'm going to do a live demo. Oh, that didn't work out so well. Let's see. Okay.

I'm going to show you four dashboards. We have our real user monitoring dashboard, with the data that we send from our real users back to Wikimedia. I suggest you go to that dashboard and look at our performance metrics. I think it's quite interesting, because not many big websites actually show their data. We also have another dashboard with all our synthetic tests; you can use the dropdowns to see the pages that we test and their performance data. That's more internal data, so maybe not so interesting for you.

So I have two more dashboards that are more interesting. We have the user network dashboard. Let's see. Here you can see what kind of network our users on Chrome have. We use the Network Information API and beacon the data back (there's a sketch of that idea at the end of this section). You can use the dropdown to see what kind of network our users are using, and if you scroll down, you can also see what kind of connection type they have. This is interesting because you can see what kind of connection different areas of the world have when they access Wikipedia.

And the last thing I want to show you is the CPU benchmark. As Beth said, this matters because different users have different devices, right? For some of our users, we run a small piece of JavaScript, measure the time it takes to run, and beacon that data back (a sketch of that follows below, too). So we can see what kind of performance different devices have for users all around the world, and we can compare it across devices. We use that data to tweak how we run our tests internally. If you go to that page, you can see how the devices used to access Wikipedia perform. Okay, that was all for me. Over to Dave.
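Peter doesn't show code, but the idea he describes is roughly this: read the connection data that Chrome exposes through the Network Information API and send it back with navigator.sendBeacon. A minimal TypeScript sketch; the endpoint URL and payload shape are invented for illustration, not Wikimedia's actual schema:

```typescript
// Fields mirror what navigator.connection exposes in Chrome.
interface NetworkInfoBeacon {
  effectiveType?: string;  // e.g. "slow-2g", "2g", "3g", "4g"
  connectionType?: string; // e.g. "wifi", "cellular", where exposed
  rtt?: number;            // estimated round-trip time in ms
  downlink?: number;       // estimated bandwidth in Mbit/s
}

function beaconNetworkInfo(endpoint: string): void {
  // The Network Information API is not available in all browsers
  // (notably Firefox and Safari), so feature-detect it.
  const connection = (navigator as any).connection;
  if (!connection) {
    return; // API unavailable; send nothing
  }
  const payload: NetworkInfoBeacon = {
    effectiveType: connection.effectiveType,
    connectionType: connection.type,
    rtt: connection.rtt,
    downlink: connection.downlink,
  };
  // sendBeacon queues the request so it survives page unload.
  navigator.sendBeacon(endpoint, JSON.stringify(payload));
}

// Hypothetical collector endpoint:
beaconNetworkInfo("/beacon/network-info");
```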
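The CPU benchmark could look something like the following sketch: a fixed piece of busy work, timed with performance.now(), with only the duration beaconed back. The workload and endpoint here are assumptions for illustration, not Wikimedia's actual benchmark.

```typescript
function runCpuBenchmark(): number {
  const start = performance.now();
  // Fixed, deterministic busy work; the computed value is irrelevant,
  // only how long the device takes to produce it.
  let x = 0;
  for (let i = 0; i < 1_000_000; i++) {
    x += Math.sqrt(i) * Math.atan(i);
  }
  // Reference x so the loop cannot be optimized away entirely.
  if (x === -1) console.log(x);
  return performance.now() - start;
}

const durationMs = runCpuBenchmark();
navigator.sendBeacon("/beacon/cpu-benchmark", JSON.stringify({ durationMs }));
```

Because the workload is identical for every user, the duration is directly comparable across devices, which is what lets the team map real-world device performance and tune their internal test hardware to match.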
Thanks, Peter. Hey, everybody, I'm Dave Hunt. I'm going to stand over here so I'm not blocking the screen. I'm the engineering manager for the performance tools team at Mozilla, and I'm going to show you a little bit about how we handle regressions for Firefox. We have page load tests and benchmarks. I'm going to use a real example of a recent regression, and I'm going to go pretty fast through these because I only have a few minutes.

So, obligatory slide with a quote from a famous person. Galileo Galilei said, "Measure what is measurable, and make measurable what is not so." I think this is something we try to do in our team.

Here, this is a performance alert. We have a bunch of tests that run whenever somebody commits something to mozilla-central, our repository for Firefox. When we notice a change in the baseline, we generate an alert. One of our performance sheriffs will be monitoring and triaging these alerts; in this case, this one was triaged by Andra. It shows you the magnitude of the regression and the tests that have alerted. I've filtered it down just for simplicity: this is Expedia, and we can see some of our speed index tests have regressed. The sheriff will then do some investigation.

This is one of those tests shown in graph view. You can see our baseline, and there was a change. The sheriff has come in here to do some retriggers and backfills, just to narrow the regression range and identify a likely culprit. Then the sheriff will file a bug in Bugzilla. Because we've identified the likely culprit, they will also set a needinfo flag, requesting further information from the author of that patch so that they're aware: it looks like there might have been a regression; maybe we need to back this out, or maybe we need to fix it. I'm also highlighting here the links through to one of our other tools, the Firefox Profiler. We provide as much as we can to the engineer so they can confirm that yes, it looks like it's their patch, and also take a deeper dive into what might have caused it.

Another tool that we have is PerfCompare. If engineers think they have a fix, or have something that might affect performance either positively or negatively, they can push it to our CI system, run the tests, and see a comparison. Here, this is again that example, the Expedia contentful speed index. This is the before, in this case the regression, and a patch that should fix it. We've run the tests multiple times, and we can see that the distribution of the results is smaller, which indicates that perhaps this is fixed. And it was.

We also alert on improvements. This is the alert that came in, probably a couple of days after the patch landed to fix it, or to back it out; I think this change was a change in how aggressively we garbage collect. We can also look at the graph view: we can see the period of time that we had that regression, and we can see that it is fixed and back to the baseline that we had before. We also capture videos, so again, another tool that is useful for the engineers to confirm that yes, there really is a regression. In this case, this is the fix: this is the slower run, and the improved, faster one.

I mentioned the Firefox Profiler. I encourage everybody, if you don't use it or haven't used it, check it out. Try it. Give us feedback. And finally, I just wanted to promote: Florian Quèze is talking in Janson at 1pm today, that's the main track, on Firefox profiling. You'll see an example of using the profiler for something other than necessarily web performance; it's a very versatile tool. That's it.
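To make the alerting idea concrete, here is a deliberately simplified sketch of detecting a baseline shift: compare the median of results before and after a push and alert when the relative change exceeds a threshold. The 2% threshold and window sizes are arbitrary assumptions; Mozilla's actual sheriffing pipeline (Perfherder) is considerably more sophisticated than this.

```typescript
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Alert when the median of results after a push differs from the
// median before it by more than the given relative threshold.
function shouldAlert(before: number[], after: number[], threshold = 0.02): boolean {
  const base = median(before);
  const next = median(after);
  return Math.abs(next - base) / base > threshold;
}

// Example: hypothetical speed index results (ms) around a suspect commit.
console.log(shouldAlert([1500, 1510, 1495, 1505], [1620, 1615, 1630, 1625])); // true
```

The same check fires for improvements as well as regressions, which matches what Dave describes: the alert that confirmed the fix landed a couple of days after the regression was backed out.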