[00:00.000 --> 00:12.200] Okay, thank you. We have two traditions here in the Go Dev Room. The first is that we start [00:12.200 --> 00:16.040] with the State of Go, and then around lunchtime we always have the next one, which is the [00:16.040 --> 00:29.920] State of Delve. So let's all get into Delve. Let's go debugging. [00:30.840 --> 00:42.760] Let's try this again. Hello, everybody. My name is Derek Parker. I am the author of [00:42.760 --> 00:49.800] Delve, and I continue to maintain the project along with my lead co-maintainer, Alessandro, [00:49.800 --> 00:58.720] who is also in the crowd today. And as mentioned, it's been kind of a tradition here at FOSDEM [00:58.840 --> 01:04.640] to piggyback on the State of Go and talk about the state of Delve. So this talk will [01:04.640 --> 01:09.560] be kind of a two-parter. I'll start with the state of Delve, and then I'll go into the [01:09.560 --> 01:15.960] main talk, which is debugging Go programs with eBPF. Now, if you're unfamiliar with what [01:15.960 --> 01:23.360] eBPF is as a technology, fret not, I will explain it in more detail [01:23.840 --> 01:30.920] throughout the course of the talk as we get into the real meat of everything. So [01:30.920 --> 01:35.160] just to introduce myself a little bit more, again, my name is Derek Parker. I'm a senior [01:35.160 --> 01:40.320] software engineer at Red Hat. If you would like to follow me, I am Derek the Daring on [01:40.320 --> 01:51.000] Twitter. And at Red Hat, I work on Delve and on Go itself. So the first thing that I want [01:51.040 --> 01:58.720] to talk about is the state of Delve. What I'll go through is essentially what's [01:58.720 --> 02:05.120] changed since the last FOSDEM, and actually since the last in-person FOSDEM. I was [02:05.120 --> 02:11.840] here in 2020 presenting a different talk before the world ended, and I'm happy [02:11.840 --> 02:17.320] to be here again in person with everybody and to be able to talk and catch up [02:17.400 --> 02:21.840] and present these things. So thanks, everybody, for coming and for attending this talk. I [02:21.840 --> 02:34.280] really appreciate it. One of the big milestones that I want to call out is that Delve [02:34.280 --> 02:40.000] turns nine this year. So to celebrate that, on the count of three, everybody in the room, [02:40.000 --> 02:45.880] we're going to sing happy birthday. One, two... I'm just kidding. Maybe next year for the [02:45.880 --> 02:51.000] 10th anniversary; I'll hold off on that for a little bit. But the Delve project was started [02:51.000 --> 03:00.720] in 2014, and yeah, it turns nine, still going strong. And I appreciate everybody who uses [03:00.720 --> 03:10.120] it and contributes to it. It's just really fantastic to see. So I'll go into some statistics [03:10.160 --> 03:15.400] about what's happened in the last couple of years. Since the last in-person [03:15.400 --> 03:26.000] FOSDEM, we've done 18 different releases. Now, the way we do releases of Delve is somewhat [03:26.000 --> 03:33.880] scheduled, somewhat ad hoc. We always produce a new release when the first release [03:33.880 --> 03:39.560] candidate of a new Go version comes out. So any time a new Go version comes out, we [03:39.560 --> 03:45.480] ensure that the day it's released, you can debug it. So once you compile your code and [03:45.480 --> 03:49.960] do everything, you have a debugger that's going to work with that version.
And then [03:49.960 --> 03:55.240] aside from that, we have minor releases that come out throughout the year, in [03:55.240 --> 04:02.800] between those releases, to fix bugs, add new features, things like that. So within that [04:02.840 --> 04:09.320] time frame, we've added support for numerous different Go versions: Go 1.14 all the way [04:09.320 --> 04:16.920] through 1.20. And as 1.20 was just released the other day, we've supported it since the [04:16.920 --> 04:24.000] first RC. So you always have a debugger to go through your code even before [04:24.000 --> 04:31.120] the official release actually comes out. During that time, we've also added four new [04:31.200 --> 04:38.360] operating system and architecture combinations. With Delve, we strive to enable you to debug [04:38.360 --> 04:43.320] on any operating system and architecture that Go itself supports. We're getting closer [04:43.320 --> 04:49.760] and closer to that goal with each passing release. So I'm proud to say that within the last [04:49.760 --> 04:57.240] few years we added support for four new platforms, and there are a few more already in the works [04:57.240 --> 05:04.960] that we'll be releasing later this year. I want to call out a few major new features [05:04.960 --> 05:12.360] that have been developed. The first is that we've integrated a DAP server into Delve, which [05:12.360 --> 05:18.880] is probably not something that's super relevant to everybody here unless you're the author [05:18.880 --> 05:25.080] of an editor or something like that. It's really for editor integration, but from [05:25.080 --> 05:31.240] a user's perspective, it really improves the usability of Delve within editors such [05:31.240 --> 05:41.240] as VS Code and things like that. We've added support for Apple Silicon, and that happened [05:41.240 --> 05:45.560] really quickly once we were able to get our hands on the hardware and everything [05:45.560 --> 05:51.640] like that. We added the ability to generate core dumps from running processes. So while [05:51.680 --> 05:56.800] you're in a debug session, you can generate a core dump ad hoc, save it away, and [05:56.800 --> 06:03.560] debug it later. We've added support for hardware watchpoints, which I think is [06:03.560 --> 06:11.000] a really, really cool feature, and kind of difficult to do with Go because of some [06:11.000 --> 06:16.920] internals of how the Go runtime looks at the stack and [06:17.000 --> 06:21.760] moves goroutine stacks as they grow, but we were able to [06:21.760 --> 06:27.000] implement them and get them working. If you're unaware of what hardware watchpoints [06:27.000 --> 06:31.640] are, it's a really cool feature where you can say: I want to watch this particular [06:31.640 --> 06:35.640] variable or this particular address in memory, and I want the debugger to [06:35.640 --> 06:42.840] stop any time that value is read or changed. So you're basically just telling [06:42.920 --> 06:48.000] the debugger what you want and letting it do the heavy lifting for you. Really cool [06:48.000 --> 06:55.560] feature. And as was just shown in the previous talk, we've improved some of the filtering [06:55.560 --> 07:00.720] and grouping for goroutine output. So you can filter by label, you can filter by all [07:00.720 --> 07:05.600] different kinds of things.
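As a rough sketch of what both of those features look like at the Delve prompt (the variable and label names here are invented, and the exact flag spellings are worth checking against help watch and help goroutines in your Delve version):

```
(dlv) watch -w main.counter                  # stop whenever main.counter is written
(dlv) watch -rw globalConfig.timeout         # stop on reads or writes
(dlv) goroutines -with label request-id=42   # show only goroutines carrying that pprof label
(dlv) goroutines -group label request-id     # bucket goroutines by the value of a label
```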
So in massively concurrent and parallel programs where you [07:05.600 --> 07:12.400] might have tons and tons of different goroutines, we've improved a lot of the introspection [07:12.400 --> 07:16.160] there, being able to filter things out and get the information that you really [07:16.160 --> 07:22.680] need. We've also added an experimental eBPF-based tracing backend. That's what I'm [07:22.680 --> 07:29.760] going to be talking about today. And we also added support for debuginfod. This [07:29.760 --> 07:34.200] is really cool for a lot of operating systems where maybe you're debugging a package that [07:34.200 --> 07:40.000] you installed via your package manager and the debug information is [07:40.040 --> 07:43.960] not included with the binary; maybe it's in a different package or something like that. [07:43.960 --> 07:48.320] We've integrated with debuginfod to be able to automatically download those debug [07:48.320 --> 07:54.520] packages for you so that you can have a fruitful and successful debugging session. And there's [07:54.520 --> 07:59.920] also been a lot more. If you want to look at all of the details, go ahead and check out [07:59.920 --> 08:05.640] the changelog in the repo. It details all of the changes that we've made. The next thing [08:05.680 --> 08:13.400] I want to talk about is a few upcoming features that I want to tease. One of the [08:13.400 --> 08:21.400] biggest ones is that we're working on support for two new architectures: PowerPC64LE and [08:21.400 --> 08:28.840] S390X. My colleague Alejandro is working on the PowerPC64LE port, and he's in the crowd [08:28.880 --> 08:36.040] as well, so thank you for your work on that. We're looking at some more improvements to [08:36.040 --> 08:39.760] the eBPF tracing backend; I'll go into some more detail on that as well during this talk. [08:39.760 --> 08:48.760] We're also working on the ability to debug multiple processes simultaneously. My co-maintainer [08:48.760 --> 08:54.840] Alessandro is working on that, and we're hoping to land it pretty soon. So if [08:55.720 --> 09:01.080] your process forks or otherwise creates new child processes, you'll be able to debug [09:01.080 --> 09:08.080] all of them within a single session. Another thing that we want to work on this year is [09:08.080 --> 09:13.880] improved function call support across architectures. That was a big feature that landed in Delve [09:13.880 --> 09:18.520] as well, the ability to call a function while you're debugging your program, but it's very [09:18.520 --> 09:23.480] architecture specific. So one of the things that we want to do throughout this year is [09:23.520 --> 09:29.200] improve support for that across different architectures. There's tons more. We're always [09:29.200 --> 09:36.200] working on new things, and we also always try to gather community and user feedback [09:36.440 --> 09:42.080] and things like that. So since I'm here and other maintainers of Delve are here, if you [09:42.080 --> 09:46.480] want to come and tell us something that you would like us to implement or something that [09:46.480 --> 09:53.480] you would like us to focus on, feel free to come chat with us. So with that said and done, [09:55.440 --> 10:01.640] I want to move on to the real portion of this talk, which is debugging and tracing Go programs [10:01.640 --> 10:08.640] with eBPF.
Now it's really cool that this talk comes right after the previous one, because [10:08.960 --> 10:14.840] I think the tracing feature in Delve is somewhat underutilized, and I think it's really good [10:14.840 --> 10:19.920] for debugging concurrent programs and seeing the interactions between goroutines as your [10:19.920 --> 10:26.840] program is actually running. If you're unfamiliar with what tracing is, I'll show [10:26.840 --> 10:32.520] a little demo, but essentially what we're talking about here is that instead of going into a full-on [10:32.520 --> 10:36.880] debug session, what you're really doing is spying on your program. If you're familiar [10:36.880 --> 10:41.720] with strace, it's the same concept, except for the functions that you're writing as opposed [10:41.720 --> 10:47.800] to the interactions with the kernel, like the system calls and things like that. So [10:47.800 --> 10:52.440] you can spy and see what functions are being executed, what their inputs are, [10:52.440 --> 10:58.080] what the outputs are, which goroutines are executing each function, and so on and so forth. [10:58.080 --> 11:05.080] So to show a little demo of it real quick, let me increase my screen size a little bit. [11:05.360 --> 11:12.360] It may still be hard for folks in the back to see, but hopefully that's good enough. [11:15.840 --> 11:20.680] What I've done here is, instead of typing everything directly on the console, I've created [11:20.680 --> 11:25.320] a little Makefile just so that you can see the commands up there and they don't [11:25.320 --> 11:29.080] disappear as I run them. The first thing that we're going to do is just [11:29.080 --> 11:35.640] run a simple trace. To do this, we use the trace sub-command of Delve, and what you [11:35.640 --> 11:42.640] provide to it as an argument is a regular expression; what Delve will do internally [11:42.760 --> 11:46.140] is set a trace point on any function that matches that regular expression. So you can [11:46.140 --> 11:53.140] do something like main.* to trace anything in the main package, and extrapolate that out [11:53.420 --> 11:59.740] to any other package. It's a really cool feature. So just to show how it works, [11:59.740 --> 12:05.740] we can go here and say make trace, and we see the output there. To explain the output [12:05.740 --> 12:12.740] a little bit: the single arrow is the call, the double [12:12.980 --> 12:19.980] arrow is the return. You can see that it labels which goroutine is running and calling [12:19.980 --> 12:23.660] that function, you can see the arguments to that function, and then you can also see [12:23.660 --> 12:28.260] the return value. So again, really cool and useful: if you have a bunch of different [12:28.260 --> 12:33.900] goroutines, you can see how they interact and what each goroutine is doing [12:33.900 --> 12:40.900] at any given time. Another option is that if you pass the stack [12:40.900 --> 12:46.220] flag and give it an argument, you can get a stack trace any time one of the trace points [12:46.220 --> 12:54.460] is hit. So if we say trace with stack, you see we get a similar output, but we [12:54.460 --> 13:00.220] get a stack trace as well. So you can see a little bit more detailed information [13:00.220 --> 13:07.220] as your program is being traced.
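As a minimal sketch of that workflow (the program and values here are invented for illustration, not the one used in the demo):

```go
// main.go: a tiny program to trace.
package main

import "fmt"

// add is the function we'll put a trace point on.
func add(a, b int) int {
	return a + b
}

func main() {
	fmt.Println(add(2, 3))
}
```

Running dlv trace main.add from the module directory (or dlv trace 'main\..*' to match the whole package) prints something along the lines of `> goroutine(1): main.add(2, 3)` when the function is called and `>> goroutine(1): main.add => (5)` when it returns; the exact formatting varies a bit between Delve versions. Adding --stack 5 appends a five-frame stack trace to every trace point hit.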
So the real meat of this talk is how we improved the tracing [13:07.460 --> 13:14.460] backend to make it more efficient, because especially when you're doing something [13:20.900 --> 13:26.140] like tracing, the lower the overhead the better. We don't want to make [13:26.140 --> 13:31.220] your program run significantly slower, because that's just going to frustrate you and it's [13:31.220 --> 13:34.180] going to take longer to get to root cause analysis, which is what you're really trying [13:34.180 --> 13:38.740] to do if you're using a debugger in the first place. So we'll talk quickly about how things [13:38.740 --> 13:45.500] are currently implemented and then how we can improve upon that using eBPF. Right [13:45.500 --> 13:52.500] now, or traditionally, Delve uses the ptrace syscall to implement the tracing back [13:53.380 --> 13:59.420] end. ptrace is used by pretty much every debugger and every [13:59.540 --> 14:06.300] kind of tool like this, and Delve is no exception. If you look at the man page it'll explain [14:06.300 --> 14:10.260] a little bit more about what it is, but it essentially allows you to control the execution [14:10.260 --> 14:16.060] of another process and examine its state, its memory, and things like that. [14:16.060 --> 14:23.060] The problem is that ptrace is slow, and it can be very slow. I ran some tests [14:23.580 --> 14:30.580] a while ago when I was implementing the first iteration of this eBPF backend, and I measured [14:33.920 --> 14:40.420] a simple program that executed in 23.7 microseconds. With the ptrace-based [14:40.420 --> 14:45.580] tracing, the traditional approach, the run time went up to 2.3 seconds. [14:45.580 --> 14:51.620] That's several orders of magnitude of overhead, which is definitely not what you want. But [14:51.660 --> 14:58.660] why is ptrace so slow? Part of the reason is syscall overhead. ptrace is [15:07.060 --> 15:12.260] a syscall, so whenever you invoke it, you trap into the kernel and you switch context. [15:12.260 --> 15:19.260] That has its own overhead, which can be pretty significant. And as I mentioned, [15:20.260 --> 15:26.380] the user-space/kernel context switching can be really expensive. [15:26.380 --> 15:33.380] It's amplified by the fact that ptrace is very fine-grained. When we're [15:38.260 --> 15:43.900] tracing these functions, we often have to make multiple ptrace calls per function entry [15:43.980 --> 15:48.820] and function exit. If you think about it, we need to read the registers, we need to [15:48.820 --> 15:55.420] read all of the different function arguments that are there; there's a bunch of different [15:55.420 --> 16:00.500] things that we need to do. So it balloons up really, really quickly, and we get into [16:00.500 --> 16:05.020] a situation where we're doing a ton of these user-space/kernel context switches [16:05.020 --> 16:12.020] every time you hit one of these trace points. And on top of that, all of these operations [16:14.180 --> 16:20.180] have to happen twice per function, right? Once on entry and once on exit. So it's a lot of [16:20.180 --> 16:26.220] overhead, a lot of context switching, essentially a lot of unnecessary work and a lot of work [16:26.220 --> 16:33.220] that just slows down your program and adds a lot of overhead.
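To make that "balloons up" point concrete, here is a rough, simplified sketch of the per-stop ptrace traffic a trace point implies on linux/amd64. This is illustrative only, not Delve's actual code, and the helper and its arguments are invented:

```go
// Package ptracecost illustrates why a ptrace-based trace point is expensive:
// every stop costs several syscalls, each one a user-space/kernel round trip.
package ptracecost

import "syscall"

// readArgsAtStop sketches the work done each time the tracee (already stopped
// under ptrace) hits a trace point. argAddrs would come from DWARF info.
func readArgsAtStop(pid int, argAddrs []uintptr) ([][]byte, error) {
	// Round trip 1: fetch registers to find the PC, stack pointer, register args.
	var regs syscall.PtraceRegs
	if err := syscall.PtraceGetRegs(pid, &regs); err != nil {
		return nil, err
	}

	// Round trips 2..N: read each argument word out of the tracee's memory.
	args := make([][]byte, 0, len(argAddrs))
	for _, addr := range argAddrs {
		buf := make([]byte, 8)
		if _, err := syscall.PtracePeekData(pid, addr, buf); err != nil {
			return nil, err
		}
		args = append(args, buf)
	}

	// Final round trip: resume the tracee until the next trace point.
	return args, syscall.PtraceCont(pid, 0)
}
```

Every one of those calls traps into the kernel, and the whole sequence runs twice per traced function call (entry and exit), which is where the orders-of-magnitude slowdown comes from.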
So the way that we can [16:35.500 --> 16:42.500] improve upon this and work around it is by using eBPF. eBPF is a lot more effective [16:44.220 --> 16:51.220] and efficient, a lot quicker at this kind of work. With the same task, again, as [16:51.500 --> 16:58.500] I mentioned before: the original program, 23 microseconds; with ptrace, 2.3 actual seconds; [16:58.540 --> 17:04.580] and with the eBPF-based tracing, around 683 microseconds, which is still measurable [17:04.580 --> 17:11.580] overhead but significantly less than the traditional method. So I've been [17:12.220 --> 17:19.220] talking about this technology a lot, eBPF, eBPF, eBPF, right? But what actually is it? [17:20.380 --> 17:27.380] eBPF is a technology that enables the kernel to run sandboxed programs directly. eBPF [17:30.740 --> 17:36.780] programs are written primarily in a limited subset of C; I'll get into some of the limitations [17:36.780 --> 17:43.780] later. The program gets compiled to bytecode and loaded into the kernel, where it's executed [17:44.100 --> 17:51.100] and JIT-compiled as it runs. And it has a lot of use cases: observability, networking, debugging, [17:52.260 --> 17:56.220] and a lot more. You'll hear a lot about eBPF; I'm sure a lot of folks in this room [17:56.220 --> 18:03.220] have already heard of it in some shape or form. It started as a technology [18:03.300 --> 18:09.800] for networking and kind of ballooned from there. Originally it was BPF, the [18:09.800 --> 18:16.800] Berkeley Packet Filter, which then became extended BPF. And [18:17.380 --> 18:21.420] now the acronym doesn't really mean anything anymore; eBPF is just eBPF, because it's way [18:21.420 --> 18:27.780] more than what it originally was. And the cool thing is that these programs that are [18:27.780 --> 18:31.220] loaded into the kernel can be triggered by certain events. I'll talk about how [18:31.900 --> 18:38.900] we can trigger those events ourselves, but they run in response to something happening. [18:41.740 --> 18:48.240] So why is eBPF so fast in comparison to the way we've traditionally been doing things? [18:48.240 --> 18:55.240] The first thing is that these eBPF programs run in the kernel, so there's a lot less context [18:55.440 --> 19:02.440] switching overhead. We're already in the kernel, so we don't have to keep asking the kernel [19:02.440 --> 19:09.440] for more and more information to get what we actually want. Relative to making [19:09.440 --> 19:16.240] a bunch of syscalls, the context switching is a lot cheaper. You get small, [19:16.240 --> 19:23.240] targeted programs that execute really quickly and can do everything that you need [19:24.240 --> 19:31.240] or want to do in essentially one shot. And a single program can execute many tasks that [19:31.800 --> 19:38.800] we would traditionally use multiple ptrace calls for: you have access to the current [19:38.920 --> 19:45.920] registers, you can read memory, and a lot of other things like that. [19:46.920 --> 19:53.920] Now, when I was looking to implement this backend, I had a few requirements that I wanted [19:54.160 --> 20:00.920] to make sure could be satisfied with this eBPF-based approach. The first one was the ability [20:00.920 --> 20:06.240] to trace arbitrary functions. As a user, you just want to say, I want to trace everything [20:06.240 --> 20:13.240] in the main package, or I want to trace this specific function, or whatever.
This new backend [20:13.400 --> 20:18.800] had to be able to satisfy that requirement as well. We had to be able to retrieve the [20:18.800 --> 20:25.800] goroutine ID from within the eBPF program. We had to be able to read function input arguments, [20:26.840 --> 20:33.840] and we had to be able to read function return values. Now, let's talk a little bit about [20:35.080 --> 20:42.080] tracing arbitrary functions. Just as a little bit of background, the way Delve [20:43.240 --> 20:48.840] uses eBPF from the Go side of things is through the Cilium eBPF package. There are a [20:48.840 --> 20:55.840] few other Go-based eBPF packages out there. Originally, I implemented this using the one from [20:57.800 --> 21:04.800] Aqua Security, but I ended up switching to Cilium for a few different reasons. The [21:06.640 --> 21:09.880] first thing that we need to do when we're tracing these arbitrary functions is [21:09.920 --> 21:15.040] load the eBPF program into the kernel so that we can start triggering it with some [21:15.040 --> 21:22.040] of these events. Once we've loaded the eBPF program, we attach uprobes to each symbol. [21:23.320 --> 21:28.560] This slide is actually a little bit outdated, because we don't actually use uretprobes. [21:28.560 --> 21:35.560] Uprobes can be attached arbitrarily to different addresses within the [21:35.960 --> 21:42.960] binary. Uretprobes are typically used to hook into the return of a function, which [21:43.560 --> 21:47.760] seems like something that would be super, super useful. In theory it is, but with Go [21:47.760 --> 21:54.760] it doesn't work very well because of how Go manages goroutine stacks. When Go has to [21:56.960 --> 22:03.960] inspect the stack, it reads up the stack to unwind it a little bit, and [22:05.920 --> 22:11.520] if it sees anything that doesn't look right, it'll panic. Uretprobes work by overwriting [22:11.520 --> 22:18.520] the return address of the function that we're trying to probe; Go notices that during its [22:21.760 --> 22:28.760] runtime work and freaks out. So we just use uprobes. Again, we want to do as much in the kernel [22:29.200 --> 22:35.200] as possible to limit overhead, so we have to communicate function argument and return values [22:36.320 --> 22:43.320] to the eBPF program and get those values back from it. So first, we load [22:43.720 --> 22:49.400] it. The first thing we have to do is write the eBPF program. The second thing is to compile the program [22:49.400 --> 22:54.440] and generate some helpers; this is what the Cilium package helps us with. Then we have [22:54.440 --> 22:59.680] to load the programs into the kernel. These are actually links; I'll publish these slides so [22:59.680 --> 23:06.680] you can follow along at home, but I'll show a little bit of the code here. This is an [23:06.760 --> 23:13.760] example of the eBPF program that we use, written in C, basically. We have access to a bunch [23:18.240 --> 23:24.480] of different eBPF-based data structures, like maps and ring buffers. These are just different [23:24.480 --> 23:28.560] ways to communicate between the eBPF program running in the kernel and the Go program [23:28.960 --> 23:35.960] that's running in user space. I won't go through all of this exhaustively, for time, but again, [23:36.440 --> 23:42.480] if you want to look at it yourself, go ahead and follow the link.
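As a small, hypothetical sketch of what the user-space half of that conversation can look like in Go (the field names, sizes, and layout here are invented for illustration and do not match Delve's real definitions):

```go
// Package traceevent sketches the user-space view of one record pushed over
// the ring buffer by the kernel-side eBPF program.
package traceevent

import (
	"bytes"
	"encoding/binary"
)

// Event mirrors the C struct the probe fills in for each function entry/exit.
type Event struct {
	FnAddr      uint64    // entry address of the traced function
	GoroutineID uint64    // goroutine ID read from the runtime's g struct
	IsRet       uint8     // 0 = function entry, 1 = function return
	_           [7]byte   // padding to match the C struct's alignment
	ArgData     [256]byte // raw argument / return-value bytes copied by the probe
}

// Decode parses one raw ring-buffer record into an Event.
func Decode(raw []byte) (Event, error) {
	var ev Event
	err := binary.Read(bytes.NewReader(raw), binary.LittleEndian, &ev)
	return ev, err
}
```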
The second thing that [23:42.480 --> 23:47.320] we have to do is actually compile this eBPF program and make it usable from [23:47.320 --> 23:53.040] Go. The Cilium eBPF package has a really nice helper that you can use with go [23:53.120 --> 23:58.280] generate to compile the object file that is your eBPF program. It generates [23:58.280 --> 24:02.640] a bunch of helpers for you that you can call to load and interact with that [24:02.640 --> 24:09.640] eBPF program. Then finally, we have to load the eBPF program into the kernel. Again, the [24:14.720 --> 24:21.720] Cilium eBPF library has a ton of helpers to facilitate that. We open up [24:22.400 --> 24:29.400] the executable that represents the process that we're debugging, we call the helper that [24:29.680 --> 24:35.120] the package generated for us, and then we initialize some of the things that we need, like [24:35.120 --> 24:42.120] the ring buffer and the map data structure that we use to pass values back and forth. [24:44.880 --> 24:51.380] The next thing we have to do is attach our uprobes. First, we find an offset to attach [24:51.580 --> 24:58.580] to, we attach the probe at that offset, and then we go from there. We have a little helper [25:01.180 --> 25:08.180] here to turn an address within the program into an offset; the offset is just an offset [25:08.660 --> 25:15.660] within the binary itself as it's loaded into memory. Then from there, we attach our probe. [25:16.660 --> 25:23.660] It's as simple as taking the executable that we opened [25:24.100 --> 25:31.100] earlier, which we have attached to this eBPF context here, and calling this Uprobe method, [25:31.860 --> 25:38.860] passing it the offset and the PID. The nice thing about this is that you pass along the PID [25:39.860 --> 25:45.860] so that this eBPF program is constrained to just the process that you're trying to debug, [25:45.860 --> 25:51.860] because the programs that you load are actually global, so they're not by [25:51.860 --> 25:58.860] themselves attached to any specific process. Then from there, we need to actually communicate [26:01.500 --> 26:08.500] with this program. We need to store function parameter information, and then we need to [26:08.860 --> 26:14.260] communicate that information to the program. I won't go too much into the code here [26:14.260 --> 26:21.260] for the sake of time, but essentially we need to tell the eBPF program all of the function [26:23.460 --> 26:28.980] argument information and the return value information, where they're located, whether they're on the stack [26:28.980 --> 26:34.340] or in registers, and let it know where to find them so that it can read all of this [26:34.340 --> 26:41.340] information and send it back to the user-space program. When we want to get the data [26:41.340 --> 26:47.540] back, we use a ring buffer, again to communicate between user space and our program running [26:47.540 --> 26:52.420] in kernel space. Essentially it's just a stream of all of the information coming [26:52.420 --> 26:57.140] back, all of the information that's being read and picked up by the eBPF program. That's [26:57.140 --> 27:04.140] ultimately what gets displayed to you when we run the trace command.
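Pulling those steps together, a stripped-down sketch of that user-space flow with the cilium/ebpf library might look like the following. The loadTraceObjects, traceObjects, UprobeDlvTrace, and Events identifiers are placeholders for whatever bpf2go generates from your own C source, not Delve's real names, and Delve additionally converts entry addresses to file offsets and writes per-function argument metadata into a map before attaching:

```go
// Package tracer sketches the user-space plumbing for an eBPF-based tracer.
package tracer

// Compile the C program and generate Go bindings for its programs and maps.
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go trace ./bpf/trace.bpf.c

import (
	"github.com/cilium/ebpf/link"
	"github.com/cilium/ebpf/ringbuf"
	"github.com/cilium/ebpf/rlimit"
)

// Trace attaches one uprobe per symbol in fns to the binary at binPath,
// restricted to process pid, and streams raw ring-buffer records to events.
func Trace(binPath string, pid int, fns []string, events chan<- []byte) error {
	// eBPF maps count against RLIMIT_MEMLOCK on older kernels.
	if err := rlimit.RemoveMemlock(); err != nil {
		return err
	}

	// Load the compiled programs and maps into the kernel
	// (loadTraceObjects is the bpf2go-generated helper).
	var objs traceObjects
	if err := loadTraceObjects(&objs, nil); err != nil {
		return err
	}
	defer objs.Close()

	// Open the executable we're tracing so we can attach uprobes to it.
	ex, err := link.OpenExecutable(binPath)
	if err != nil {
		return err
	}

	for _, sym := range fns {
		// One uprobe per traced function entry. The PID option keeps this
		// globally loaded program from firing for other processes that map
		// the same binary.
		up, err := ex.Uprobe(sym, objs.UprobeDlvTrace, &link.UprobeOptions{PID: pid})
		if err != nil {
			return err
		}
		defer up.Close()
	}

	// The kernel side pushes one record per probe hit into this ring buffer.
	rd, err := ringbuf.NewReader(objs.Events)
	if err != nil {
		return err
	}
	defer rd.Close()

	for {
		rec, err := rd.Read()
		if err != nil {
			return err
		}
		events <- rec.RawSample // raw bytes; decode into an event struct upstream
	}
}
```

On the kernel side, the matching C program reads the goroutine ID and the argument locations it was given, copies the values, and submits one record per hit into that ring buffer.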
I'll go through another [27:05.860 --> 27:11.540] quick demo of actually using the eBPF backend. All you have to do to enable it, for now, [27:11.540 --> 27:18.540] is add --ebpf to the trace command, so if I run our make command here (nobody look [27:19.140 --> 27:26.140] at my password), we see that the trace happens. From here you can't really tell [27:29.660 --> 27:34.620] that it's significantly faster, but the output is a little bit different. As I mentioned, [27:34.620 --> 27:40.100] this is still an experimental, work-in-progress backend, so some of the output [27:40.100 --> 27:46.260] is a little bit different and it doesn't have exact parity with the traditional, more [27:46.300 --> 27:52.460] established tracing backend, but you can see it works. You see the arguments, the return [27:52.460 --> 27:59.540] values, and everything like that, and this is all happening with significantly less overhead. [27:59.540 --> 28:04.460] So, a few downsides of the eBPF approach. The programs are written in a constrained version [28:04.460 --> 28:10.540] of C, so you're not writing Go. You end up having to fight the verifier a lot; if you [28:10.540 --> 28:17.540] don't know what that means, that's great for you, congratulations. There are a lot of constraints [28:19.860 --> 28:25.380] on stack sizes and things like that within eBPF programs, which can be kind of gnarly [28:25.380 --> 28:32.380] to deal with. It's different to write some control flow, like loops and things like that. [28:33.500 --> 28:38.940] And as I mentioned, uretprobes do not play well with Go programs at all: do not use them, [28:38.940 --> 28:45.940] do not try. And that's it. Thank you very much. [28:47.700 --> 28:54.700] Thank you. Unfortunately, we do not have time for questions, but if you see him in the hallway [28:55.660 --> 29:02.660] track, you can always ask him any questions, suggest improvements, bug fixes, et cetera. [29:08.940 --> 29:15.940] If you leave, it's better to do so on this side. You may pass by the stage, and there is [29:18.260 --> 29:20.140] also a swag table there.