[00:00.000 --> 00:12.280] It was a discussion that we originally scheduled for an hour, but due to scheduling issues,
[00:12.280 --> 00:18.920] it got collapsed to 25 minutes, which we are going to stretch to an amazing 28.
[00:18.920 --> 00:21.680] But everybody has to be in this with us together.
[00:21.680 --> 00:25.280] The topic is one that everybody has a lot of thoughts on.
[00:25.280 --> 00:29.960] We're in the beginnings of figuring out how we feel about it as a community and as a movement.
[00:29.960 --> 00:35.200] And so I want to do something a little bit different for this session.
[00:35.200 --> 00:44.000] So raise your hand if you think you have a question, a comment, or a topic that you wish would be addressed during this session.
[00:44.000 --> 00:45.880] Okay, good, five people.
[00:45.880 --> 00:48.560] I want everyone who raised their hand to come up.
[00:48.560 --> 00:54.240] Or if you think that by the time those five questions get asked, you will have a comment or question,
[00:54.400 --> 01:00.040] come up on this side and line up here, because if we take the mic around, we will have no time.
[01:01.160 --> 01:05.840] Who thinks that they might have something to say about any of those comments or topics?
[01:05.840 --> 01:11.680] Or just has a lot of thoughts and maybe doesn't even know what yet; you could be some of those people who had comments or questions.
[01:11.680 --> 01:17.080] If you think you probably want to just say something in reaction to that, come line up over here.
[01:17.080 --> 01:19.800] And what we'll do is we'll go through the questions and comments.
[01:19.800 --> 01:21.240] Those people can come over here.
[01:21.240 --> 01:27.480] We'll have two sections of the line: people who have not spoken yet, and people who have but want to say more.
[01:27.480 --> 01:32.640] And we're going to speak as briefly and concisely as we can, and it's going to be awesome.
[01:32.640 --> 01:33.640] All right.
[01:33.640 --> 01:42.560] So I want the lines to actually come just a tiny bit closer so we can be efficient.
[01:42.560 --> 01:44.560] Yeah, so is this a line?
[01:44.560 --> 01:46.400] This is the line of people who want to talk?
[01:46.400 --> 01:49.400] No, we need people who want to participate in the discussion.
[01:49.480 --> 01:51.480] You're here because you want to talk.
[01:51.480 --> 01:53.480] Okay, Van wants to talk.
[01:53.480 --> 01:55.480] Van wants to talk, so over here. But no, wait, no, no, wait.
[01:55.480 --> 01:57.480] This is not it, so hold on.
[01:57.480 --> 02:01.480] People who want to react to the topics, over here. If you're not sure, just line up.
[02:01.480 --> 02:02.480] It'll be fun.
[02:02.480 --> 02:05.480] You don't necessarily have to answer any particular question.
[02:05.480 --> 02:07.480] You can let somebody else come forward.
[02:07.480 --> 02:08.480] Okay, great.
[02:08.480 --> 02:11.480] And so we're going to start with the people who have the topics they want to talk about.
[02:11.480 --> 02:13.480] And then we're going to let people come over here.
[02:13.480 --> 02:18.480] And so it can be a comment, a question, a topic you want to talk about.
[02:18.480 --> 02:20.480] And people can go back and forth.
[02:20.480 --> 02:24.480] And I suspect people will come up as they want to join the discussion.
[02:24.480 --> 02:26.480] It's basically a self-forming panel, okay?
[02:26.480 --> 02:28.480] We're going to prioritize people who haven't spoken.
[02:28.480 --> 02:30.480] And we'll see what happens.
[02:32.480 --> 02:34.480] Hi, thank you, Karen.
[02:34.480 --> 02:35.480] My name is Alex.
[02:35.480 --> 02:40.480] And actually I have a question related to something that is on the board already.
[02:40.480 --> 02:46.480] There has been a lot of conversation on the subject of AI trained on code.
[02:46.480 --> 02:52.480] And what I notice is that the majority of those are very U.S.-centric and are mostly around the
[02:52.480 --> 02:58.480] question of fair use, which is kind of, I guess, a philosophical thing in a way.
[02:58.480 --> 03:06.480] But in the EU, there seem to be some regulations on the subject relevant to this,
[03:06.480 --> 03:11.480] but from the previous hype cycle, let's say, or the previous cycle of technologies
[03:11.480 --> 03:13.480] that gave us a lot of interesting things,
[03:13.480 --> 03:19.480] that being data mining, web search results, information retrieval.
[03:19.480 --> 03:24.480] So there have been some laws and some academic papers published on this subject.
[03:24.480 --> 03:27.480] And they are kind of EU-focused.
[03:27.480 --> 03:30.480] And they almost never get mentioned in the discussions online.
[03:30.480 --> 03:37.480] I was wondering why that is, and whether it could be productive to see it from another angle.
[03:37.480 --> 03:39.480] Something like that.
[03:39.480 --> 03:40.480] I personally love that.
[03:40.480 --> 03:44.480] And I love that you kicked this off with that question, because you've just brought it to the conversation.
[03:44.480 --> 03:50.480] Does anyone on this side want to answer that?
[03:50.480 --> 03:56.480] So the nice thing about the EU, and a lot of other places, is that they've basically made a lot of the things
[03:56.480 --> 04:00.480] that you're fighting about in the U.S. already de facto legal.
[04:00.480 --> 04:05.480] And so that is why, for example, with the LAION, however you pronounce it, database,
[04:05.480 --> 04:11.480] a lot of that model building is happening in Germany.
[04:11.480 --> 04:18.480] Or that's the reason why in India there's a lot of scientific literature that is being created and put into models.
[04:18.480 --> 04:21.480] And then they export those models.
[04:21.480 --> 04:26.480] Basically, the EU, in my opinion, is ahead of the U.S. in this area.
[04:26.480 --> 04:30.480] And what's happening is that in the U.S., this is still an open question
[04:30.480 --> 04:39.480] and something that could be sort of de facto made expensive or hard.
[04:39.480 --> 04:41.480] Awesome. So we're going to do it like that.
[04:41.480 --> 04:42.480] Each answer will be brief.
[04:42.480 --> 04:44.480] I'm going to continue with people down here.
[04:44.480 --> 04:47.480] When they're done, I'm going to go to the people over here.
[04:47.480 --> 04:51.480] There are several people in the audience who I know already have opinions on this question.
[04:51.480 --> 04:52.480] So come on up.
[04:52.480 --> 04:53.480] Okay.
[04:53.480 --> 04:55.480] Did you want to answer that or no?
[04:55.480 --> 04:56.480] Okay.
[04:56.480 --> 04:58.480] Anybody over here want to address that point?
[04:58.480 --> 05:01.480] Or we'll move on to the next one.
[05:01.480 --> 05:07.480] The University of Cambridge has a group called the Cambridge University Ethics in Mathematics Society,
[05:07.480 --> 05:10.480] which runs conferences occasionally.
[05:10.480 --> 05:11.480] It's trying to do two things.
[05:11.480 --> 05:16.480] It's trying to make ethics training a mandatory part of mathematics teaching,
[05:16.480 --> 05:20.480] just as it is in law and engineering and related fields,
[05:20.480 --> 05:24.480] so that mathematicians who go into AI, for example,
[05:24.480 --> 05:28.480] have some idea of the ethical implications of that work.
[05:28.480 --> 05:38.480] At one of these conferences, someone who'd been part of a UK government review of AI implications
[05:38.480 --> 05:43.480] came up with a list of things that reminded me of the four freedoms,
[05:43.480 --> 05:46.480] but for explainable AI.
[05:46.480 --> 05:53.480] I'd like to ask, what is the closest thing we have to the four freedoms in the context of AI?
[05:53.480 --> 06:01.480] And also, does anyone else know of other initiatives to give mathematicians ethics training?
[06:01.480 --> 06:03.480] Also an excellent question.
[06:03.480 --> 06:08.480] Do you want to... and other people who want to participate in the discussion,
[06:08.480 --> 06:11.480] you're here for a discussion, so please come on down.
[06:11.480 --> 06:16.480] And Bea, I'm going to look out for people trying to get out.
[06:16.480 --> 06:21.480] I've recently done quite a big ethics-in-AI project.
[06:21.480 --> 06:30.480] And the first problem I ran into is how you approach defining ethics and where you ground it.
[06:30.480 --> 06:34.480] And in this project, we went with the fundamental rights,
[06:34.480 --> 06:39.480] but that creates a new problem, because there are several definitions of fundamental rights,
[06:39.480 --> 06:41.480] and then you have to choose one.
[06:41.480 --> 06:49.480] And we were lucky that there was some kind of model we could use that was adhered to by the Dutch government,
[06:49.480 --> 06:52.480] and that had a list of fundamental rights.
[06:52.480 --> 06:58.480] So that would be my answer.
[06:58.480 --> 07:03.480] Start by looking at fundamental rights, and as to which ones,
[07:03.480 --> 07:07.480] make your own choice there.
[07:08.480 --> 07:13.480] I want to make a general comment about ethics and regulation.
[07:13.480 --> 07:20.480] Any ethics, any regulation, any restriction we put in place, we put it on us, the good guys.
[07:20.480 --> 07:25.480] It gives the bad guys a monopoly on doing the unethical things.
[07:25.480 --> 07:28.480] Keep that in mind.
[07:28.480 --> 07:33.480] I don't really agree with that one.
[07:34.480 --> 07:46.480] I don't really agree with that one because, for example, a government can ask for an ethical assessment of some system,
[07:46.480 --> 07:53.480] and then the good guys can tell the bad guys, well, you've been very naughty.
[07:53.480 --> 07:59.480] No, generally bad guys don't listen to their governments; that's what good guys do.
[07:59.480 --> 08:04.480] But bad guys ignore the laws.
[08:04.480 --> 08:14.480] So one comment about the focus on ethics is that it is being approached from a very different perspective than the typical four freedoms or the OSD.
[08:14.480 --> 08:22.480] The OSD and the four freedoms start with freedom zero, the ability to run the program at any time, for any purpose.
[08:22.480 --> 08:32.480] That is the thing that is being explicitly denied by a lot of the ethical efforts around AI, whether or not that will be successful.
[08:32.480 --> 08:40.480] But it's coming much more from the ethical-licensing side, where they're trying to restrict it.
[08:40.480 --> 08:51.480] You can't use this if you're doing climate things, or if you're going to make someone be discriminated against, or have all these societal effects.
[08:51.480 --> 08:59.480] I think a more free-software-aligned one would start with: you can use the AI for whatever purpose you wish.
[08:59.480 --> 09:06.480] I don't know if that's what we'd want to say, but that's what I would say is most aligned with freedom zero, and I'm not seeing it out there.
[09:06.480 --> 09:12.480] Anybody over here want to comment on this?
[09:12.480 --> 09:25.480] That slightly answered the question. Nobody's touched on the question I asked about training mathematicians specifically, who are often recruited by AI companies, in ethics.
[09:25.480 --> 09:29.480] Does anyone know of other efforts to do that?
[09:29.480 --> 09:31.480] Anybody in the audience?
[09:31.480 --> 09:32.480] I can ask.
[09:32.480 --> 09:38.480] I am against training mathematicians about ethics.
[09:38.480 --> 09:47.480] The response that was not on the microphone is that the audience member asserts that mathematicians don't need and should not get ethics training.
[09:47.480 --> 09:51.480] They should only get training in mathematics.
[09:51.480 --> 09:53.480] Can I answer this one?
[09:53.480 --> 10:01.480] I think that it's not the mathematicians who decide that they are going to be hired, but the companies.
[10:01.480 --> 10:04.480] I can come back on both those points.
[10:04.480 --> 10:09.480] It's a discussion, so I'm trying to decide if I'm going to weigh in.
[10:09.480 --> 10:22.480] To the person who said mathematicians don't need ethics training: you should look at the resources compiled by the Cambridge University Ethics in Mathematics Society, because they answer that point in great depth.
[10:22.480 --> 10:25.480] Essentially, I think that you are mistaken.
[10:26.480 --> 10:31.480] As for the question of the mathematicians being hired, mathematicians are human beings.
[10:31.480 --> 10:38.480] They have agency; they are not just passive robots who have to work for companies doing evil things with AI.
[10:38.480 --> 10:42.480] You have a choice about what you do in the world.
[10:42.480 --> 10:44.480] It's the same as for a programmer, right?
[10:44.480 --> 10:48.480] At the end of the day, you are the one building it, so you are the one who can say no.
[10:48.480 --> 10:49.480] Exactly.
[10:49.480 --> 11:04.480] At the end of the day, they are going to blame you, because you were the one who fixed Volkswagen's engines to cheat, and they're not going to take the blame.
[11:04.480 --> 11:11.480] If you do want to participate, you have to come up. We can't do the shouting.
[11:12.480 --> 11:31.480] So, one other thing about ethics is that the entire discussion on ethics is actually being used by companies to, well, do bad things, such as, let's say, not releasing the GPT model,
[11:31.480 --> 11:39.480] an argument OpenAI actually used, that it will be used for unethical purposes, to close down the model.
[11:39.480 --> 11:45.480] I think this is a really bad approach. We should have a framework where that is not allowed.
[11:45.480 --> 11:56.480] And ethics should not be a reason to create closed models or be secret about them.
[11:56.480 --> 12:01.480] Yeah, that's good. It's a good segue, because that's exactly the question I had.
[12:02.480 --> 12:05.480] Companies like OpenAI, which, by the way, were really tricky with their name.
[12:05.480 --> 12:09.480] A lot of people think they're open and they're not.
[12:09.480 --> 12:21.480] They did use some questionable practices to train their models, and underpaid people in third-world countries for some very bizarre content.
[12:21.480 --> 12:24.480] And probably that's what makes the models really good, actually.
[12:24.480 --> 12:28.480] All this data they collected that they're secretive about, and all these practices they used.
[12:28.480 --> 12:37.480] So my question is, if anybody knows, how can we from the open source community compete against that in a good way
[12:37.480 --> 12:45.480] and get a model that is as powerful and as capable, just like Stable Diffusion coming out?
[12:45.480 --> 12:53.480] We can replicate their papers, but it's all this data and these practices that take it to the next level, which OpenAI is not releasing.
[12:54.480 --> 12:59.480] Okay, it feels like you're stuck in the middle here, but you still have something to say.
[12:59.480 --> 13:03.480] Just get up right now and everyone will let you out. Just come to the front.
[13:03.480 --> 13:09.480] Does anyone here who has not spoken want to respond to this? Anyone who's not spoken?
[13:09.480 --> 13:19.480] On that point, in Bradley's talk, you mentioned that free software had wins early on because of the free, as in free beer, part of it.
[13:19.480 --> 13:26.480] And I think one of the things that's interesting about your question is, what can we do in the free software world?
[13:26.480 --> 13:35.480] One of the big barriers, I think, is that my understanding, correct me if I'm wrong, is that the models that GPT has used
[13:35.480 --> 13:40.480] comprise maybe a decade of time and billions of dollars of investment.
[13:40.480 --> 13:44.480] This is a challenge, I think, for us in the community to compete against.
[13:49.480 --> 13:57.480] How do we compete? Regarding AI, AI is about recognizing patterns, and proprietary AI companies are
[13:57.480 --> 14:09.480] censoring models like ChatGPT from actually giving the right answer about patterns that they recognized in the data.
[14:09.480 --> 14:13.480] And how do we compete with such closed models?
[14:13.480 --> 14:19.480] You know, there is a website, a copy of Twitter, called gab.com.
[14:19.480 --> 14:22.480] And the CEO of this site is Andrew Torba.
[14:22.480 --> 14:33.480] And recently he announced that he will create an AI model based on Christian values, about openness and freedom of speech,
[14:33.480 --> 14:42.480] where the model will be trained to recognize patterns in data without the censorship that others apply to this data.
[14:42.480 --> 14:53.480] I think one area where open source really can get the edge over closed-source initiatives is in explainability.
[14:53.480 --> 15:01.480] It's hard to explain a model, it's hard to understand the algorithm, how it works, and its ethics,
[15:01.480 --> 15:09.480] but you can put a layer on it that makes it explainable, that makes it understandable why the model is acting in a certain way.
[15:09.480 --> 15:17.480] And that's a level, that's an area, I think, where we can really get the edge as open source developers.
[15:17.480 --> 15:25.480] There are a few people who haven't spoken yet.
[15:25.480 --> 15:34.480] Hello. Regarding the problem of the closedness of OpenAI, there is actually one big problem.
[15:34.480 --> 15:40.480] These kinds of models are really powerful; it's like an atomic bomb, it's not like a gun.
[15:40.480 --> 15:50.480] So if a company is accountable for the output of those models, they will do everything in their power to prevent misuse
[15:50.480 --> 15:57.480] and prevent people from asking the AI how to dissolve a body or how to kill somebody.
[15:57.480 --> 16:02.480] So they have to do this; they are not doing that because they are evil.
[16:02.480 --> 16:10.480] So if you go the Stable Diffusion way, you end up with Unstable Diffusion.
[16:10.480 --> 16:16.480] I think everybody here knows what Unstable Diffusion is; just look it up.
[16:16.480 --> 16:26.480] So it's a really edgy situation in which we have discovered this kind of powerful weapon,
[16:26.480 --> 16:31.480] but we are not ready to handle it, and if those kinds of companies handle it,
[16:31.480 --> 16:39.480] they will try to censor it and reduce its scope in order not to get shut down by the government or by something else.
[16:39.480 --> 16:47.480] So I don't know what the solution is, but we have to go through the open source process of training them,
[16:47.480 --> 16:50.480] but we don't have the resources to do that.
[16:50.480 --> 17:01.480] They spent something like $10 million to train GPT-3, and I don't know if the open source community can pull off the same thing.
[17:01.480 --> 17:13.480] They are trying to do that with a couple of other models, but they are nowhere near there yet in accuracy and functionality,
[17:13.480 --> 17:20.480] but I think we will end up there. I don't know how. That's my question.
[17:20.480 --> 17:27.480] There are a lot of question marks around this entire discussion, which is one of the reasons we wanted to have it as a group discussion.
[17:27.480 --> 17:34.480] Because this is a short session, we are just going to touch on the topics, but we have a mailing list discussion already started,
[17:34.480 --> 17:39.480] and I think we should engage in a really deep conversation there, following up on some of this conversation.
[17:39.480 --> 17:43.480] So let's see how far we can get now. Did somebody want to respond to that?
[17:43.480 --> 17:47.480] Anybody who hasn't spoken yet?
[17:47.480 --> 17:54.480] Well, I don't know that the open source community doesn't have resources.
[17:54.480 --> 18:02.480] Actually, it does, and even now there is this big BLOOM model which tries to replicate GPT,
[18:02.480 --> 18:08.480] and they already made a system where each participant can just plug their GPU into its network,
[18:08.480 --> 18:14.480] and it participates in this giant resource cloud, so to speak.
[18:14.480 --> 18:17.480] So we might get there.
[18:17.480 --> 18:26.480] In the end, I think what Stable Diffusion shows is that in AI, openness always wins.
[18:26.480 --> 18:33.480] Nobody cares about DALL-E anymore; it's just a random project, literally no one cares anymore.
[18:33.480 --> 18:41.480] And all it took was for one open source Stable Diffusion to appear, and everyone started training.
[18:41.480 --> 18:46.480] Every single person started using it. That's how, little by little, it became so great.
[18:46.480 --> 18:50.480] It's way smaller, way, way smaller.
[18:50.480 --> 18:55.480] The model is way, way smaller, like a hundred times smaller.
[18:55.480 --> 18:59.480] Well, that's amazing.
[18:59.480 --> 19:01.480] That's really amazing, yes.
[19:01.480 --> 19:05.480] Yes, the Stable Diffusion models are like 12 gigabytes.
[19:05.480 --> 19:07.480] I have it on my laptop.
[19:07.480 --> 19:10.480] It's really nice.
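[Editor's note: to illustrate the point above about running Stable Diffusion locally on an ordinary machine, here is a minimal sketch using the Hugging Face diffusers library. The checkpoint ID, prompt, and output file name are illustrative assumptions, not details given by the speakers.]

```python
# Minimal sketch of running a Stable Diffusion checkpoint locally with the
# Hugging Face "diffusers" library. The checkpoint ID and prompt below are
# assumptions for illustration; any compatible checkpoint works. The weights
# (several gigabytes) are downloaded once and cached locally.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, not an endorsement
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)  # CPU works too, just much more slowly

# Generate one image from a text prompt and save it to disk.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```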
[19:10.480 --> 19:14.480] Anybody else want to add anything to this particular conversation?
[19:14.480 --> 19:19.480] I have a question, but at the same time I will go into another topic.
[19:19.480 --> 19:24.480] So we're talking about pictures and Stable Diffusion, but then I want to ask:
[19:24.480 --> 19:28.480] okay, they also trained Copilot.
[19:28.480 --> 19:36.480] So they trained on, they used, code which they were not allowed to use, or did not pay for, even if it's open source.
[19:36.480 --> 19:42.480] So why didn't licensing work there?
[19:42.480 --> 19:47.480] For example, I heard that the EU declared open source a public good.
[19:47.480 --> 19:53.480] So should the EU protect a public good?
[19:53.480 --> 19:56.480] Copilot, anyone want to?
[19:56.480 --> 19:59.480] That was going to be my question.
[19:59.480 --> 20:04.480] I guess I can add my thoughts to it and perhaps spark some discussion.
[20:04.480 --> 20:07.480] But yeah, it is a really great question.
[20:07.480 --> 20:14.480] Should companies be allowed to use code that has been licensed under licenses like the GPL or AGPL,
[20:14.480 --> 20:18.480] or other licenses that require people to disclose the source code?
[20:18.480 --> 20:26.480] Should that code be used in training these datasets, and furthermore, as for the resulting output from the models,
[20:26.480 --> 20:30.480] are those considered covered works?
[20:30.480 --> 20:35.480] And perhaps this goes back a little bit to the fair use question that was raised earlier.
[20:35.480 --> 20:44.480] But I think companies do have a duty to address this concern
[20:44.480 --> 20:50.480] and perhaps start putting in place a framework that would allow people to opt in
[20:50.480 --> 20:58.480] to having their work used for training these systems.
[20:58.480 --> 21:02.480] Yeah, and I also had a second part to that question, which was:
[21:02.480 --> 21:08.480] if you declare open source a public good, should you enforce the license from the EU side?
[21:08.480 --> 21:18.480] Because if somebody uses open source not according to the core principles, then maybe you should step in?
[21:18.480 --> 21:21.480] I think that question is...
[21:21.480 --> 21:30.480] I think in legal terms what is being asked here is: should the government take the place of the licensor?
[21:30.480 --> 21:36.480] Like, if the licensor isn't enforcing: suppose you upload AGPL code to GitHub
[21:36.480 --> 21:40.480] and Microsoft scrapes that into Copilot and reuses it.
[21:40.480 --> 21:45.480] And you don't have the resources or the will or whatever to try to prosecute Microsoft,
[21:45.480 --> 21:52.480] then should the EU or member states step in and prosecute on your behalf?
[21:52.480 --> 21:57.480] Yeah, I think we're just beginning to see how this is going to shake out.
[21:57.480 --> 22:04.480] There were several lawsuits filed around Copilot, and I believe that there are more coming.
[22:04.480 --> 22:07.480] So it'll be really interesting to see how it shakes out.
[22:07.480 --> 22:14.480] And what's interesting about the suits that have been filed already is that there are quite a number of legal theories that have been thrown out there.
[22:14.480 --> 22:20.480] And I think actually the core licensing argument hasn't been made yet,
[22:20.480 --> 22:25.480] while at the same time there's litigation happening about other data sets that have other freely licensed work.
[22:25.480 --> 22:30.480] So it'll be really interesting to see what those enforcement mechanisms are.
[22:31.480 --> 22:36.480] So I'm curious if you think this wording is appropriate.
[22:36.480 --> 22:46.480] If an AI, not just Copilot, takes in source code, free software, creates a model, and then suggests a code snippet,
[22:46.480 --> 22:49.480] would you consider that license washing?
[22:50.480 --> 23:01.480] Yeah, I think it depends on the amount, and it's a little bit like fair use.
[23:01.480 --> 23:10.480] If I read a book and take inspiration from the book, and I end up writing five words in a sequence
[23:10.480 --> 23:14.480] that are exactly the same as in the protected work, is that fair use?
[23:14.480 --> 23:17.480] So I think it depends.
[23:17.480 --> 23:26.480] But I think it depends on certain metrics, such as frequency, the amount of the work, and likeness, things like that.
[23:26.480 --> 23:35.480] So that relates to the next question I was going to ask, which is: can an AI, there is talk of AIs becoming legal persons,
[23:35.480 --> 23:42.480] can an AI perform sweat of the brow; could a work be the copyright of the AI itself?
[23:42.480 --> 23:47.480] And I would ask the follow-on question, which is sort of related to that, which we got in an email,
[23:47.480 --> 23:52.480] which is: should we have an ethical obligation to identify AI in conversation,
[23:52.480 --> 23:55.480] so people know they are interacting with an AI?
[23:57.480 --> 24:00.480] So I wanted to answer the previous question.
[24:00.480 --> 24:06.480] And yes, I think you should really step in and really try to enforce these licenses.
[24:06.480 --> 24:13.480] But the important note there is that these licenses should work towards opening the models,
[24:13.480 --> 24:18.480] not restricting what the models are trained on, which is the spirit of the GPL.
[24:18.480 --> 24:26.480] It's not to restrict someone from accessing this code, but rather to ensure that the derivative work, or derivative model, is also open source.
[24:26.480 --> 24:28.480] And that was the spirit of the GPL.
[24:28.480 --> 24:32.480] And I think the same still applies to any AI training.
[24:32.480 --> 24:42.480] Even if it's, like you say, a small inspiration, we should make sure that this small inspiration still results in a more open world in the end.
[24:47.480 --> 24:50.480] To follow on from that, I had a question.
[24:50.480 --> 24:54.480] What would you consider the open source equivalent in terms of AI?
[24:54.480 --> 24:57.480] Is it then just that the model is open source, or the output is open source?
[24:57.480 --> 25:01.480] Or do you also want the data set it was trained on to be open source?
[25:01.480 --> 25:03.480] And even then, how can you reproduce the different steps?
[25:03.480 --> 25:08.480] Because for many people, and I'm a physicist, for me and my colleagues AI is a black box.
[25:08.480 --> 25:15.480] And things just happen, and the whole power, I think, of open source is that we can actually go and look and understand the algorithms in the code.
[25:15.480 --> 25:18.480] And currently with AI, we have no idea.
[25:18.480 --> 25:23.480] That is the perfect place to end, because our time is up and there are so many questions.
[25:23.480 --> 25:30.480] Let's have this discussion at lists.copyleft.org slash mailman slash listinfo slash AI assists.
[25:30.480 --> 25:32.480] You have all been great sports.
[25:32.480 --> 25:34.480] Thank you, everybody, for coming up.
[25:34.480 --> 25:35.480] Thanks.