So, welcome to the next talk and welcome to Pieter Libin, who is trying to answer the question of whether the first AGI instance will be free or open source software.

Okay. Thank you very much for the introduction, and welcome to this talk, indeed. Maybe I'll wait a minute. Okay. So, indeed, I will be talking about AGI, artificial general intelligence, and I will be looking at the question of whether we can expect this to be open source or free software. I'm a professor at the VUB, where I do research in artificial intelligence. The VUB is a sister university of the ULB, where we are today, so I'm very happy to be here.

Before diving straight into AGI, let's first have a look at the advances we saw over the last decade, because some of them were quite significant. First of all, in 2015 we saw a big breakthrough by Google DeepMind, which developed an agent that could learn to play Atari video games. It's an old video game system, one that I used to play as a kid. This is generally considered quite a breakthrough, because an AI system was built that could learn, on its own, to play a wide range of these games.

Next, Google DeepMind made another breakthrough: an agent that could play the game of Go. Go is a board game comparable to games such as chess, but generally a harder one, and it was not expected that we could build an agent able to master it. But they did build such an agent, and it actually beat Lee Sedol, who is here on the slides and who was a world champion in Go. This AI agent was able to beat that human player.

Then, in 2017, they developed AlphaGo Zero, another Go agent. But here, instead of learning from data, as the earlier agents did, it learned solely by playing against itself. The agent just played against another copy of itself, and no data from human games was used. This was also considered quite a breakthrough, because if you can build systems that go out of distribution, that can learn more than the data has to offer, that is very interesting: it would allow us to learn beyond the capabilities that we as human beings have.

Later, DeepMind also came up with AlphaFold, addressing another problem that was thought to be very complex: predicting the structure of proteins based only on their amino acid sequences. They were able to build a deep neural network to do this job as well. More recently we saw, for example, OpenAI coming up with DALL-E, a large neural network that you can prompt to generate images. Here, for example, it was asked for a Matisse-style interpretation of a robot playing chess. So you see that these engines are becoming more and more powerful.

And maybe the most striking of all was ChatGPT, released in 2022: an agent that acts as a chatbot, and in a way it really feels like you're talking to a human being. This made a big impression on many scientists, but also on the general public, because this is a type of AI that is approachable for the general population. So, of course, all of this is still 'just' AI, although saying 'just AI' is maybe a bit disrespectful.
But in this talk, we're going to look at AGI, artificial general intelligence. To do so, we need a definition, and there is debate about what AGI exactly is. I chose this definition from Wikipedia, which is a consensus-driven website. Wikipedia says that AGI is an intelligent agent that could learn to accomplish any intellectual task that human beings can perform. That is what Wikipedia comes up with, and it is supported by an article in The Economist. We can interpret this as a system that can do the things that human beings can do; I think that's a reasonable interpretation. Does that mean an average human being, or the upper bound of what humanity is capable of? That is still open to interpretation. But I think we can agree that once you can emulate the cognitive abilities of an average human being, that would already be a great breakthrough.

You might think that this idea is just hype, and to some extent you would be right. But it is important to remember that the idea has been there from the start. This is a picture from the Dartmouth workshop, where a lot of very smart people assembled, back in 1956 already, quite a few years ago. You have Claude Shannon there, Marvin Minsky, who founded the MIT AI Laboratory, and also John McCarthy, who came up with Lisp, for example. They came together with this task: they were going to try to build a machine that could simulate every feature of intelligence, which is very closely related to the definition I just gave you. So the idea has really been around for some time. It is not only hype; it lies at the foundation of the field of AI.

So maybe this disclaimer: in this talk, I will not try to make predictions about AGI. That is a very hard thing to do, and I will not try. I will also not take too strong a stance. I will present you with what is out there, and with some of the difficulties to which I think the free and open source software community can make an important contribution, but I will refrain from taking too strong a stance. What I will do is discuss how a scientific approach to AGI is, in my opinion, really important, even crucial. And for that, you need reproducibility, reproducibility in a scientific context, which in my opinion is very closely related to free and open source software. I believe that we as a scientific community and the free and open source software community can have a big influence on each other in this regard.

So why should we care about AGI now? Why is it becoming a hype? Well, there are two good reasons, two elements that popped up over the last couple of years and that are really quite influential: first, large language models, and second, reinforcement learning. I am very well aware that this list is not complete, but in the interest of time I will focus on these.

Large language models I will try to explain as intuitively as possible. Basically, we have a language model, and this language model tries to predict the next word: based on a sequence of words, it tries to predict which word is most likely to come next. You can do that with all kinds of machine learning; you can do it with hidden Markov models, for example.
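To make that prediction task concrete, here is a minimal, purely illustrative sketch (my own toy example, not how any production language model is built): a bigram model that counts which word follows which in a tiny corpus and then predicts the most likely next word.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model is trained on vastly more text.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count, for every word, which words tend to follow it (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if the word is unseen."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' ('cat' follows 'the' most often in this toy corpus)
```

A large language model replaces these raw counts with a neural network that assigns a probability to every possible next token, but the task being solved, predicting the next word, is the same.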
But when we're talking about large language models, what we're actually referring to is using a deep neural network. Here we have a very simple neural network with just an input layer, an output layer, and one hidden layer. When we talk about deep neural networks, what we actually mean is that you have many hidden layers and different kinds of architectures that make this advanced learning possible. But the general principles are still the same: we have different layers, these layers are connected, and the weights on the connections between the layers are the parameters of our model. That is what we use to actually do the learning. Such a large neural network is really complex to train, and this is something we will come back to later. But it is also something for which all these research institutes build on free and open source software: the operating systems, and also the tooling for the networking of such training runs.

Now, of course, large language models went through an evolution, starting with relatively simple models that already showed some capabilities, up to what we now have in ChatGPT, a system that is actually quite impressive. And this tweet, it's just a tweet, says that it is indeed not AGI, but that some of its capabilities are really remarkable. Many people have used it to make very impressive demonstrations. For example, you can have it write a poem just by chatting to the bot, and it will generate code that you can actually run to play a simple game. So this is really something.

The next ingredient is reinforcement learning, which is actually the main topic of my research at the AI lab of the VUB. What we have is an agent and an environment, and we want the agent to learn to behave optimally in this environment. The agent can do so by performing actions in the environment. In this simple environment with Super Mario, that corresponds to pushing buttons on the controller. When an action is performed, we can observe the state, which here is simply the screen that we see, and a reward signal that tells us how well we are doing, a kind of feedback signal. If we can build an agent that, through these actions and through the observations of states and rewards, can learn how to behave optimally, that is what we call reinforcement learning; I'll sketch this loop in a moment.

This is a simple video game, but it does not take much imagination to see what happens if we replace the video game with the world. The state space will be much more complex, and the action space will be much more complex, but you can see that if you can make a sufficiently advanced agent, you would end up with AGI. And this is exactly what Silver et al., some influential researchers in the field of AI, argued in their 2021 paper "Reward is enough": reinforcement learning, where you follow a simple reward, is sufficient to learn advanced capabilities. They gave the example of the squirrel. The squirrel wants to maximize nuts; it likes to eat nuts, so it wants to maximize them. And in order to maximize nuts by following this simple reward signal, the squirrel will need to learn advanced capabilities: it will need to recognize trees, be able to climb trees, pick acorns, store them for winter, and so on.
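Here is the agent-environment loop sketched in code, using a toy corridor environment of my own invention rather than the Super Mario setup from the slides; it only shows the interaction loop, not an actual learning algorithm.

```python
import random

class ToyEnvironment:
    """A 1-D corridor: the agent starts at position 0 and is rewarded for reaching position 5."""
    def __init__(self):
        self.position = 0

    def step(self, action):            # action is -1 (step left) or +1 (step right)
        self.position = max(0, self.position + action)
        done = self.position >= 5
        reward = 1.0 if done else 0.0  # sparse reward signal, as in the talk
        return self.position, reward, done

def random_agent(state):
    """Placeholder policy: a learning agent would improve this using the observed rewards."""
    return random.choice([-1, +1])

env = ToyEnvironment()
state, done, total_reward = 0, False, 0.0
while not done:
    action = random_agent(state)            # the agent acts in the environment
    state, reward, done = env.step(action)  # the environment returns the next state and a reward
    total_reward += reward
print("episode finished, total reward:", total_reward)
```

A reinforcement learning algorithm such as Q-learning or a policy gradient method would replace the random policy with one that is updated from the observed rewards.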
So by following a very simple reward signal, we can actually produce quite complex capabilities. We will get back to this, because it is not without risk.

Okay. A third thing, besides LLMs and reinforcement learning, is compute. Since about a decade ago, maybe a little longer, we have really scaled up our compute. We now have GPUs that allow us to train very complex models, and the amount of compute we have at our disposal has really been a game changer.

Now, I said I will not try to predict when we will get AGI, but I will show you some quotes, because the opinions are actually quite divergent. You might know this guy: this is Geoffrey Hinton, a British-Canadian AI scientist and professor, who is really considered one of the godfathers of deep learning. And deep learning is what lies at the foundation of all these influential models that I just showed. He thinks that we might be only 20 years away from general-purpose AI or AGI, which is quite remarkable. He made this statement in March 2023, and he also said it is a statement he would not have made 10 years earlier; it is really based on the recent developments. On the other hand, people like Yann LeCun, also a very influential AI researcher and also considered one of the fathers of deep learning, have a different opinion. He says it will take us decades to even touch upon what AGI could be. So the opinions really diverge.

Then let's have a look at what the big AI companies think about it, because this will also be important in this talk. This is Shane Legg, who founded DeepMind together with Demis Hassabis in 2010. He thinks that AGI is likely by 2028, and he says so with a probability of 0.5, which of course makes it easier to make predictions. But this is a statement by one of the founders of DeepMind, and what these people say resonates with many people in the community. Next, maybe you know this guy: the CEO of OpenAI. He thinks it will take us until 2030 or 2031 to get to AGI. And I think it is very important to state that these are very influential people within huge companies, and these companies have a significant bias; their predictions might be a self-fulfilling prophecy. It is very important to keep this in mind, because the more people get hyped about AGI, the easier it is for them to attract interest and capital to work on this research. And again, I need to make this disclaimer: Altman also said it would take until around 2030, but with a huge confidence interval, and with huge confidence intervals it is always easier to make predictions. That is also something to take into account.

Okay. All this sparked interest, and maybe some concern. Even Snoop Dogg came up with a statement that he was concerned about AI. I had to remove some curse words here, but basically he said he heard 'this old dude that created AI' talking about how it is not safe, because these things might get minds of their own. He is talking about Geoffrey Hinton there, maybe a bit disrespectfully.
But I think this really resonates with the general population, and it might be a bit tough to really grasp what these breakthroughs actually mean and where they will lead.

Okay. Now, we could ask ourselves the question: are we aiming for AGI? As we will see, AGI has a lot of potential, a lot of positive elements associated with it, but also some significant risks. So, are we aiming for it? Well, don't take my word for it; I'm just a simple professor at the VUB, and in my lab it will not happen, I can be sure of that. But companies like OpenAI put on their website that they are working on it. So this is not something we are making up, and it is not just science fiction: companies are actually trying to build this. And not only OpenAI, but also Google DeepMind. They want to build AI responsibly, okay? And if you look a little lower, they also mention AGI. More recently, the CEO of Meta also expressed the wish to build this kind of technology. So it is really something that companies are working on, and as we will see, it will have an impact, so it is important to be aware of it.

So what will be the impact of AGI, or its potential impact? Let's start with the good. First of all, we would be able to tackle complex problems, maybe visit the stars; that would be a very nice achievement for humanity. We could automate things: advanced automation, maybe even complete automation. When we talk about automation, we might think of automating the assembly of cars, or automating a bakery. But of course, once your system is sufficiently smart, it would also allow us to automate coding jobs, for example, or research and teaching jobs. So this is really something that could have a huge impact. It would allow us to automate things; but if you automate things and do not distribute the wealth that is generated this way, you will also be in serious trouble. And then, of course, we can hope to enhance our human capabilities, which would also be quite an interesting byproduct of AGI.

Now the bad. Many of these good things could lead to serious social disruption. In a way, what is happening on social media, how people are being influenced, is already going on, and you can assume that once you have agents that are even more intelligent, this will be an even bigger problem. Also, if you start to automate everything without taking the politics into consideration, this might lead to serious social disruption. Another aspect is misalignment with humanity's goals. Even within humanity it is not easy to align our goals; there are many different views on how society should work. So how should we align an AGI to do what is best for us? Can we even define this?

And then the ugly: this is, of course, the existential risk, and it is an important concern. It is explored extensively in science fiction literature, but it is actually a real risk. We might even go further than AGI and build a superintelligent system that is able to greatly outcompete us, which might have even more far-reaching implications for our society. You might know the books of Isaac Asimov, who really explored how we can try to align such machines with what we as humanity would like to do. But that is really not such an easy thing.
And the existential risk of AGI is really something that has its own Wikipedia page. If we have an AGI, there are many ways you could think of in which it could influence our society or even wipe out the human race. This is a really negative point of view, and I am not saying it will necessarily be like this, but there are many options open to an AGI. So it is something we should take into account when we make the balance.

This is an example you may already have heard of; it was introduced by Nick Bostrom: the paperclip maximizer. The paperclip maximizer is an AI system that is given, by a set of humans, the objective of maximizing the number of paperclips. At the start, the AI system does this very efficiently: it finds ore, mines it very efficiently, and makes many, many paperclips. But when the mines run empty, there is a problem: the AI can no longer make paperclips, so it starts to use the atoms of other things to make paperclips as well. In the end, human beings and our entire Earth are transformed into paperclips, which is quite concerning. And it is good to think back to the 'reward is enough' paper, where we had the squirrel that wanted to maximize its acorns; in a way, that is very similar to the paperclip maximizer problem. To say the least, this is really concerning: if we do not specify our objective functions in a safe and meaningful way, we really might run into trouble.

This is something you might also have heard of: the probability of doom, p(doom), which is mentioned a lot on social media. It is the probability of this existential risk. Currently, we do not have a formal framework to reason about it, so making statements about it is purely intuitive and, in my opinion, at this point not very relevant. But I think most scientists would agree that this p(doom) is not zero. And if it is not zero, then you only need to build such an AGI once, one for which this probability is indeed not zero, and we will be in big trouble. So this is really something we need to be very well aware of.

We also countered the 'reward is enough' paper in our own paper, where we argue that scalar reward is not enough. If you have an agent that just follows one reward signal, for example acorns or paperclips, you might end up in deep trouble. It might be smarter to look into multi-objective or multi-criteria reward signals, where you can say: I want to maximize the number of paperclips, but I also want to keep most of humanity alive, for example (I will sketch this in a moment). So this is something to consider when developing such systems. But we also make the disclaimer that even with such a multi-criteria reward signal, there is no guarantee that this existential risk is avoided. That is something we should really be aware of.

In my opinion, safety should be key. There are many positive aspects to AGI, but we should be aware of safety, and for that I think a scientific approach is necessary. A scientific approach means that we need to formulate hypotheses about risk and safety, but also about purpose and impact. And once we can formulate hypotheses, we can also do experiments. To do experiments, our science needs to be reproducible. Now, reproducibility in science is not so easy; it is a very important topic, and it is not trivial.
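Before turning to reproducibility in practice, here is a small illustrative sketch of the scalar-versus-multi-criteria point (the plans and numbers are hypothetical, and the hard-constraint formulation is just one simple way to use a multi-criteria signal): with a scalar reward, the paperclip count is all that matters, while a second objective treated as a constraint lets us reject a plan that sacrifices humanity even if it yields more paperclips.

```python
# Two hypothetical plans an agent could choose between (illustrative numbers only).
plans = {
    "mine ore responsibly": {"paperclips": 1_000, "humans_alive_fraction": 1.0},
    "convert everything":   {"paperclips": 9_999_999, "humans_alive_fraction": 0.0},
}

def scalar_reward(outcome):
    # Single scalar objective: only the paperclip count matters.
    return outcome["paperclips"]

def multi_criteria_ok(outcome, min_humans=0.99):
    # A second objective used as a hard constraint alongside the paperclip objective.
    return outcome["humans_alive_fraction"] >= min_humans

best_scalar = max(plans, key=lambda p: scalar_reward(plans[p]))
acceptable = [p for p in plans if multi_criteria_ok(plans[p])]
best_multi = max(acceptable, key=lambda p: scalar_reward(plans[p]))

print("scalar reward picks:        ", best_scalar)  # 'convert everything'
print("multi-criteria reward picks:", best_multi)   # 'mine ore responsibly'
```

As stressed above, even such a multi-criteria formulation is no guarantee of safety; the sketch only illustrates why a single scalar objective leaves no room to express the other things we care about.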
In a wet lab, many things can go wrong: the equipment, the lab temperature, the purity of the chemicals you use, the skills of your technicians, which might differ. And actually also the sex of your technicians: it has been shown by Sorge and colleagues that the sex of the technicians who handle rodents influences your experiments, because male technicians, for example, stress out your rats more than female technicians do. So reproducing experiments is really challenging. But of course, in silico, on a computer system, in simulation, it could be much better. To reproduce things, we need two things: we need code, or a very rigorous description of what is going on in the code, and we need data.

Unfortunately, and this is a survey from 2018, not all scientific papers in AI actually come with code. As you can see here, a lot of papers come with pseudocode, some papers come with some test data, but not that many papers come with code, and that is really concerning. The major AI conferences, where we as AI scientists publish a lot of our peer-reviewed papers, are becoming more and more aware of this problem, and they really make a point of having people share code with their manuscripts. But there are still journals like Nature and Science, really influential journals, that do not enforce this, and that is really a pity. I should mention the Science Code Manifesto, which basically says that doing science outside of the wet lab coincides with releasing your code: you need the code to reproduce the work. This manifesto has raised quite some awareness, and in AI research there is a growing awareness of this.

However, when we talk about AI research these days, we indeed have academia, where research is being done, but we also have research institutes like Google DeepMind and OpenAI, which are inherently different organizations. For many academic institutions, myself included, it is important that experiments are reproducible and that we make our source code available, so that other researchers can build on top of our findings. But what about these research institutes? Well, the picture is really not black or white. For example, DeepMind developed the AlphaFold system to predict protein structures, and the code to train the neural network was not available. What happened? A group of researchers developed OpenFold, an open source implementation of this functionality, which was able to reproduce the work. This is what has happened in the free and open source software community so many times: because of the need to have software as open source, people spend their time rebuilding things. That is of course a good thing, but it would be better if the scientists just shared the code straight away. On the other hand, DeepMind and Google also made very important libraries available in a purely open source fashion, for example JAX, a library for very performant numerical computation, and TensorFlow, for building these large neural networks. These libraries are very influential and have really shaped how research is being done at this point. So there has been a major impact on AI research from these companies.
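As a concrete aside on what reproducibility in silico requires in practice: besides releasing code and data, an experiment typically has to pin its sources of randomness. A minimal sketch, assuming nothing more than NumPy and a stand-in for a real training run:

```python
import numpy as np

def run_experiment(seed):
    # Pin the random number generator so the run is repeatable.
    rng = np.random.default_rng(seed)
    # Stand-in for a real training run: draw "weights" and report a score.
    weights = rng.normal(size=10)
    return float(weights.sum())

# Publishing the seed alongside the code lets others obtain exactly the same numbers.
assert run_experiment(seed=42) == run_experiment(seed=42)
print(run_experiment(seed=42))
```

Publishing the seed together with the code is exactly what is impossible with a closed model behind an API, which is a point we come back to in a moment.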
Also, AlphaZero, the agent that learned to play Go: its code was not released with the paper, but more recently they did release code in an open source fashion, I think even a free software fashion, to allow other scientists to work with it. Then there is OpenAI. On the one hand, they have their Baselines library, a reinforcement learning library (from which Stable Baselines was forked) that incorporates many algorithms and lies at the foundation of a lot of the research being conducted. But on the other hand, we also have ChatGPT, which is completely closed. It is near impossible to reproduce, not only because we do not have the code, but also because we do not have a description of the methods or of the infrastructure. We don't even know how big the dataset they used is, or how big the neural network actually is. So that is really concerning. Google Gemini: same thing. No source, only a black box that we can interact with over the network. There is one big exception, and that is the work done by Meta, where Yann LeCun is chief AI scientist; they did release many LLMs for which the source code is actually available. So the landscape is really divergent.

Now, 'Sparks of AGI': this was a paper written by Sébastien Bubeck, a brilliant scientist, who wanted to investigate what the capabilities of GPT were. For example, he used different versions of GPT, which he prompted to draw a unicorn in TikZ. This is what he reported in the paper, but it is not something we can reproduce. These experiments cannot be reproduced. First of all, we cannot seed GPT: GPT is a stochastic agent, so in order to really reproduce what is going on, we would need the random seed. But also, and maybe even more concerning, a group of influential scientists very recently showed that this kind of black-box access is insufficient to properly audit AI systems. Not having the code, not being able to look at the internals of this big neural network and what is going on when you give a certain prompt, is really not sufficient to understand what is happening, and it is not sufficient to get to safe AGI.

So, to go back to the question on the first slide: will the first AGI be free or open source software? Well, it is really hard to know what drives these companies, but there is certainly no commitment from their side to do so. OpenAI, DeepMind and Anthropic make no mention of free software or open source software. But very recently, Mark Zuckerberg, the CEO of Meta, announced that they will be developing AGI and that they will also make it available in an open source fashion. This was surprising to many, but maybe not to those who follow Yann LeCun on Twitter, because he tweets about this all the time. He thinks this is really important: AI should be open source, should be free software, and this is indeed what they try to do. So, okay, good, we have these different viewpoints. Only a few days after Zuckerberg put this statement out, we already got articles predicting doom, saying that it is very scary that such an influential technology would become available in an open source fashion, even comparing it to nuclear weapons. And if we are talking about existential threats, you could indeed see the comparison: we would also not make the recipe for a nuclear weapon available under a free license. So there is something to it.
So maybe we should ask ourselves the question: should the first AGI be free and open source software? I am not taking a stance here; it is really up for debate. In general, I am very much in favor of making things free and open source software, but this is the first kind of software I really have my doubts about, because it will have a huge impact on society. What would it mean if this were available as open source, so that everybody can access it? This is really something that I believe is up for debate, because AGI will have a major impact on individuals and societies, but also on governments. A government that has AGI will be a different government than one that does not have access to an AGI. All of this is up for debate.

What I do think is that there should be oversight, and currently there is not. There is no governmental oversight. Much of this research is happening in the United States; there have been some hearings in Congress, but they have not gone into much depth. So for the moment, I think there is no oversight. We have companies working on AGI that at a certain point might actually reach it, and what will we do next? This is really something we should be concerned about, and on this I will take a stance.

Quite recently, Satya Nadella of Microsoft made this statement: if you look at inflation-adjusted figures, there is currently no economic growth, and in a world like that, we may need a new input. With this new input he meant AI as the general-purpose technology that drives economic growth. I think these kinds of statements are really quite concerning, because economic growth is not everything. As I said, very positive things might come from AGI, but we also need to be very much aware of the risks. In the end, if we have good things and bad things, it basically becomes a question of how we balance them. To balance them, we need to be able to quantify the risk, and once we have a way to formally quantify it, we also need to make it a democratic question, because this is a democratic problem. Society needs to decide on this topic: how important is it that we as a species remain, and how do we balance that against the good things that might come with AGI?

Maybe something to close with: OpenAI came up with their work on superalignment. That means they expect that we will not only have AGI, but a superintelligent system. They were very proud to announce, I think it was somewhere in 2023, that they were going to work on this superalignment and dedicate 20% of the compute they have, and they have a lot of compute, many thousands of GPUs, to safety. They felt very happy about this statement, but I was confused, because I would think we need to do it the other way around: spend 80% on safety and 20% on capabilities. So that is something I wanted to close with.

To wrap up: I did not take many stances in this talk, I think, but in my opinion, safety should be the first concern. In this regard, we should study the risk, and the balance to be made will be a democratic choice; it is something that societies will need to decide. Very importantly, oversight of the development of AGI is needed, and that is really something that, in my opinion, is lacking.
And the debate on how free and open source software will be involved in this process is really important. I think the free and open source software community and the AI community have a lot to learn from each other. Maybe we will need new kinds of licenses to deal with this kind of technology. Many people say that AGI is something we really need, and I fully agree that it would be a blessing for society if things go right. But in a way, if we can only make AGI safe in a thousand years, then getting it in a thousand years would also be a good thing. The only reason to really want AGI now would be if we need it to solve a problem that is itself an existential risk and that cannot be solved any other way; that would be a clear balance we could make. Otherwise, I think safety should be the main concern. And that closes my talk. If you have any questions, I will be happy to answer them.

So, are there any questions up there?

Hi. I was wondering: assuming AGI becomes open source and accessible to everyone, what material constraints do you think it could face in the future? It might be open source, but then only very few people may be able to run it, because it requires really powerful hardware. What do you think in this regard?

Yes, that is an excellent comment. In the interest of time, I did not include it in this presentation, but it is indeed a real concern. This is already the case with the LLMs that we have now: here at our university, we would not be able to reproduce this research. But this is again where governmental oversight is really important. If our governments really think AGI should happen, then they should also provide the infrastructure to test these things on and make this kind of research possible. And this is somewhere the EU should step up, I think, to make it feasible. That being said, if we collaborate across our universities, we do have a significant amount of compute. So there is some compute, but that would require us to collaborate intensely on this front. But yes, very good question. Thank you.

Thank you for your talk. I am wondering: if deep neural networks are what we have today, what more do we need to get to AGI? Is there some fundamental research problem we still have to crack, or is it just more data, more training, more compute?

Well, the opinions differ. Some people say that the trajectory we are following now will eventually lead to AGI, so just using more data and bigger neural networks; that is one line of thinking. But on the other hand, this is not how we work. We as human beings do not need all the literature that was ever produced in order to learn things; we can learn from just a fraction of the data that is available. So personally, I think we are still missing some fundamental things that will require us to step up in order to get there. But at this point, it is really hard to say. Ten years ago, I would not have expected what ChatGPT has become; I would not have thought that this would be possible. So it is really hard to say. But indeed, one advantage of building the capabilities from a more fundamental base is that we might have more control over what is going on.
Because, of course, if you train neural networks on a lot of data, it is really hard to know what to expect at the end of the training cycle.

I wanted to ask whether an open source model or a commercial model is going to get there first. OpenAI took a very clear stance on this, right? In multiple interviews they have stated that developing AGI will simply require too many resources to be done by anything other than a commercial party. What do you think about that?

Well, I think it is something we need to work on. Having things open source will allow us to look at the code, which will also give us insight into how these things work; that is one thing. But on the other hand, we will also need compute to do this kind of research, and that is something the European Union can indeed step up on. Because indeed, without compute, it will be difficult to run these models. Then again, as the previous question touched on, we might have breakthroughs that allow us to do things with much less compute, which is also very interesting. These deep neural networks are models with a huge number of parameters, so training them is really an infrastructural nightmare. But maybe we can come up with fundamental concepts that allow us to do much more with much less compute.

I had a question. What do you think of stopping or slowing the research right now, until we have proper safety measures in place to be sure that none of these existential crises can occur?

Yes, I think that is what I meant with oversight. There should be debate on how this research should be conducted and on which direction we are heading, and I think that is largely missing at this point. So this really needs to be debated, and I think governments will need to be on top of this rather than following the companies that do the research. Because in the end, if a company develops this kind of technology, who will be responsible? There are a lot of ethical but also legal issues with that, so there is a lot of work to discuss. And as I said, we might have AGI in 10 years, or it might be in 1000 years, but it does not hurt to start thinking about these processes now, in due time.

Hi. You have explained a lot of problems. Can AGI help with those problems?

Sorry? Ah, can AGI help with the problems of AGI? Well, by then it might be too late, of course. There are some circular aspects to it: many of the ideas for alignment actually use the same kinds of technologies and methods that are used to develop capabilities, so in a way that is indeed being pursued. But if you have a sufficiently complex model, this model might be trying to deceive you. If that is the case, it becomes really hard to understand what is going on and whether this model is really working with you or against you. In the end, our brains might be too small to still follow what is happening there, and that is where things get complicated.

Up here. Don't you think that maybe we have a bigger probability of dying from a disease that AGI could cure?

That is a good question. If you have other existential risks, we need to think about their probability. For example, I do a lot of research in pandemic preparedness, and indeed, it might be possible that we get a virus that is very destructive.
On the other hand, in human history we have not had any viruses or pathogens that really wiped out the entire species, and existential risk is about wiping out your species. So balancing that against other existential risks is important, but then we have to make sure we have a formal framework to reason about these probabilities. Otherwise, you are just comparing apples and oranges without actually knowing how they relate to each other. Does that answer your question a bit?

Yes, but we all have a 100% chance of dying from some disease, so maybe AGI could cure disease.

That is a good question, and it was not asked with the microphone, so let me repeat it: I think you said that we now have a 100% chance of dying, and that AGI might fix that. Well, that is true, but that is also what humanity is about; we are mortal beings. Should we put that in the balance: making ourselves perhaps live forever, while on the other hand we might wipe out our species? I am not an expert in ethics, but these are things we should think about, and maybe society should decide on this through a democratic voice. That is not something we can decide here today, but there are different angles to it, that is definitely the case.

Hi. You mentioned two papers: the first one argued that reward might be enough to achieve artificial general intelligence, and the second argued that it may not be enough. Obviously, an artificial general intelligence should be able to learn how to behave ethically in the same way humanity does. Do you think there is a good approach to teaching an artificial general intelligence to behave ethically, like humans do? If it can solve the same sort of problems, surely it can understand the ethical reasoning we use.

Yes. Well, if I could answer whether there is a good way to do that, that would in a way solve the problem, so unfortunately I do not have the answer. We did do research that at least argues that a multi-criteria approach makes a lot of sense, and this is also how we as human beings work: we do not have only one acorn to follow, we have different things that we deem important. So formulating things in that fashion might be a good way forward, but we also make the disclaimer that it is no guarantee that things will work out. A lot of work will be necessary, first of all to get some grip on this probability of existential risk, but also on ways to make it more likely that we are heading towards a safe AGI.

Okay. So, thank you very much. Thank you.