All right, folks, we're going to start. David, it's you. Hello. Hi. I'm David, or Yoric. I work at Pasqal; I'm going to tell you a few more words about that in a minute. And I am here to tell you about an ongoing work at Pasqal called Qadence. As you can guess from the name, and possibly from the logo, it's related to quantum computing. Before I proceed, I would like to stress one thing: none of the things I'm going to talk about are my work. For one thing, I joined Pasqal recently, with a background in programming language theory, compilers and things like this, and this project has not reached the stage where we can use programming language theory or compilers just yet. But maybe someday.

So, a few words about Pasqal. What do we do? We build qubits. More generally, we build quantum computers. We build quantum algorithms, quantum tools, quantum teaching materials. I forgot to mention that we are a private company, but we are a spin-off from several laboratories, so there is a strong research background at Pasqal. And, importantly for today, we build open-source tools related to quantum computing. If you're interested in knowing what the inside of a quantum computer looks like, well, that's part of the inside of one of ours. I think this one is called Fresnel, but I'm not sure. You can see lots of lenses, which suggests that lots of lasers are involved. Yes, lots of lasers are involved. We're not generally allowed in this room because of the class 4 lasers. Way too dangerous. Still, cool to have.

So if you're like me, you might have a question: what the heck is quantum computing? I mean, we all hear about it. A little bit. Well, I hear about it every day, but I'm paid for that. We hear about it in mass media and everywhere on LinkedIn, etc., and it's still not clear. At least it wasn't clear to me; it might still not be entirely clear what quantum computing is all about. So the first thing is that quantum computing is about computing with qubits, not with bits. An important part of it is that quantum computing is very much research. You may have seen many announcements, each of them informing us nicely that the last few problems in quantum computing have been solved. I'm sure we are going to keep seeing these announcements for the next 5 to 10 years. Quantum computing is currently a very active research domain, but it is a research domain. And while there are companies that are actually building quantum hardware, we are not there yet. It's not something you can buy at the local shop, or even if you go further down the road. And it's probably going to be a few years before we can do anything really useful with quantum computers, except in a few domains; I'm going to mention that a bit later. Still, it's extremely exciting. And when I say it's open research, it's open research for the hardware, and it's open research for the algorithms. These algorithms, most of the time, are designed based on mathematical models of quantum computing; there are a few algorithms, but not many, that actually run on quantum hardware. And there is lots of research on compilers and tools, but again, usually based on mathematical models and simulators. There is lots of hype, too, around quantum computing. On the upside, it means lots of credit for quantum computing, lots of funding, which is why companies such as Pasqal and a few others can do their work. It's also thanks to this that a number of academic laboratories can do their work.
And it's a good time to be working on quantum in general and quantum computing in particular. It does make things a bit complicated when you have to read a press release, though: it's hard to understand whether the new problem that has been solved on a mathematical model has been reproduced in labs, or is actually ready to go into production.

Why do we care about quantum computing? Well, we care about quantum physics in computing anyway, because CPUs need to deal with quantum phenomena on a daily basis. One of the reasons why we cannot make CPUs that are much faster anymore is that we have hit some physical limits. I'm not exactly sure which ones, I'm not a physicist, but they exist. So we want to go for the next generations of hardware, and at some point you can either continue fighting quantum physics or try to embrace it. That's one of the reasons. Another reason is that there are hopes that quantum computing will be faster. I say hopes because, despite some papers, including a famous paper by Google from a couple of years ago, we don't know yet. There are good reasons to be hopeful that for some classes of algorithms we will have something very fast, but we're not sure yet. Similarly, we hope that we can be energy efficient. I'm going to show you some algorithms later during this presentation, and there are good reasons to be hopeful that we could possibly, someday, replace entire data centers working on very specific algorithms with something much smaller. Again, this needs to be validated in labs and on industrial hardware; we're not quite there yet. And also simply because we don't really know how to keep scaling hardware the way we do at the moment. If you look at what's needed to train ChatGPT, or at least an old version of ChatGPT — I assume it's worse now — if I recall correctly, they were using 10,000 boards for the training part, each of them carrying I don't know how many GPUs, each of those carrying I don't know how many cores. And I don't know how long training lasts. So how we do it at the moment is we expend as many resources as we can, which is not something that can last forever.

So, I mentioned bits: 0, 1, easy. Qubits: three-dimensional, more complicated. Plus you have the question of whether a qubit is 0 or 1, which is a complicated phenomenon in itself — measurement — and I'm starting to have a few intuitions about it, which probably means that I'm wrong.

There are two flavors of quantum computing. The first flavor is digital quantum computing. This is a program in digital quantum computing. If you look at it, you'll see something that looks very much like a circuit; well, that's why it's called a circuit. You have quantum data coming in, conveniently, from the left. All these RX, RY, RZ boxes are gates, which operate on the qubits: all the ones prefixed with R are rotations on the sphere. The X and Z gates — and I could also have had Y — are symmetries on the sphere. There are plenty of other gates, but these are the ones I happened to have an example for. And at the end, you might be able to do some measurement. In practice, you'll have to run your experiment many times, because what you end up with is probabilities. So you need to measure probabilities by taking pictures, essentially, which means you have to take many pictures. So, as I mentioned, a program is a circuit.
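To make the digital picture a bit more concrete, here is a minimal sketch of what such a gate-based circuit could look like in code. This is not the circuit from the slide; it uses the Qadence Python API as I read it from the documentation (RX, RY, RZ, CNOT, chain, QuantumCircuit, QuantumModel, sample), and the gate choices and angles are purely illustrative.

```python
import torch
from qadence import RX, RY, RZ, CNOT, chain, QuantumCircuit, QuantumModel

# Two qubits, a few single-qubit rotations and one entangling gate,
# mirroring the kind of R-prefixed gates mentioned above.
block = chain(
    RX(0, torch.pi / 4),   # rotate qubit 0 around the X axis
    RY(1, torch.pi / 3),   # rotate qubit 1 around the Y axis
    CNOT(0, 1),            # entangle the two qubits
    RZ(1, torch.pi / 2),   # rotate qubit 1 around the Z axis
)

circuit = QuantumCircuit(2, block)
model = QuantumModel(circuit)

# Measurement is probabilistic, so we "take many pictures":
# sample the circuit many times and look at the counts per bitstring.
samples = model.sample(n_shots=1000)
print(samples)
```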
And for almost 10 years, I think, there have been programming languages designed to create those circuits, or at least to give a syntax to the circuits, and possibly to do modeling and simulation on them. But the big snag is that the hardware isn't there yet. One of the big difficulties that digital has is noise. I know it's not the only difficulty, but that's the one I remember, which is already good for me. Again, I'm coming from a different field; adapting is complicated.

On the other side, you have analog programs. This is an analog program. This is actually part, I believe, of the test suite of one of our computers. The test here is: hey, can we make a program that looks like our logo? Needless to say, it's probably not a very useful program. But we need to manipulate things at a very fine level. So in practice, when you're dealing with analog, a program is not a circuit — well, it's also called a circuit, and some parts of it we'll model as a circuit — but in practice, it's geometry and pulses. It might be different for other kinds of hardware, but I think the ideas are generally the same. When I say pulses, I mean laser pulses, so you have to set up a frequency, a shape, and things like that, which is a bit complicated. I'm not going to claim that I have any understanding of how it works.

And why do we care? Well, there are two reasons. One of them is that this actually takes advantage of the hardware. It maps extremely naturally to hardware constraints and to some classes of problems. Off the top of my head, there are a number of graph algorithms that map very naturally to this. I showed you a two-dimensional representation, but it could also be three-dimensional. So graph algorithms, and a number of optimization algorithms. I'm going to show you a little bit of an example later. And if we have a problem that maps naturally to an analog circuit, the big advantage is that this is something that you can mostly run today on some machines. Not everything can be run, but we're much closer to this than in the digital world.

And one thing I should mention, if you are familiar with the history of computing: every computer nowadays is digital, but before World War II, there were already many computers, and they were pretty much all analog. If you look at the battleships of the UK, US, French, or German navies, they all had onboard computers that were electromechanical and that were used for aiming precisely; they were computing ballistic trajectories. It worked before we knew how to do digital, and it worked because this specific problem they wanted to solve had a very nice physical, electromechanical representation. In the end, they disappeared — it took a few decades for them to be replaced by digital, because digital was so much more generic, but it took lots of time for digital to catch up with analog. So this is why we are interested not just in the digital flavor, which is going to be much easier to program once it works, but also in the analog one, which might give much better results in some specific cases and which is much closer to being something we can actually use. Of course, the problem is: how do you program that? I mean, that logo was not very intuitive. Is it easy? Well, no, not really.
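For a feel of what "geometry and pulses" can look like at the programming level, here is a rough sketch of defining an atom layout and a global analog operation. The names (Register.from_coordinates, AnalogRX) are my reading of the Qadence documentation and should be treated as assumptions; the coordinates are arbitrary, and actually running such a circuit would go through an emulator or a pulse-level backend rather than the plain construction shown here.

```python
import torch
from qadence import Register, AnalogRX, QuantumCircuit

# The "geometry" part: lay out three atoms in the plane.
# In analog mode, the distance between atoms determines how strongly they interact.
register = Register.from_coordinates([(0.0, 0.0), (0.0, 8.0), (8.0, 0.0)])

# The "pulse" part, seen from very high up: a global analog rotation that acts on
# the whole register at once, instead of gate-by-gate operations on single qubits.
block = AnalogRX(torch.pi / 2)

circuit = QuantumCircuit(register, block)
print(circuit)  # execution would happen on an emulator or be compiled to laser pulses
```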
And I apparently accidentally removed one of my slides, which was a big differential equation, showing on one side the interactions between atoms and on the other side the interactions with the laser itself. I have no idea how someone can go from this differential equation to actually writing an algorithm, but some people succeed, and they have my complete admiration.

Anyway, that's why we — and when I say we, again, I mean they — have devised Qadence. Qadence is a toolkit. It's designed for experimenting. You can experiment with digital circuits, with analog circuits, and you can mix them. Once you have written your circuit, you can simulate or execute it. When I say simulate — the word is a bit overloaded — I mean an emulator running on your CPU or GPU that's going to pretend it's doing quantum physics, usually at a fairly deep level; you can pick a level. Or execute: well, if you end up in the subset that actually runs on the machine — the one you need big protective glasses, and a lot of care, to go and look at; we have a few of them, not really in the basement, but we do have them — you can compile your program to essentially a sequence of laser pulses and then send those laser pulses to the computer for execution.

We do that because there are many experiments that still remain to be done; we're not quite there yet. One of the reasons — I'm putting it first because it's the one I'm most interested in, but it's not necessarily the main one — is that this is the kind of thing that can help us find out how to design a programming language that is both usable, ideally by human beings, and executable on the hardware, which is something that doesn't really exist at the moment. Another thing is that, even without that, just having some abstractions on top of laser pulses — for instance, we have libraries of geometries — makes life easier when you don't have to actually solve that differential equation all the time. An interesting aspect of simulating and executing circuits is that we can run optimizations, for at least two different meanings of optimization. One of them is how we deal with noise. Noise is a big problem with quantum computing: if you put your atoms too close to each other, they're going to interact; if you put them too far away from each other, they're not going to interact. How do you send exactly the data you want, and not the data you don't want, from one to the other? That's the kind of thing we can simulate using Qadence or lower-level tools, or possibly other tools. And the other meaning is something I'm going to show you very soon — if it still works. So at some point, I assume that some people will ask questions; don't be surprised if my answer is: I have no clue.

Okay, so let's look at a few demos. This is an example of a graph. Let's re— yeah, okay, this is a random graph. We want to solve the MaxCut problem. It's a well-known problem in graph theory; the detail is not extremely important. We want to find the best places to cut the graph according to some criteria. This can be reformulated as maximizing this value. And someone — I was sure I had written my sources somewhere; okay, sorry, I didn't sort my sources — someone has devised an algorithm to do that. So this starts by waiting... yes, after the wait. So we derive a circuit from the graph; there are as many qubits as there are nodes, if I recall correctly.
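The formula on the slide isn't captured in the transcript, but the standard way to state the MaxCut objective — presumably the value being maximized here — is the following. This is a reconstruction in the usual textbook notation, not the speaker's exact slide.

```latex
% Assign a value z_i \in \{-1, +1\} to every node i of the graph;
% an edge (i, j) is "cut" exactly when z_i \neq z_j.
\max_{z \in \{-1,+1\}^n} \; C(z) \;=\; \sum_{(i,j) \in E} \frac{1 - z_i z_j}{2}
% On the quantum side, each z_i is promoted to a Pauli Z_i operator, turning C
% into a cost Hamiltonian; the algorithm then tries to make the bitstrings that
% maximize C the most likely measurement outcomes.
```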
And we do a number of operations whose objective is to eventually make some configurations more likely than others. I couldn't tell you exactly how it works. Many operations — many, many operations. And in the end, we can measure stuff. So once we have this, we can represent the quantity we want to maximize as an optimization problem for one of the many different... what? Okay. Demo effect. Hop. And so this code is basically PyTorch, for people who are familiar with PyTorch. And then we can run what we call training in that case; we can run the optimization problem. So what we're going to do is iterate. There is a theorem in the paper, which I forgot to cite, showing that this computation is eventually going to converge. There's no guarantee that it happens after only 100 iterations, but in practice, for a demo, it seems to work. And if we pick the configuration that was most likely — again, there is this problem with the cat, which might or might not get out of the box — if we pick the configuration that is most likely, it happens to map to the solution that we're looking for. And here, so we need to cut in such a way that... something, something. I don't remember exactly how to read this result. But the interesting part is: hey, quantum algorithm, give me the grants.

So that was a digital algorithm. I'm going to show you something that has a very similar outline. We want to fit a curve. We're just going to take the curve x maps to x², and see if we can teach a quantum circuit to basically represent this curve. For this, we're going to use an ansatz-based quantum learning algorithm, which exists. Basically, we're going to try and optimize a number of parameters — a number of angles here — and see what we can do. So again, let's run our circuit. What is going on? It was working this morning. Yes. Yes, no more error messages. Okay. Okay, so this is the initial state of our quantum circuit. The dots are the samples that we want to approximate, and the curve is the initial result. As you can see, it's not exactly a perfect match just yet. So we're going to run a few steps of the learning algorithm. This one is just pure PyTorch, just regular optimization. And usually it works. Normally it works. I'm going to pretend that it has worked and I'm going to start. Yep. What the...? Yeah. All right. So after a few steps of learning, this is what we get. We have an orange curve that, while not absolutely perfect, actually matches the blue dots fairly well. So okay, it's not time to call the Nobel committee for that. But this has applications. Of course, this is a very simple example with a very simple curve to fit. But if you look at it with a little tolerance for approximations, this is kind of the thing that neural networks are doing: the learning phase is something kind of like this. In fact, there is an entire subdomain of quantum computing called quantum machine learning, and this is, I believe, one of the simplest algorithms of quantum machine learning. If you look at the API documentation of Qadence, you will actually see a QNN module — quantum neural networks. And this is a very active subfield of an already very active field.
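Since the demo code itself isn't in the transcript, here is a small sketch of what this "basically PyTorch" curve-fitting loop could look like: a feature map encodes x, a variational ansatz holds the trainable angles, and an ordinary torch optimizer adjusts them. The Qadence names (FeatureParameter, hea, QuantumModel.expectation) are taken from my reading of the documentation, and the ansatz depth, learning rate, and step count are illustrative, not the demo's actual settings.

```python
import torch
from qadence import (QuantumCircuit, QuantumModel, FeatureParameter,
                     RX, Z, chain, hea)

n_qubits = 2

# Feature map: encode the classical input x as rotation angles on each qubit.
x = FeatureParameter("x")
feature_map = chain(*(RX(i, x) for i in range(n_qubits)))

# Variational ansatz: a small hardware-efficient ansatz whose angles
# are the parameters ("the number of angles") that we will train.
ansatz = hea(n_qubits, depth=2)

circuit = QuantumCircuit(n_qubits, chain(feature_map, ansatz))
model = QuantumModel(circuit, observable=Z(0))

# Samples of the curve we want to fit: y = x^2 on [-1, 1].
x_train = torch.linspace(-1.0, 1.0, 20)
y_train = x_train ** 2

# "Basically PyTorch": a regular optimization loop over the circuit angles.
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for step in range(100):
    optimizer.zero_grad()
    y_pred = model.expectation({"x": x_train}).squeeze()
    loss = torch.nn.functional.mse_loss(y_pred, y_train)
    loss.backward()
    optimizer.step()
```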
Because if the hypotheses we have — if the models we have of energy use and computational power — are correct, this means that hopefully we could replace these tens of thousands of cores used by ChatGPT, or whatever its competitors are named, with something that consumes way less energy and hopefully runs at least as fast.

So, time to reach conclusions. What do we have? We have a toolkit designed for exploring the design of quantum circuits, both on hardware that already exists, on hardware that we believe is possible and might come into labs, or out of labs, within the next five years, and on purely hypothetical hardware — because why not, experiments are interesting. We have this mechanism of circuit optimization, which I've shown you; I showed you how it could be used to solve problems or to approximate curves. It also has other applications, such as the problem of noise. I mentioned noise between atoms, for instance: sometimes you want to optimize based on noise models and make your things work, because you know that your model isn't perfect — or at least your high-level model isn't perfect — and you want to go to a lower-level model. And again, it's not a programming language, but I hope that maybe someday it could serve as the beginning of one. There is ongoing work on enriching everything: writing libraries for domain-specific problems, for known algorithms, for geometries, etc. There are many questions. There is ongoing work on compilation, on the subset that we already know how to compile and on larger subsets. And of course, we're trying to make this easier to program — and when I say we, of course, I mean them. There was a paper recently accepted and presented at PLanQC; if you're interested, it's on the last line here. And all the documentation and the source code are on GitHub. So thank you for listening.

APPLAUSE

We have like four minutes for questions, my friends.

— I'm sorry, did you catch it? Was there any attempt to implement the circuits that were mentioned on an actual problem?

Let me repeat the question for the mic. Yes, the question is whether these particular circuits have been implemented on hardware. The answer is: I have no idea, I'm sorry. LAUGHTER I believe... No, sorry, I'm not going to say random crap. I don't know. Right now, the main use case is experimenting with this. But again, for the second algorithm, for instance, if we can manage to make it scale to very large... to a large number of curves and more complicated curves, there is a potential application to basically machine learning in general, not just artificial intelligence. As for the former one, I can't think of any specific example, but I know that graph algorithms are very interesting for many things, because, for one thing, there are good reasons to believe that they can be executed on existing or almost-existing hardware, and there are many important problems that can be modeled as graph algorithms. For instance, we are in an energy crisis at the moment, and energy distribution problems are graph algorithms. I've heard of people who want to work on that; I have no idea whether they actually do. Also for modeling car traffic in cities, things like that. I couldn't tell you more than that.

Okay, I think we should stop here. Thank you very much. Thank you very much.