Welcome to Al and Felix. They will be talking about Verilog-AMS in GNU Cap. You can start. Thank you.

Okay. I'm Al Davis, and this is Felix over here. We're doing this presentation on Verilog-AMS and GNU Cap. Here's an outline of the talk. I'm going to give about half and Felix will give about half, so we'll swap places midway through. The outline: what is GNU Cap, some history, the architecture of the program and the plugins, what is Verilog-AMS (something a lot of people need to know), Modelgen, our model compiler that essentially generates C++ models from Verilog, and then the features we've worked on over the last year and a roadmap.

First of all, what is GNU Cap? The project started a whole bunch of years ago when I wanted to do circuit simulation on a Trash-80, a TRS-80. You remember those? An 8-bit computer with a big 4K of memory, and I thought I could do circuit simulation on it. That's what got me started. One characteristic of the Trash-80 is that it's not big enough to run SPICE, quite far from big enough actually. That was the original intent. Ultimately, as the project developed, the goal became to go beyond SPICE, not just to re-implement SPICE but to go past it, and in some ways we have. As it stands now it's very much changed from the original Fortran. It has a very nice plugin-based architecture, written in modular C++, and written in a way that encourages public participation in the code, because anybody can write a plugin. You don't necessarily have to wait, like in many projects, for the project supervisor to accept it. As soon as you make a plugin, it's available.

History: it was actually about 1980 that I got started. Al's Circuit Simulator was a grad-school project, and that was the first version that really had some features in it. It was released under the GPL in 1992, and became GNU Cap, a GNU project, in 2001. They treat us pretty well. Around 2008 to 2010 we re-architected the whole thing to use plugins, as opposed to everything compiled into one big blob, which makes a real big difference in collaborative development.

Beyond SPICE: GNU Cap was an early mixed-signal simulator. The emphasis is on mixed signal, mixed analog and digital, and even mixed different kinds of analog. "Implicit mixed mode" was the terminology I used at the time for what later came to be called connect modules; the Verilog spec came about 10 or 15 years later than this. Fast SPICE: in a way it's the original fast SPICE, in the sense that the algorithms will do partial solutions. If you have a circuit that's only half active, it pushes the inactive side out of the way and simulates just the half that's active, and you get better speed that way than you would with the straight-out SPICE algorithm. And if you don't look too deeply, it still looks like the SPICE algorithm; you may not notice the difference. Another area where GNU Cap has done some work is time-step control. In SPICE, most people will actually specify a time step. You don't have to do that in GNU Cap; it's smart enough to figure that out, so the transient-analysis time stepping works a bit better. Mixed mode: yes, digital techniques for analog.
What I mean here is that we use a matrix solver that solves pieces of the matrix without doing the whole thing, yet from a distance it looks like it's solving the whole thing. In practice, when you're doing circuit simulation, large parts of the circuit are often latent, in the sense that nothing is really happening there, and if nothing is happening there's no need to spend any CPU time computing it. So GNU Cap uses queues and a variety of techniques that you normally only find in digital simulation, but we use them on the analog side. If that fits your circuit, it can help a lot in terms of speed; if it doesn't fit your circuit, then essentially you have SPICE.

For the analog side we have an event queue, like you have in digital simulation, and a matrix solver that can solve only little pieces of the matrix when we don't need the whole thing. That's what I mean by a low-rank partial matrix solver: low rank means, hey, I've got a matrix of 10,000 nodes, but only 30 of the nodes actually need to be solved, and we can do that. It incrementally updates the previous matrix, so it looks like you did the whole thing, but you didn't; it only updates the little piece. This gives us better speed with full SPICE accuracy.

Time-step control: in transient analysis the simulator picks the time step, and it's more automatic in GNU Cap than in other simulators, in particular because we look at things like cross events. A cross event is where your signal crosses a certain threshold. SPICE doesn't get that at all, it does nothing with cross events, but we do, and it helps a lot in getting the time-step control right, particularly in controlling a scary thing known as trapezoidal ringing. It controls that pretty well. And then there's getting started, by which I mean getting the simulator started: algorithms for what time step to use right at the beginning, before you have any information. We do that.

The software architecture: it's in C++, with a big emphasis on the plugins. The plugins are very much part of the system; they're not optional. Everything is a plugin in GNU Cap: the simulation algorithms, the commands, the device models. The GNU Cap analog simulator has no built-in device models at all. All device models are plugins, and I mean all: even the resistor is a plugin. Even the submodels, the blobs you might want to use to build a model, are plugins. So as it starts out, without any plugins, there are no models at all. Basically, from the viewpoint of development, if you want to do models, you're doing plugins. I've heard accusations that a plugin-based system creates a hierarchy of developers, depending on whether you're working on plugins or on the core. No: if you're doing models, you're always working on plugins, even us. So GNU Cap consists of a main program, a library, plugins, and some utility programs. One utility program that we'll talk about a little later is the model compiler, which takes Verilog code and translates it into C++ code that we use as a plugin.
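To make the latency idea concrete, here is a minimal C++ sketch of an event queue that evaluates only the devices where something changed. The data structures and names are hypothetical illustrations, not GNU Cap's actual internals (which tie the queue into the low-rank matrix update).

```cpp
// Hypothetical sketch of latency exploitation with an event queue.
// Not GNU Cap's real code; just the general idea.
#include <cmath>
#include <cstdio>
#include <queue>
#include <vector>

struct Device {
    int id = 0;
    double last_value = 0.0;   // last accepted solution at this device
    double new_value  = 0.0;   // candidate value for the current step
};

int main() {
    const double tol = 1e-6;
    std::vector<Device> devices(5);
    for (int i = 0; i < 5; ++i) devices[i].id = i;

    // Something changed at devices 1 and 3; only they get queued.
    std::queue<int> evaluate;
    int changed[] = {1, 3};
    for (int id : changed) evaluate.push(id);
    devices[1].new_value = 0.5;    // active: exceeds tolerance
    devices[3].new_value = 1e-9;   // latent: within tolerance

    // Evaluate only queued devices; latent parts never consume CPU time.
    while (!evaluate.empty()) {
        Device& d = devices[evaluate.front()];
        evaluate.pop();
        if (std::fabs(d.new_value - d.last_value) > tol) {
            d.last_value = d.new_value;
            std::printf("device %d active: re-solve only its matrix rows\n", d.id);
            // ...a real simulator would also queue its neighbours here...
        } else {
            std::printf("device %d latent: skipped\n", d.id);
        }
    }
}
```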
Ultimately we're heading towards supporting the whole Verilog-AMS language. The GNU Cap library contains the basic stuff you need. It's not commands or anything like that; it has things like the matrix solver, a circuit database, I/O, and an expression evaluator. It's a library of functions you can call and databases you can use to make plugins, which is where the real work is done.

Another point about the plugins is that they actually enforce modularity. Modularity is supposed to be one of those features of coding that makes good code. If you violate modularity rules, the plugin concept doesn't work, because one model has to be independent of another. If I have a resistor model, it can't know anything about the other models except through its intended interface. You can't put in random go-tos and come-froms, or whatever you want to call them, to get from spot to spot, because that violates the plugin scheme. So the coding rules tend to be very strict when you're using plugins, because that's necessary for the plugins to work, and it actually helps everybody.

On collaboration: somebody might make a plugin to do something and submit it, saying, "I'm going to do a pull request, I want you to put this into your product, I have this code." My response is that we don't have to, because you have a plugin: you put it out and it's available. You can have GNU Cap installed on your Linux box, say from the Debian package, and then essentially make your own custom version by twiddling with the plugins, in spite of the fact that the main installation came through your package manager or, say you're at a university, was installed by the computer staff. Every student can have their own little twisted version to make models or whatever they want. And on quality: sometimes when people make code, the quality isn't all that great, and I look at that and say, gee, I'd like to make this available to everybody. With plugins that's not a problem: we make it available and it's optional. If you want it, you use it; if you don't, you don't. So it basically opens up who can participate.

Okay, what is a plugin? How do they work? First of all, they're dynamically loaded; they're dlopen extensions. They just use the normal system call dlopen, and they're standard shared-object modules, like libraries. That's really the essence of what they are. The basis is C++ derived classes. Say I have a type of plugin which is a device model: there's a base class that defines what a device model is, that defines the interface, and I derive a class from that and I've got my specific model, my MOSFET, my motor, whatever I'm going to make a model of. I just load it, and through the C++ derived-class mechanism it's hooked in.
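As a rough illustration of the derived-class-plus-dispatcher idea, here is a self-contained C++ sketch. The class names, the registry, and the interface are invented for the example; GNU Cap's actual base classes and dispatcher differ in detail.

```cpp
// Minimal sketch of a dispatcher-registered device plugin (hypothetical API).
#include <map>
#include <memory>
#include <string>

// Base class defining the device-model interface (the "contract").
class DeviceModel {
public:
    virtual ~DeviceModel() = default;
    virtual double eval(double v) const = 0;                 // i = f(v)
    virtual std::unique_ptr<DeviceModel> clone() const = 0;
};

// A very small "dispatcher": a name -> prototype registry.
std::map<std::string, const DeviceModel*>& dispatcher() {
    static std::map<std::string, const DeviceModel*> d;
    return d;
}

// Registration helper: a static instance of this runs when the
// shared object is loaded (e.g. via dlopen).
struct Install {
    Install(const std::string& name, const DeviceModel* proto) {
        dispatcher()[name] = proto;
    }
};

// ----- this part would live in the plugin's shared object -----
class MyResistor : public DeviceModel {
    double r_ = 1e3;
public:
    double eval(double v) const override { return v / r_; }
    std::unique_ptr<DeviceModel> clone() const override {
        return std::make_unique<MyResistor>(*this);
    }
};
static MyResistor prototype;                           // constructed on load
static Install install_r("myresistor", &prototype);    // self-registration
// ---------------------------------------------------------------

// The main program never names MyResistor; it only asks the dispatcher.
int main() {
    auto inst = dispatcher().at("myresistor")->clone();
    return inst->eval(1.0) > 0 ? 0 : 1;
}
```

In a real build the plugin section would be compiled into its own .so; the static Install constructor runs at dlopen time, which is what makes the externally loaded model look built in.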
The dispatcher is a way of registering a plugin. Say I've written a plugin for my own special kind of resistor. When you load the plugin with dlopen, it registers itself with the dispatcher, and now it can be found and used. It gives you a nice seamless interface, just as if the plugin were built in. So in spite of the fact that all of this stuff is really external to the program, it looks like it's built in.

Then there are wrappers. Sometimes we get code, models, whatever, that were written for something else and have a different interface. We're talking about device models here: MOSFETs, transistors, transmission lines and so on. Suppose I have an old model, say a JFET, and I look in the old SPICE 3f5 code and say, oh, here's a JFET model. We have a wrapper that wraps the SPICE model and makes it look like a GNU Cap model, so we can use it as a plugin. The result is that in addition to models compiled for GNU Cap, we can use models designed for SPICE 3f5, for JSPICE (a special version of SPICE designed for Josephson junctions), and for NGSPICE. We can use them all as plugins, through this concept of wrappers.

I've been talking mostly about devices as plugins, but all of the commands are plugins too: the AC analysis is a plugin, the transient analysis is a plugin, right down to the source languages. Plugins determine the input and output formats. One thing you might want in a circuit simulator is to read SPICE input files, so we have a plugin that reads SPICE input files. We have another plugin that reads Verilog files. And I probably shouldn't say it, because Spectre is a proprietary simulator from Cadence, but we can also read Spectre files, to give you a path out of Spectre if you happen to have it. gEDA is a schematic editor; we can read gEDA files as input and simulate from them. Qucsator, from the Qucs project, which has been kind of dormant, we can read some of those files too. The idea is to use the plugin mechanism to import from and export to other code that wasn't necessarily designed for GNU Cap; the idea is to facilitate sharing. Source languages, measurements: there are measurement plugins too. There are actually about ten different types of plugins that work with GNU Cap to do various things.

Back to wrappers: say I have a C model written for SPICE 3f5, which incidentally is the way Berkeley still distributes their BSIM3 models; BSIM3 is still in C. There are a lot of models out there that may not be the latest version but are still available and still of interest for archival purposes. Sometimes you're working on something and say, I don't want to use the current version of the model, I want to use the one that's four years old.
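The wrapper is essentially the adapter pattern. Here is a small C++ sketch of the idea, assuming the hypothetical DeviceModel interface from the earlier dispatcher sketch; the `spice_jfet_*` names and the placeholder equation are made up, not real SPICE 3f5 code.

```cpp
// Illustrative sketch of the wrapper idea: adapt a legacy C-style model
// interface so it looks like an ordinary plugin model.
#include <memory>

// --- pretend this came from legacy C code -----------------------
extern "C" {
struct spice_jfet_state { double beta; };
double spice_jfet_current(const spice_jfet_state* s, double vgs) {
    return s->beta * vgs * vgs;   // placeholder for the real equations
}
}
// -----------------------------------------------------------------

// Same hypothetical base class as in the dispatcher sketch above.
class DeviceModel {
public:
    virtual ~DeviceModel() = default;
    virtual double eval(double v) const = 0;
};

// The wrapper owns the legacy state and forwards calls, so the rest of
// the simulator never sees the foreign interface.
class SpiceJfetWrapper : public DeviceModel {
    spice_jfet_state state_{1e-3};
public:
    double eval(double v) const override {
        return spice_jfet_current(&state_, v);
    }
};

int main() {
    std::unique_ptr<DeviceModel> m = std::make_unique<SpiceJfetWrapper>();
    return m->eval(0.5) >= 0 ? 0 : 1;
}
```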
That four-year-old model is written in C, and we can read those C models and use them as plugins for GNU Cap. Not only that, but, as we'll get to a little later with the model compiler, the way we like to do it today is to write the models in Verilog, and we can do that too. Which leads on to the model compiler: it generates C++ code from a model description, and the model description is written in Verilog-AMS. So it's time to turn it over to Felix, and he'll tell you about the model compiler and the work we've done recently.

Can you hear me? Test, test. All right. So, after Al's general introduction and the historical material, I'll introduce some of the work we did last year under the NLnet grant, which we already have an extension for. One of our plans is to generate code for other simulators, which happens to be on this slide. The previous Modelgen read a format that Al came up with some 20, 25, 30 years ago, before Verilog even existed; that needed a bump, and we've done that now.

Verilog-AMS: some of you know it, maybe some of you don't. It's a long-running project and a common denominator for a lot of things. It started out as a digital simulation and verification tool and is an industry standard now. The AMS extension builds on top of the digital language, adding the conservative and signal-flow disciplines that at the time were otherwise only known from SPICE simulation. Former SPICE developers thought about that in detail, and I think they did a good job with the standard. There was no free implementation of it before the one we're working on now, I guess; if I'm wrong, let me know. The features are a bit more tricky, because the standard adds things that were already available in digital Verilog, like hierarchical modeling. Computational efficiency in analog mixed-signal simulation doesn't make sense if you can't describe these networks, and the language itself has features that make the optimizations we will need even possible. That way we get true mixed signal, which Al has already explained, and we head towards system-level analog, but still with analog signals in it.

The current implementations are centered around Verilog-A. That started with ADMS, which built a SPICE-targeting model compiler around 2000, and that's the one we're actually trying to replace. In the meantime there's OpenVAF, which has a simplified SPICE interface and builds binary blobs that are loaded into ngspice, I think, maybe Xyce, but it removes things from the SPICE interface that we would need, rather than adding things we already have. So that's not what we want to do, and with the 2023-24 project we kind of overtake these developments in terms of features. Generally, the standard allows analog compact modeling, a field Wladek has put much time into, and hence we have Verilog-A compact models we can now use; without that work it wouldn't be possible, and a model compiler without models would be a harder start. Beyond Verilog-A, that's the "MS" in Verilog-AMS, and Al has been tinkering with these ideas since the early days.
The standard document we currently have is from 2014. I think there will be an update at some point; maybe we'll have to do our own. But it's a surprisingly stable standard, and it makes a lot of sense when you study it, which I had to, because I had to implement it. This Verilog-AMS model compiler was born in 2023, we released the first master release last month, and we've got funding for 2024 to add more. I'll go through the features we have by now; that's the overtaking bit.

We support hierarchy. We have paramset, an essential component of the Verilog-AMS standard, and we can compile them. We have binning, which is available in SPICE as well, but there it's much more difficult to use and not standardized. We have compliant sources: Verilog has different types of sources for voltages, flows and currents, switching sources and whatnot, and making these compliant with the standard was a bit more work than anticipated. We support tolerances: different quantities in the system can follow different tolerances. You can have a temperature, a low voltage and a high voltage, and they all have different tolerances; the standard, and our implementation, account for that. We add the time-step control on the model-generator side rather than leaving it to the simulator. And extensibility, which I'll get to.

I've said a lot and don't have much time, so I'll skip to the examples, which I want to highlight because they're important. A compiled paramset means, in addition to the module overloading you get with paramset, you say: I've got a component, I want to build a new one, and these are my parameter overrides. You put that paramset statement into your netlist or your file, and it generates the new, possibly simplified, model, and we don't need to deal with model cards or that syntax anymore. For example, here we have ranges, which gives us a way to bin, and a way to reuse code as well; we'll get to code reuse anyway.

The second phase of paramset is pruning. You take your paramset, you take your model, and you combine the two before you compile. That way, structures in the model get removed. Shown here is a simple capacitor model: it has the describing equation, the ddt statement here in the else branch, but it has an if branch as well, and under some conditions it works differently. We want to get rid of that, because it interferes with performance, so we put in a paramset that doesn't set the IC parameter. That condition is never satisfied, so it's simply pruned, and that is the code we send to GCC to compile. In the capacitor example it's very simple, it's one line, but imagine you have a million instances of some device. It could be a transistor model with lots of lines, and it computes the same constant value in every iteration of the simulation, again and again, because you didn't prune it; the simulator doesn't know about any of it, because the compiler compiled it all in. That's a lot of work. And if you load pruned models, what do you get? It runs faster, surprise, and you get the same result, because whether you pre-compute the additions, the exponentiations, the logs, whatever, doesn't matter; the model is simpler. As a corollary, compilation time doesn't matter: you can take as much time as you want to compile and optimize.
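As a loose analogy in plain C++ (not actual Modelgen output, and not the Verilog-AMS source), this sketch shows why pruning helps: when the paramset fixes a parameter at compile time, the corresponding branch simply disappears from the compiled evaluation, so the per-iteration work shrinks.

```cpp
// Analogy for paramset pruning: "ICGiven" plays the role of a paramset
// that does or does not set the IC parameter. When it is false at compile
// time, the whole branch is removed from the generated code.
#include <cstdio>

template <bool ICGiven>
double cap_charge(double v, double c, double ic, bool uic_analysis) {
    if constexpr (ICGiven) {
        // only present when the paramset actually sets IC
        if (uic_analysis) { return c * ic; }
    }
    return c * v;          // the ordinary describing equation, q = C*V
}

int main() {
    // Pruned variant: the IC branch does not exist in this instantiation.
    double q_pruned = cap_charge<false>(1.0, 1e-9, 0.0, true);
    // Unpruned variant: the branch is present and taken here.
    double q_with_ic = cap_charge<true>(1.0, 1e-9, 0.5, true);
    std::printf("q_pruned=%g  q_with_ic=%g\n", q_pruned, q_with_ic);
}
```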
On hierarchy: we have compact models with these relational statements here, and we have contribution statements, math statements, which you could replace by just adding sub-device instances. We can do that, sure, but we can also mix them: you can have this and that and put it into one box. That's the code reuse: you just reuse the resistor you already implemented, add a capacitor to it, and you get your low-pass filter. And you should reuse models, because then you don't need to compile them again, and you have a smaller memory footprint at runtime. You've got one diode for all your 1,700,000 transistors in your netlist, and you need to validate it once; you don't need to validate every single implementation of a diode. And that's a quote from the LRM, about extensibility: the LRM explicitly says it wants to enable extensions, and that's how I implemented it. So you can go and implement your own functions in the end, and the same rules apply as for a core-team developer.

Here's the roadmap, and maybe I'd better leave it there. Shall I continue for two more minutes? Okay. We have three sub-projects in the project we've already got funding for. We need to work on the analog part a bit, but we also add logic modeling, which would kind of round out the whole package. On the simulator side, we currently only have this simulator as the main target. Disciplines, natures and connect semantics are defined in the Verilog-AMS standard; they define how to model the interconnects between digital and analog parts of the circuit, and the simulator needs to evaluate rules, following the standard, to place the gadgets that do the transformation between the disciplines. The third package is also important, because nobody else does it: we need to be able to interoperate across tool boundaries. For example, we need to be able to store a netlist and send it to somebody else so they can make use of it. That is something we need to define this year. We also need to target other simulators with our model compiler: currently it writes models that only run with GNU Cap; it will write models that also run with ngspice, if we're lucky. We shall see which one we pick. The device wrappers will also be extended, so we pick up more of the current work happening in the field, like pushing the ngspice support to the current version, maybe.

We have the plugin interface, everyone can help, and we'd also really like to see a wish list. If somebody says, oh, generate, generate is great, that triggers me to read about generate. The standard is many hundreds of pages long, I don't know every single aspect of it, and if something is needed, that makes me look at it, and then generate moves to the top of my to-do list, for example. Or somebody says, oh, there are these Laplace filters, a nice way of specifying linear filtering; that made me curious and I implemented them. They're not ready for release, but they'll be out very soon. Or somebody says, I don't know, give me an example, and I look at it and implement it because it makes me curious. That's how it works. Thank you very much, thank you for your time, and sorry for the additional time I took.

Thank you, Mr. O. Thank you very much.