Thank you for being here. I was not expecting such a large room of JavaScript developers, and nothing has broken yet, so it's unbelievable. So yeah, I'm here to talk about strong dynamic type checking for JavaScript, which may sound a bit weird, because you are not expecting "strong type checking" and "JavaScript" in the same sentence, right? But I will show that we can do something about it.

So first, what is strong type checking? What do we mean by that? Let's do a bit of vocabulary. I tried to find a single definition of strong type checking online and couldn't find one; it's a never-ending argument about which language is strong and which one isn't. But I found two commonly accepted definitions. The first one is that strong type checking means you have an explicit binding between a variable name and a type, so the variable and the type are bound together. That means that every time you refer to the variable by its name, you get a reference to data that matches the type you expect. The second definition is more about programming language features: no loss of type safety due to looser typing rules. For example, JavaScript has implicit type coercion, which means it is perfectly fine to use the plus operator between a number and a string. For JavaScript, it's just string concatenation, automatically casting the number to a string. But in other languages, like Python, you get a type error. People take that as an example to show that Python is "stronger" than JavaScript.

When it comes to JavaScript, whichever definition you pick, you might say that JavaScript is not a strongly typed language, and you would be right. JavaScript is based on dynamic typing: you can assign a variable a value of one type, then move to another type, and go on, it doesn't matter. Because types can change during program execution, types in JavaScript are quite unpredictable. The creator of the language, Brendan Eich, justified this choice by saying that developers can be more expressive with dynamic typing, which means they can get to a result faster, but he also agrees that it is more error-prone. That's the image I took for static versus dynamic typing; I think it sums it up pretty well.

So yeah, JavaScript and strong type checking: not really. Actually, every time you see someone complaining about JavaScript or mocking JavaScript, it is usually with one of these memes, right? These are some of my favorites. Maybe you know others.
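For reference, a few of the implicit coercion results behind those jokes; this is just an illustrative sketch, with the results shown in comments:

```js
// Implicit type coercion in JavaScript: no type errors, just surprising results.
console.log(1 + "2");    // "12"  – the number is silently cast to a string
console.log("5" - 1);    // 4     – but minus coerces the string to a number
console.log([] + {});    // "[object Object]" – both operands become strings
console.log(0 == "0");   // true  – loose equality coerces before comparing
console.log(0 == []);    // true
console.log("0" == []);  // false – the coercion rules are not even transitive
```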
But as you can see, almost all these jokes about JavaScript are basically about the lack of strong type checking, which is too bad. So some people decide to just get rid of JavaScript and go to Kotlin or .NET or whatever language we have seen this morning. But I think the most common answer to this problem has been TypeScript, right? I mean, this thing was invented specifically to address this issue by adding optional static type checking on top of JavaScript.

So, how many of you are using TypeScript? Raise your hand. Wow. I was expecting that: almost 100% of the people in this room are using TypeScript. I mean, why wouldn't we use TypeScript? It's so popular. Almost the whole ecosystem, all the libraries on npm today, have been converted to TypeScript or provide TypeScript definition files. We have seen this kind of exponential growth in popularity over the years. TypeScript is 10 years old now, it will be 11 in five days actually, and it has never been so popular.

So did we solve the issue of type checking JavaScript with TypeScript? I would say not entirely, and I will explain why. Here are some things I learned about TypeScript after 10 years. The first one is a bit of a shock: type checking is not actually the main selling point of TypeScript. The main selling point of TypeScript is developer experience. If you have practiced TypeScript, and the whole room has, you have seen the improvements in your development experience: being able to explore an API using autocomplete, detect typos in your code, refactor your code more easily thanks to the static type annotations, or have documentation right inside your IDE. Some compilers use static type annotations to bring compile-time optimizations, which is great. You get good type inference in TypeScript, so you don't have to write all the static type annotations all the time. And we have seen some innovative uses of TypeScript; for example, the Angular community uses TypeScript annotations a lot to do code generation. All of this is part of developer experience, and it's great, but it's not really about type checking anymore; it's much more than that.

I figured out that type checking was not the main selling point of TypeScript, or at least not anymore, when I looked at the esbuild project, one of the most important JavaScript projects of these last years. esbuild is the famous bundler; maybe some people in the room are using the Vite development server, and Vite is based on esbuild. And does esbuild support TypeScript? Of course it does.
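As a rough illustration of what is described next, esbuild's transform API can be given a TypeScript snippet containing an obvious type error; this is only a sketch, assuming esbuild is installed, and the snippet itself is made up:

```js
// Sketch: esbuild strips type annotations without ever type checking them.
const esbuild = require("esbuild");

const source = `
  const quantity: number = "not a number"; // a blatant type error
  console.log(quantity);
`;

// loader: "ts" tells esbuild that the input is TypeScript.
const { code } = esbuild.transformSync(source, { loader: "ts" });
console.log(code);
// Prints plain JavaScript with the annotation removed – no error is reported:
//   const quantity = "not a number";
//   console.log(quantity);
```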
Everyone is using TypeScript, so of course it supports it. But the fact is that esbuild does not actually do any type checking when it compiles TypeScript code. All it does is look at the TypeScript code, find the static type annotations, and get rid of them. That's all it does, nothing else. And the esbuild authors say that running the whole TypeScript compiler at build time is actually a waste of time today, because thanks to this developer experience, developers already have a whole integrated type-safe environment during development. That means you don't need to do it a second time at compilation.

The second thing I learned about TypeScript ten years later is that type safety in TypeScript can be easily defeated. What I mean by that is that in many scenarios in your application you are relying on type assertions: those little elements like the "as" keyword, or the exclamation mark that says "I assure you, compiler, this is not null". None of these things bring any type safety. It's just the developer telling the compiler, "trust me, I know what I'm doing". And most of the time, we don't. You can find these type assertions easily in any web application. There are actually many places where you are forced to use this kind of assertion because of the nature of the web. You can have a perfectly type-safe TypeScript application and still have to deal with a lot of unpredictability. Dealing with the unpredictable is most of your job as a front-end web developer. And by unpredictable, I mean only known at runtime: it changes every time, for every user, and so on.

For example, your application may have back-end services, may call APIs, maybe third-party cloud providers. And you are trusting the responses of these servers, right? You are not validating any of the server responses on the application side. So this could break. You are also relying on a browser, and some browsers have bugs and quirks; they do not fully support the standard web APIs. I chose this logo for a reason, you can guess why. You may also have client-side stored data for your application: maybe you are storing some user preferences in local storage, or some cached user data. This is likely to break as well, because sometimes the cache is outdated, it comes from an older version of your application, or maybe it has been modified by the user themselves, who knows. And finally, maybe the most unpredictable part of every developer's job. Did you guess it? The user. The user can be very unpredictable.
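To make this concrete, here is a small sketch of the kind of blind trust just described; the endpoint, field names, and storage key are made up for illustration, and in TypeScript the usual move is to silence the compiler with an "as" assertion on the parsed JSON, which validates nothing at runtime:

```js
// Hypothetical example: the endpoint and response shape are assumptions.
async function loadOrder(id) {
  const response = await fetch(`/api/orders/${id}`);
  const order = await response.json(); // trusted blindly, never validated

  // If the server changes its schema, returns an error object, or sends back
  // stale data, this line fails far away from the real cause:
  return order.product.name.toUpperCase();
}

// Same story for client-side storage: the cache may be outdated or hand-edited.
const prefs = JSON.parse(localStorage.getItem("userPrefs") ?? "{}");
```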
If you have an application in production and have a look at the production database, you will always find some crazy stuff and wonder: how did it get there? I don't understand. That's the user. All these things need to be validated, otherwise it's a recipe for disaster, and they can break your perfectly type-safe TypeScript application.

So if you look at TypeScript and wonder, how can I type check all of these things? No luck. It's not a compiler problem, because all these things happen at runtime; you cannot anticipate them. It's an application problem, not a compiler problem, which means that TypeScript is completely helpless here and it is up to you, the developer, to find a solution to these problems.

So how do we deal with runtime errors? The truth is that most of the time, we don't. Maybe in the best scenario you write some custom logic to validate the data and try to fix things the best you can. But most of the time you have so many different possible runtime errors that you end up with try/catch blocks showing error messages to the user, telling them to contact you or send an email if something bad happens. I have also seen the kind of global unexpected-exception handler that just sends all the runtime errors you didn't catch to some monitoring service, where they get added to a log file that you check maybe once a month, look at a bunch of errors, decide it's not worth your time, and move on to something else. I don't know if some of you do that, but it happens, right? It's too bad, because we could figure out a way to solve all these runtime errors.

So, back to this idea of strong dynamic type checking. How can we do that? I'm sticking to the definition of a strong binding between a variable name and a type here. What if we could do this kind of strong binding, but at runtime? What would it mean? First, it would mean that the type errors we get would still be runtime errors, because they happen at runtime. But at least they would be more explicit and caught earlier. Instead of "undefined is not a function", you would get an error message like "this variable has been assigned undefined and it was not supposed to be". Instead of pointing to the consequences, it points to the source of the problem, and that helps a lot to reduce the investigation work you have to do as a developer when debugging. The second thing is that this strong binding should not be just a one-time validation pass. I'm sure that there are plenty of JavaScript libraries that do that.
That is, you throw a bunch of data at them and they validate it, returning true or false or just throwing a type error. But we need more than that. We actually need this binding, which means the type information has to live along with your data, and the type checking has to run again on every reassignment or mutation of that data. And finally, the goal is to get rid of silent errors, because many mistakes in JavaScript are just silent: you don't notice them until it's too late. It can also make runtime errors more predictable, and so more manageable, from a developer's point of view.

This is the main reasoning I had when I worked on the open source library I want to present to you today, which is ObjectModel. It is definitely not a new project: I've been working on it for the past eight years, and I'm at version 4.4.1, which means I have rewritten the entire thing about four times now. It's obviously the hardest thing I have had to code in my life, I would say. It's very complicated, but it works, so I'm glad. I would also say that it is my most "used for real" open source project. By "used for real", I mean that it is used in business projects: I use it in my professional projects, other people use it as a fundamental component of their business projects, and I receive lots of positive feedback about this library. You have an example here.

So what does this library do, and how do you use it? It's pretty simple, actually. The first thing you do is define your dynamic types, which I call models; I'll explain the difference later. Let's say you are working on an e-commerce application. You can declare an object model for the customer order, saying that it has a product with a name property which is a String and a quantity which is a Number, and also an order date. After you have declared this model, you can bind it to some data; this is where you get the strong binding between the type and the variable. Here I used the constructor pattern, so you call new Order. I think it's probably the most intuitive form of binding for the developer, and it also lets me store the type information at the prototype level of the object; that's how I get this strong binding, since we already have a binding between objects and prototypes in JavaScript. After having done that, you get the myOrder object, which you can manipulate just like you would a regular JavaScript object.
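Here is a sketch of the order example just described, based on ObjectModel's documented API; the exact property names and values are illustrative:

```js
import { ObjectModel } from "objectmodel";

// Declare the model: a runtime type definition bound to a constructor.
const Order = new ObjectModel({
  product: {
    name: String,
    quantity: Number,
  },
  orderDate: Date,
});

// Bind the model to some data using the constructor pattern.
const myOrder = new Order({
  product: { name: "Apple Pie", quantity: 1 },
  orderDate: new Date(),
});

// myOrder can now be used like a regular JavaScript object,
// but every mutation is checked against the model.
```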
But now, when you assign, for example, a Boolean to quantity instead of a number, you get a dynamic type error at runtime with an explicit message saying: expecting product.quantity to be Number, got Boolean false instead. Because this happens on every mutation you make on the object, it is really easy to quickly find the mistakes you make as a developer, and so the debugging experience improves.

So that's great, but how does it work? Let's start from a pretty basic example. Here I have a class User with a constructor taking a name and an age. If you want to validate the parameters passed to a function without relying on static type annotations, that means you have to validate this data at runtime in JavaScript. What you could do is write if conditions checking the typeof of these different variables and throw type errors accordingly. Pretty easy. The problem is that it only works in the constructor. You could declare some setters, like setName and setAge, and run this validation on every single attribute, but that is tedious, and we can do better.

We can improve this by using a feature of JavaScript called the Proxy. I don't know if everyone knows about proxies; they were introduced in 2015 as part of ECMAScript 6, and they are a really great, really powerful feature. The way a proxy works is that it lets you intercept operations done on an object and react to them. In this example, I just use the set trap of the proxy, which means that every time a property is reassigned, I can execute some custom logic. So I can move my "if typeof name is not string" checks into the set trap and detect the different issues there. Now it works both for the constructor and for future reassignments, future mutations.

What we can do as a next step is make a generic function out of this, like so. I just moved the definition part into the arguments of this generic typecheck function. The typecheck function takes two arguments: the first is the data, the second is the definition, or the type if you prefer. So it makes the strong binding between objects and types explicit, and as you can see, it is really easy to write a generic function for this kind of thing. The typecheck function you see here is a very basic version of what ObjectModel is. Of course, the library is much more complicated than that and covers many other use cases, but you get the idea with this example.
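Since the slides are not reproduced in the transcript, here is a minimal sketch of such a generic typecheck function; it is greatly simplified compared to the real library, and the function name and error format are only illustrative:

```js
// Minimal sketch of a proxy-based runtime type checker (not the real ObjectModel).
function typecheck(data, definition) {
  // Validate the initial values once...
  for (const [key, type] of Object.entries(definition)) {
    if (typeof data[key] !== type) {
      throw new TypeError(`expecting ${key} to be ${type}, got ${typeof data[key]} ${data[key]}`);
    }
  }
  // ...then keep validating on every future reassignment via the "set" trap.
  return new Proxy(data, {
    set(target, key, value) {
      const type = definition[key];
      if (type !== undefined && typeof value !== type) {
        throw new TypeError(`expecting ${key} to be ${type}, got ${typeof value} ${value}`);
      }
      target[key] = value;
      return true;
    },
  });
}

// Usage: the binding between the data and its definition lives in the proxy.
const user = typecheck({ name: "Ada", age: 36 }, { name: "string", age: "number" });
user.age = 37;     // fine
user.age = "old";  // TypeError: expecting age to be number, got string old
```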
So as you can see, it is really easy to reuse this typecheck function and apply it to many different models.

Why did I call these "models" and not "types"? I wanted another word to make clear that there are a few differences from the types you know from TypeScript, because everything happens at runtime; this is runtime data validation. That means models are more powerful than static types. For example, they can check not only the types but also the values. Let's say I have a shirt model whose size is either a number or a letter like M, S, L, XL, and so on. I can write a type annotation that checks that it is either a number, or one of these letters, or a string matching a regular expression. I can also have more complex assertions. For example, if I want integers in JavaScript (yes, integers in JavaScript), it will be a Number in the end, because that's how JavaScript handles numbers, as 64-bit doubles. But I can add an assertion on this Number model saying I need to check that it is an integer, and if I want it to be a positive integer, I can add another assertion to make sure it is above zero. This is the kind of assertion you can have, and again, every time you manipulate this property, for example the age of the user, all these assertions are checked automatically.

The last difference between models and types is that model validation can affect application logic. Because it happens at runtime, you need to react to it and have strategies for handling these runtime errors. For example, if you get a type error on your shirt model, maybe you just want to cancel the order, so you make sure that everything keeps happening correctly in your application. Those are the main differences.

Let's look at the pros and cons of this library. First, the pros. You get descriptive error messages pointing to the initial problem, not the consequences; just that saves you a lot of time. It means you now have continuous data validation as part of your development process, which gives you faster debugging and more reliable applications in the end. Regarding how you manage these runtime errors, because you need to do something, right? Not only show an error message, but apply some planned strategies. You can define global strategies, or per-feature, per-model strategies, for how to manage these errors. Some errors are easily manageable, for example cleaning up an outdated cache, and some of them are more complex, and then you may need to log them to a monitoring service.
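For reference, here is a sketch of the shirt-size and positive-integer models described above, based on ObjectModel's documented union types and assertions; the exact shapes and messages are illustrative:

```js
import { ObjectModel, BasicModel } from "objectmodel";

// Literal values and regular expressions can be used as types, so the model
// checks values as well as types: size must be a number, one of these letters,
// or a string matching the pattern.
const Shirt = new ObjectModel({
  size: [Number, "S", "M", "L", "XL", /^[0-9]+XL$/],
});

// Assertions refine a basic type: a Number that must be a positive integer.
const PositiveInteger = BasicModel(Number)
  .assert(Number.isInteger)
  .assert((n) => n >= 0, "should be positive");

const User = new ObjectModel({ age: PositiveInteger });

const user = new User({ age: 25 });
user.age = -3; // throws a TypeError: the "should be positive" assertion fails
```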
Now, some of the cons of this library. One is performance, of course: since it happens at runtime, it has a performance cost. Don't worry, it's not too much, but if you are doing heavy mutations, like more than a hundred times per second, you should probably avoid dynamic type checking for that specific scenario. Most of the time you don't have to do this, so it's fine. The second issue is that it relies on proxies, so you need support for them. Today, modern browsers support ES6 proxies very well, but if some of your users have older browsers, this can be an issue.

So, which is better, static type checking or dynamic type checking? The correct answer is that you should use both, because they address different issues. TypeScript, as we saw, is awesome. It greatly improves the developer experience and gives you a code base that is reliable and makes sense, that is logical. But you should also take care of all the unpredictable parts that happen at the runtime of your application. So my personal recommendation would be to stick to TypeScript for the core application logic, but also add this object model layer on every external interface you have to deal with: the server, user inputs, local storage, or the browser APIs. This can lead to a more reliable application. That's all I have for you today. Thank you for listening, and I'm taking questions.

Moderator: Thank you very much. So, we have time for questions. Who would like to ask the first question?

Audience: My question is, have you ever tried using this library together with other libraries, for immutable data or for other validation? Have you tried using it with other libraries for dynamic checking like Yup, or for immutable data like Immer or others?

Speaker: Yeah, so immutable data should work fine. For other validation libraries, I mean, it's kind of the same job, so maybe it doesn't work that well and doesn't really make sense. But I think it should work perfectly with immutable data structures, so Immer should work fine. Yeah.

Audience: Hi. Do you think it would be possible to generate the object models, the definitions or whatever they are called, from TypeScript?

Speaker: Okay, that's a good question. Actually, you can do the opposite: if you are using models, they will generate TypeScript types for you. But because this is more than type checking and, as you saw, it can affect application logic, we cannot do that simple conversion. If you only use dynamic type checking the way you would use TypeScript types, you are using maybe 10% of the potential of the library, and that wouldn't make much sense to me. You should look at the website to see more examples of that, but yeah, it's a bit more than that.
Moderator: Yeah, I see hands. You were there. The next speaker also, could you raise your hand or stand up? Okay, so we'll have to contact them.

Audience: Fantastic idea, love the library. You mentioned rewriting it four times over eight years. How stable would you consider the project to be? How often do breaking changes to the API get introduced, that kind of thing?

Speaker: Yeah, it's true that I've rewritten it four times, but the API never changed, that's one thing. And I also use it for professional projects, so I would be embarrassed if I had to throw it all away, right? So it's been quite stable for many years now.

Audience: Hello, thank you for the presentation. I would like to know whether you would recommend using ObjectModel on projects that don't have TypeScript yet, only JavaScript. Thank you.

Speaker: Yeah, I mean, that could be a thing. Although if you are into strong type checking, you are probably already using TypeScript. If that's not the case, maybe it's fine, I don't know. But yeah, it's totally possible; it's just that most people are using TypeScript.

Moderator: Thank you very much. We still have time for another question. You'll have to be loud because people are moving. And if you sit down, please make sure there is no empty space, because the room is pretty full.

Audience: Hi, thank you for your project. So one other approach is using, for example, JSON schemas that then translate to types... should I speak up?

Speaker: Sorry, I can't hear you.

Audience: One other approach is using JSON schemas, for example, on the validation side, let's say in a controller. The JSON schema then compiles to, or deduces, the TypeScript type that the schema defines. So that's one way to do validation and not have a runtime penalty beyond the validation itself. Have you considered this approach for your use case?

Speaker: Yeah, good question. You can indeed use this kind of type declaration. One problem is that, again, if you stick to what TypeScript can do with static type checking, you are only using a fraction of what can be done. For example, I told you about custom assertions, and about the fact that you can check values along with the types. All of these things would not fit the model you are describing with JSON. That means we need another API for that, and that's why I have my own API for ObjectModel.

Moderator: One last question, and then please, everybody, squeeze in, no empty seats.
There are a lot of people still standing.

Audience: So JavaScript is executed by V8, and there is a lot of optimization underneath, like inlining and function optimization. When you use proxies, all of that is gone immediately. The performance hit is not only when you set something and go through the proxy, it's not a hook, what is it called again? I forgot the name. Anyhow, not only when it is triggered, but also for anything that relies on that data from that point on. So it's going to be a huge performance hit. I would definitely not recommend this in production.

Speaker: Yeah, I talked about the performance issues. One thing is that it's only useful on external interfaces like network requests; you just validate everything related to one request. So the time lost to the proxy, compared to the time lost to the network request, is acceptable in my opinion.

Moderator: You can debate afterwards, don't worry.

Speaker: I mean, I run a bunch of assertions and it takes less than 10 milliseconds, so I don't think the overhead is that much trouble. Thank you very much.

Moderator: Thanks again.