Thank you all for being here, and sorry for the wait. Our next speaker is Nicolas, a staff engineer with a lot of experience, and he is here to talk about Nx and an actual use case he encountered during his time at Hasura. Thank you, Nicolas — a round of applause.

So, does your build time keep getting longer? Well, maybe we can extract some code into separate packages. But then, once the packages are extracted, the dev time to work on them and integrate them into your app starts to explode. And then it's hard to keep the two versions in sync. Yeah, at Hasura it was the same. The build time was about 15 minutes for the frontend. The dev reload time was about 5 minutes, so you make a change, you wait 5 minutes, and then it's actually done. And tooling wasn't proper everywhere. So we had to make a change, and this is the story of that change.

First, who am I? I'm Nicolas, I'm a staff engineer at Pitch. You can find my Twitter and my blog. This talk is also available in written form on my blog if you want to dig further.

So let's get back to the topic. What was the setup? We had two code bases: the open source one and the enterprise version. What we did was extract some of the code from the open source code base into a bundle, through extra layers of webpack, and then we installed this into the enterprise application. It seems pretty standard, right? But the tooling wasn't the same everywhere. On one side we had TypeScript, Jest tests, Storybook, Chromatic, Cypress — a very good developer experience all around. On the other side — and let's remember, enterprise clients pay for the other side — we had JavaScript, no TypeScript, Jest tests, and that's it. No Storybook, no end-to-end tests, nothing else. It was so complex to work in this second part of the application that this was the end result.

But that's not it. It gets worse. We had a thousand lines of custom webpack config just to bundle part of the application into the other one. Lock file management was hell: when you changed one thing in one place, you had to make sure the lock file — not the package version, the lock file — was the same in the other place. Otherwise, things would crash in production, and without end-to-end tests, you only found out in production or when you tested your dev environment. And CI was very slow because of this whole system.

So we wanted a monorepo tool. Let's have everything inside a single monorepo, having the code bases work better in union instead of in isolation. We made a wish list of what we wanted from the monorepo tool. We wanted task orchestration: being able to say, build this app before this one. We wanted dependency graph visualization, because right now we had two packages, but in the future we would have more, and we wanted to see what the hell is going on without having to guess and dig through code. We wanted consistent tooling: the same config of Jest and the same version of Jest everywhere. Because yes, it wasn't the same version of Jest before — fun to work with. We wanted project constraints: for example, the open source edition shouldn't be able to import the pro edition, because you don't want to give away for free the things companies pay for. We wanted distributed task execution, so that we could scale CI by adding more runners, say "run those jobs in parallel", and let the tool deal with how. And the bonus point: we wanted code generation, so that scaffolding was baked into the tool and, in the end, everything was done for us.
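To make the task orchestration point concrete, here is a minimal sketch of how a tool like Nx expresses "build my dependencies before me". This is illustrative, not Hasura's actual config:

```jsonc
// nx.json (illustrative sketch)
{
  "targetDefaults": {
    "build": {
      // "^build" means: before building a project, first build
      // everything it depends on, in dependency-graph order.
      "dependsOn": ["^build"]
    }
  }
}
```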
So with this wish list, we went into the ecosystem, looked at every tool that existed, and checked each one of them. First, a small disclaimer: this work happened about a year ago, and new tools have appeared since. Moonrepo, for example, didn't exist back then, so if you want, you can also look into moonrepo. And I also want to shout out all the engineers working on those monorepo tools. They are amazing. If you have an issue, they are always willing to help. So kudos to them.

So what did we look into? First, Bazel. Bazel is made by Google to handle Google-scale monorepos. It's huge, complex, and you can do a lot of things with it, but it's also very complex to use. We looked at Gradle, because yes, Gradle can do other things than just Java. It's tailored to Java, but you can do JavaScript, you can do Go, you can do whatever you want in it. We looked at Lerna, which is the historical, classical tool for managing a monorepo in the old days of JavaScript. We looked at Nx, because I had used it in the past, in the Angular days when Nx was only an Angular plugin — and yes, it is a real monorepo tool now. We looked at Pants, which is mainly used by IBM but also in other places; it turns out it's pretty good if you want to experiment and give it a try. And we looked at Turborepo, because of all the hype around it, so it was on the list. Those were the tools we looked into.

So let's see. We wanted task orchestration: well, they could all do it, so that's good. We wanted dependency graph visualization: Gradle and Pants didn't support it, so those two were out. Then we wanted consistent tooling: Turborepo didn't support it, and Lerna neither. So we ended up with either Bazel or Nx. Project constraints: they both support it, amazing. Distributed task execution: they both support it, cool. And code generation: well, Bazel didn't support it out of the box. While we could have added code generation utilities to Bazel with extra code, Bazel was also far more complex to set up than Nx. So in the end, Nx was the tool that met the needs we had at Hasura. If you want to learn more about those tools, monorepo.tools is a great resource. It's open source and contributed to by many of the maintainers of these monorepo tools; you get a grid of all the main features that make up a monorepo tool, and each project is listed with what it can or cannot do.

So we went with Nx. But it turns out there are two flavors of Nx: integrated, or package-based. First, let's go into package-based. Package-based behaves like a pnpm or npm workspace: you have many packages, they all link together, and it works pretty well. But it doesn't give you consistent tooling; you can do whatever you want in each project. The migration path is way easier, because you basically just drop an nx.json at the root and it's done. But there are still build steps between the libraries. And let's remember why we were doing this: we wanted the builds between libraries to be way faster, so that we didn't reinvent the wheel every time.

So what is integrated, then? Integrated means that every tool in the workspace is unified, and the monorepo is considered as one unit. Every tool is consistent, because every tool has the same version and the same configuration everywhere. You can tweak it in a specific project, but the base is the same.
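As an illustration of that consistency: in an integrated Nx workspace, each project's Jest config typically just extends one shared preset at the root. A minimal sketch, with a hypothetical project name:

```ts
// libs/some-lib/jest.config.ts (sketch)
export default {
  displayName: 'some-lib',
  // One shared base config for the whole repo; each project
  // only overrides what it really needs.
  preset: '../../jest.preset.js',
};
```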
The migration to integrated is more involved, though, because you need to decide how you want to migrate: do you want to align with Nx conventions, or do you want to bend Nx to your will? You can do both. But thanks to this, build steps between libraries become optional, which meant we could solve all our speed issues.

But there is one more thing: plugins. What the hell is a plugin? A plugin can do three things. It can provide generators, which let you scaffold things: Nx, new library — done. Nx, new application — done. Nx, new Storybook setup — done. It can provide executors, which wrap a tool to make it simpler for you to consume. And the best part: automatic migrations. For example, a new version of Jest comes out and you need to update your Jest config for the new timer configuration. Nx will migrate your code for you automatically, and it works 95% of the time; you won't have to do anything. This was really helpful for us, because the code base was huge — like a million lines of code — and it was hard to maintain.

So that's all good, but we're engineers, right? Trade-offs — not everything is green. Yeah, there are two big ones. The first one is the single version policy. It states that there may be only one version of a dependency or package inside the monorepo. While it adds extra constraints, it's also recommended within any monorepo, because if you have a library built using React 16 and another one with React 18, you cannot import the 16 one into the 18 one. The way I see the single version policy is a bit like buying versus taking a loan with interest. When you want to migrate React, if you buy, you just bite the bullet: you maybe spend a bit more time, but you do everything at once and it's done. Whereas if you loan the migration, you have to migrate many packages one by one over time, and every time you have to regain context: how do I migrate this again? How do I test this again? Each full migration of the system takes way longer in the end, but it's a smaller investment up front. You pick.

Being behind on tools is the other constraint. You have to wait for the tools, meaning that, for example, when a new Jest version comes out, you have to wait for Nx to update their setup so that it can automatically migrate your config. In enterprise software, waiting a day or a week for a new Jest version is not that big of a deal, to be honest. And it's way better now, because they work hand in hand with the actual engineers building those tools — some of them actually work at Nx now, so that helps a lot. And if you need it, there are plenty of escape hatches, so you can do whatever you want in the cases where you need to.

So we knew what we wanted. We wanted a monorepo. We wanted Nx. We wanted integrated. How do we proceed? Because we were not going to say: let's freeze production for six months until we've migrated everything. That's never going to work. So the goal was to migrate incrementally without stopping the day-to-day work. And we had some requirements for this migration. First of all, we wanted no code freeze during the migration. We had many engineers working on the code base, and we never wanted to say "stop working for half a day every week so that we can migrate stuff" — that's not feasible. We wanted as few regressions as possible: nobody likes bugs, and neither do our customers. And we wanted to adhere to Nx conventions, so that automatic migrations would be as easy as possible, which meant less maintenance in the end.
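For context, the automatic-migration workflow mentioned above looks roughly like this (these are standard Nx CLI commands; the exact flow can vary by version):

```sh
# Update Nx and plugin versions in package.json, and record
# the code migrations to run in a migrations.json file.
nx migrate latest

# Install the updated dependencies (npm, pnpm, or yarn).
npm install

# Apply the recorded code migrations, e.g. rewriting Jest configs.
nx migrate --run-migrations
```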
And furthermore, if we have standard tools, we have reusable skills: you can switch teams and everything is the same, which is nice. For companies that do loads of re-orgs, that's a big selling point. And one nice thing to keep: we had seven years of Git history. Git history is sometimes the only way you can debug something, through git blame and such. So we wanted to keep it.

So here was the situation. We had our current code base. We then created a brand new Nx workspace — just created a new workspace — imported the code into it, and built it. Is it working? Yeah, everything is done! Except not. Things broke, obviously, because our code had many issues. So the next step was to identify why it broke the build, fix that in the current application, and then start over again. The good thing about this migration path is that at every step of the way, we provided value to the developers working on the old system while preparing the new one. And at some point we identified tweaks we needed to make to Nx itself, so every time we created a new workspace, we applied those known tweaks beforehand. We ran this cycle many times to make sure it worked at every step of the way; we even had a cron job doing it on a weekly basis to make sure everything stayed good.

I mentioned we had to make tweaks to Nx. One thing we had to tweak was the TypeScript paths, because we had "@/" imports, and in the monorepo "@/" means nothing, because there is no root — there are only packages. But we tweaked it so that the migration wasn't blocked and didn't require a lot of work on the previous code base. We had to include Node.js polyfills, because even though no Node.js code should end up in the browser, we all have Node.js code in the browser, like http and such. We had to make some specific changes to the webpack config, for SVGs and such. And we had to disable some ESLint rules because, well, our code wasn't up to standard, obviously.

So that's what we needed to do on the Nx side. What about our code? First of all, we had CSS modules without the .module.css extension. They behaved like CSS modules, but we didn't have the extension, so we had to fix that. We relied on a way of importing CSS in TypeScript that shouldn't have worked, but somehow did — thanks, webpack 3, I guess — and we had to change it so it worked with webpack 5. Path imports relied heavily on the webpack config, so we had to change that too. We had to update Jest and TypeScript to versions compliant with Nx. We had to update the entry points so that they only export a component and don't mount the application. And this was the kicker: it turns out that, somehow, the build compiled with a lot of circular dependencies. Like, a lot. Like 150 loops of circular dependencies within the code base — and that was just one of the libraries, not even the bootstrap of the app. So we had to dig through and fix our code, basically. We worked our way down through 95 of them, and then webpack was able to compile the application and the browser was able to load it. So that was good.

Here is what it looked like in the end: our pro application loads the pro library, which imports the OSS library; the OSS application loads the OSS library; and the end-to-end tests import both the libraries and the applications. This diagram, by the way, was generated by the Nx graph of the workspace — we didn't have to do anything. So all good, right? Everything was nearly ready. We just needed to switch.
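As an aside, the monorepo replacement for those webpack-based path imports is usually plain TypeScript path mappings in the root tsconfig. A sketch, with hypothetical package names:

```jsonc
// tsconfig.base.json (sketch)
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      // Each library gets an alias pointing at its source entry point,
      // so imports resolve without any per-app webpack configuration.
      "@acme/oss-lib": ["libs/oss-lib/src/index.ts"],
      "@acme/pro-lib": ["libs/pro-lib/src/index.ts"]
    }
  }
}
```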
And switching means keeping the Git history. So to keep it, we first made a commit to clean up the old workspace. Then we made a second commit that just did git mv to move everything to its new place. Then we made an archive branch for OSS because, given we are an open source product, we wanted to make sure contributions wouldn't end up broken because of this. On top of both commits, we applied the known tweaks, and then we were in Nx land. Done this way, the second commit could be flagged in git blame's ignore-revs list, so git blame doesn't pick up that commit. So we still kept our Git history for whatever we needed.

In the end, the total freeze time for this migration was three hours. From the beginning to the actual end of the migration: three hours total. It wasn't a freeze lasting a few months. And the three hours were because CI was slow to run on the four commits I mentioned before.

So, all good. What about the results? We want numbers, for users and for developers. First, users: zero bugs in production. That was great. Because of the incremental approach we took, we were able to verify at every step of the way that we hadn't broken something; otherwise we would have identified it along the way. The other surprise was that, because everything was unified, the bundle size decreased quite a lot, from 43 megs to 13 megs. And the funny thing is when you get a call from a service representative: "Thank you, Niko, I can finally use the app locally without it being too slow to load." Thanks, I guess? It's a bit weird, but still. So this helped the load time: the application loads about five seconds faster thanks to this.

Okay, that's good for users. What about devs? Well: 30x faster local dev. Because we no longer had a build step at every step of the way, we went from five minutes to ten seconds. This was life changing. Try to imagine: when you debug something, you make a change and wait five minutes to see the console.log you added show up. Now it's ten seconds — an instant compared to what we were used to. And CI was about 60% faster in the worst-case scenario; in the best case, it's about 80% faster, thanks to caching and things like that.

All right, good. Is it the end? Are we done? We are now in Nx land, we have the packages — are we good? It could be. This could be the step where you say it's good enough and you don't want to go further. But you can. One of those areas is architectural decoupling, where you say: I want to make sure that my open source code doesn't import my enterprise code. And you can enforce that thanks to lint rules in Nx. There is a lint rule — module boundaries — that basically says: pro code can import shared, OSS, and pro code, and that's about it; shared code can only import shared. In a visual way, it looks like this: you ensure that libraries in a scope can only import within their scope, or from the scopes they are allowed to reach. This helped us heavily to ensure that open source code stayed open source, enterprise code stayed enterprise, and open source couldn't accidentally import enterprise code through the tooling and leak it into production.

The other thing where we went further was unifying our tooling. During the migration itself we just added Nx as-is; afterwards, we generated new end-to-end tests for our pro edition, and it cost us about 20 minutes to do. We now have Vitest in some of the new projects. And we also made our own custom plugin, because you can make your own — it's relatively easy. A sketch of both the boundary rule and such a plugin follows.
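To give a feel for the boundary rule described above, here is a minimal sketch of its configuration. The tag names are illustrative, not Hasura's, and depending on the Nx version the rule is named @nx/enforce-module-boundaries or @nrwl/nx/enforce-module-boundaries:

```jsonc
// .eslintrc.json (sketch)
{
  "overrides": [
    {
      "files": ["*.ts", "*.tsx"],
      "rules": {
        "@nx/enforce-module-boundaries": [
          "error",
          {
            "depConstraints": [
              // OSS may never reach into pro code.
              { "sourceTag": "scope:oss", "onlyDependOnLibsWithTags": ["scope:oss", "scope:shared"] },
              // Pro can use everything.
              { "sourceTag": "scope:pro", "onlyDependOnLibsWithTags": ["scope:pro", "scope:oss", "scope:shared"] },
              // Shared stays self-contained.
              { "sourceTag": "scope:shared", "onlyDependOnLibsWithTags": ["scope:shared"] }
            ]
          }
        ]
      }
    }
  ]
}
```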
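And here is a rough sketch of what a custom generator in such a plugin can look like. This is our illustration, not Hasura's actual plugin; the exact generator options vary by Nx version:

```ts
// tools/plugin/src/generators/library/generator.ts (sketch)
import { Tree, formatFiles } from '@nx/devkit';
import { libraryGenerator } from '@nx/js';

interface Schema {
  name: string;
  scope: 'oss' | 'pro' | 'shared';
  type: 'ui' | 'feature' | 'utils';
}

export default async function (tree: Tree, schema: Schema) {
  // Delegate to the official JS library generator, but enforce our
  // folder layout and tags so the module-boundary lint rule applies
  // automatically to every new library.
  await libraryGenerator(tree, {
    name: schema.name,
    directory: `libs/${schema.scope}/${schema.type}/${schema.name}`,
    tags: `scope:${schema.scope},type:${schema.type}`,
  });
  await formatFiles(tree);
}
```

With something like this, a single command such as `nx g @acme/plugin:library my-lib --scope=oss --type=ui` (plugin name hypothetical) puts the library in the right folder with the right tags, no thinking required.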
With a generator like that, you can say: I want a library with this scope and this type — put it in the right folder for me, I don't care, do it for me. The naming is automatic, everything is automatic. In those cases, you can also generate things like the code owners automatically, update the CI if needed, and so on. Because in the end, thanks to the plugin, you take the specifics of your tooling out of the developers' and engineers' heads and into automation. We all know the documentation that is never updated. Tooling, on the other hand, is always updated, because we use it regularly — so if something is off, we notice and can look into it.

So, in the end, what I wanted to say is: coding on a large code base shouldn't feel like this — not sure you're going to break something, not sure what your change affects, no idea what is going on. Instead, it should feel like this: a happy dance. We just pass the ball around and have things moving in the right direction. Thank you for your attention. Are there any questions?

So, in this case, we didn't use npm to share anything on the outside. However, Nx does support releasing packages, and thanks to the Nx plugins it can understand your workspace and create a package for your library to be published publicly on npm. Next week there is a launch event for Nx, and they are going to announce something that may be related to your question. Are there any other questions? Yes. Can you hear me well? Yes. My question is: what was the main reason for such a decrease in the bundle size? Is it because you removed all of those cycles in the code?

So, the question was why we ended up with such a large reduction in bundle size. What happened before — you saw it at the beginning of the talk — is that we had bundled part of the application into a package. Sorry, there are a lot of slides; anyhow, I think you remember it close enough. So what we did before is we exported a large part of the application into a package, and then we imported this package into the pro code base. The first change is that webpack now has a unified view of the whole system and can do way better tree shaking. Because of that middle package, webpack previously didn't understand what was actually imported into the end application and wasn't able to tree-shake as aggressively. So that was one huge step that helped us. The second step was the updated webpack configuration and tooling, which meant we didn't need to target IE anymore; that alone cut about 5 megabytes from the bundle. Both things combined, plus better CSS processing — again with a unified view of the whole system — gave us this decrease in bundle size. Yeah.

So, if I were doing a similar migration today, I would use Nx too. There is a new tool I would investigate, called moonrepo, which is similar to Nx in some respects. However, as of today, for an enterprise-ready product, I would still use Nx, because one of the things they are moving towards is a much smarter CI: if your CI can understand your workspace, it can also understand better what to run and what not to run. So as of today, Nx would still be my choice; in the future, I would also investigate moonrepo to see if it could make sense. Unless you have huge scale, like 10,000 engineers — then Bazel would make sense, because you can afford a team of like 20 engineers working just on Bazel. So yeah, that's my answer. Yeah.
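To illustrate the tree-shaking point from the bundle-size answer above, a minimal sketch (package names hypothetical):

```ts
// Before: the shared code was pre-built into one opaque bundle,
// so webpack had to keep (almost) all of it.
import { Button } from 'oss-bundle';

// After: a source-level import inside the monorepo; anything the app
// doesn't actually use from '@acme/oss-lib' can be tree-shaken away.
import { Button } from '@acme/oss-lib';
```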
So, just to make sure: when you started with Nx, you imported package by package, but you threw away the results in the end? Yeah. And you redid it in just a few hours? Yeah. This way, we made sure the old system kept being updated with the changes we needed to make. So if, for whatever reason, we had to stop, we would still have provided value to the existing code base.

On the question from before: what do you think of Turborepo? So, Turborepo has some features that are at parity with what is integrated into Nx. However, it lacks some of the larger pieces that are required for an enterprise project. You don't have distributed task execution, for example. You don't have unified tooling. You don't have generators. For me, that makes Turborepo a middle ground between Lerna and Nx: you get a bit more, because you have things like task caching in the cloud thanks to Vercel, but you don't have the full power of something like Nx. So yeah.

And if you compare Turborepo with the first flavor of Nx, the package-based one, how would you compare them? I'm going to give two answers to that: one related to next week's announcement, and one for today. For today, Nx requires a bit more configuration and tooling when you set it up. But stay tuned, because it's going to get even easier to adopt Nx in an existing workspace: they are working on making Nx smart enough to understand what your project is, so you have less friction when adopting it. Yeah.

Did you have any non-Node.js applications or services that you needed to integrate in this migration, or is Nx only for Node.js-related code? Great question. So by default, Nx is agnostic. The ecosystem of plugins officially supported by Nx is very frontend- and JavaScript-centric, but you can do whatever you want: there are community plugins for Go, for .NET, for Java inside of Nx, where, for example, for a Java project it will read the pom.xml and understand whatever it can automatically. And one great thing about a polyglot repo like this is that you can say: when my backend changes, rerun the end-to-end tests for the frontend, because they are related. You can declare that your frontend — say, your SDK imports — is related to the backend because it's linked to the OpenAPI spec, and thanks to this, the right things get triggered on the frontend side. This is where Nx, or a monorepo in general, shines: it's one context, even if it's polyglot.

Unfortunately, we don't have more time for questions, so let's close with a big round of applause for Nicolas.