Hecate is a Haskeller from the trenches with an interest in resilient systems and documentation. When not at work or in the Haskell community, they are a trombonist in various orchestras and brass bands. Hecate uses they and them pronouns and farcical amounts of caffeine to retain human form. And they're going to present to us a presentation entitled "Web Application Architecture in Haskell, with flora.pm". Thank you, Hecate.

Thank you very much. Can everyone hear me? Perfect. So welcome to this talk entitled "Web Application Architecture in Haskell: when the domain drives the types and the types drive the program". This talk is intended for a mixed audience of software engineers who are acquainted with the practice of domain-driven design, and Haskell programmers who are interested in crafting better systems. The goal is to create a bridge between the practitioners of domain-driven design and the users of Haskell.

So, my name is Théophile Choutri, aka Hecate to the community. I am a backend engineer at Scrive. We are a Swedish company, and we have an e-signature platform for contracts and various documents, as well as a digital identity hub where we aggregate various national identity providers, like Itsme in Belgium for example, or BankID, or FranceConnect for the French people here. I am also privileged to be a board member of the Haskell Foundation, and this is one of my numerous involvements in the community.

So, Haskell, the pure functional programming language. These are words, and words mean nothing until we can practically apply them, and these words bring us concrete features: native support for recursion, for example, without blowing your stack; a type system that doesn't hate you; higher-order functions, which almost every language has today; and many other features. There are two features especially that I want to talk about: the ability of the language to adequately represent business domains, and the ability to track side effects in a semantic way.

For the adequate representation of business domains, we can use algebraic data types, which allow us to model the real world and its nuances with more precision. Encoding rules by construction at the type level is something that we can easily do. In this example, the members of the Access data type, Visitor and Admin, are promoted to the type level for the User type, which means that there can only be two values in the privileges parameter of this User type. It also means that I am going to get rejected if I pass a Visitor user to this function called viewBackOffice, but I'm going to get rejected at compilation.
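Here is a minimal sketch of the example just described, assuming hypothetical field and function names (the talk's actual slide code may differ):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE OverloadedStrings #-}

module BackOffice where

import Data.Text (Text)

-- With DataKinds, Visitor and Admin are promoted to the type level.
data Access = Visitor | Admin

-- A user tagged with its privilege level: the parameter can only
-- ever be 'Visitor or 'Admin, nothing else.
newtype User (privileges :: Access) = User { userName :: Text }

newtype Html = Html Text

-- Only admins may view the back office. Passing a @User 'Visitor@
-- is rejected at compilation, before the program ever runs.
viewBackOffice :: User 'Admin -> IO Html
viewBackOffice user = pure (Html ("Welcome, " <> userName user))
```

Calling viewBackOffice with a value of type User 'Visitor produces a compile-time type error rather than a runtime check.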
This is something that is trivial to implement at the value level: I could have a check on a property of the object, say an isAdmin boolean, but then I have to write the check manually, and possibly in every function that needs such a check. If I encode this immutable property at the level of the types, the compiler is then tasked with checking that I'm doing my job correctly.

Checking side effects semantically. If we look at the previous example, we can see that viewBackOffice has a result type of IO Html, which more or less means "I am doing observable side effects and I return you a value of Html". But that doesn't adequately represent the reality of the function. Here, instead, we have a way of tracking the side effects that are being executed in a more human-readable form, and we can declare them at the type level. So we have a function whose signature is: we take an Int, which is an identifier, and we return Text, and this Eff return type also carries a list of effects. What are these effects? Basically, I declare that I'm going to perform database access, and possibly mutation; I am going to access a Redis server for caching; and I'm also going to ship my logs to an external platform, or at least perform the logging effect, whatever that means. And then we can have a more useful breakdown of which functions have which effects: getEntityName is composed of, first, getting the entity from the cache if we have it, and if we don't, switching to the database to perform a perhaps more costly access, and finally logging. So Logging, DB, Redis, all of them are then unified in this list of effects at the type level.

When your type system is both strong and expressive, you get a lot closer to fearless refactoring. And what does that mean? Because many languages today claim to provide such a thing. Fearless refactoring: you more or less get vibe-checked by the compiler, which is pretty cool, because the compiler keeps you in check regarding the changes in your program and how they affect the program's behavior. You also have to come to terms with the fact that your worst enemy is now yourself: you can't blame errors in production on "undefined is not a function".
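A sketch of that effect-tracking style, in the spirit of the effectful library that flora.pm uses; the effect and function names here are hypothetical stand-ins:

```haskell
{-# LANGUAGE DataKinds, FlexibleContexts, GADTs, KindSignatures #-}
{-# LANGUAGE OverloadedStrings, TypeFamilies, TypeOperators #-}

module EntityName where

import Data.Text (Text)
import Effectful
import Effectful.Dispatch.Dynamic (send)

-- Three bespoke effects: database access, Redis cache access, logging.
data DB :: Effect where
  FetchNameFromDB :: Int -> DB m Text
type instance DispatchOf DB = Dynamic

data Redis :: Effect where
  LookupCache :: Int -> Redis m (Maybe Text)
type instance DispatchOf Redis = Dynamic

data Logging :: Effect where
  LogMsg :: Text -> Logging m ()
type instance DispatchOf Logging = Dynamic

-- The signature declares every side effect the function may perform:
-- cache access, database access and logging, unified in one list.
getEntityName :: (DB :> es, Redis :> es, Logging :> es) => Int -> Eff es Text
getEntityName entityId = do
  cached <- send (LookupCache entityId)            -- first, try the cache
  name <- case cached of
    Just name -> pure name
    Nothing   -> send (FetchNameFromDB entityId)   -- costlier database access
  name <$ send (LogMsg ("fetched entity " <> name))
```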
So there are limits, of course, to correct-by-construction, and I think it's especially important to be intellectually honest about that. Haskell is not a prover; you can't prove lemmas and theorems with it. So you have to write tests, and tests come in many shapes and forms: you write tests for your integrations, your properties, and end-to-end tests. Tests are not optional, unlike Maybe.

Functional application architecture. What I showed you gives us the tools to focus now on the topic of functional application architecture, and this time we arrive in the land of domain-driven design. There will be a brief overview of the terminology and techniques that were created in the discipline, and how they apply to us. We have, without surprise, the concept of a bounded context. It's a context, there are workflows in there. So what is it, really? It's an autonomous subsystem: it's responsible for one or multiple workflows, and it has well-defined boundaries, which is extremely important, so we have to formalize, or at least be very explicit about, how we talk to it, its inputs and its outputs. If we take the example of flora.pm, which is a community website, an alternative package index for the Haskell community, the schema is fairly simple: we have the web component that goes to the core component, which is tasked with interfacing with the database, and we have a jobs worker for the jobs queue that also talks to the core component and to the database, but we know that it's not going to talk with the web component. This is the kind of setting of boundaries, because they are in a healthy relationship, and we know they will not talk to each other. One more step towards fearless refactoring.

Now, something that the Java and C# worlds gave us are the concepts of data transfer objects (DTOs), data access objects (DAOs) and business objects. Sometimes they're the same thing: sometimes you are lucky, and the JSON payload you receive is the same object upon which you will perform your business computations and which you will store in your database. But sometimes they are not, and really it is pure luck when these types align.
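A minimal sketch of the three representations and the explicit conversions between them; all names here are hypothetical, for illustration only:

```haskell
module Representations where

import Data.Text (Text)

-- Business object: the rich type we compute with inside the bounded context.
data Package = Package
  { packageName :: Text
  , packageDeps :: [Package]  -- a fancy recursive structure is fine here
  }

-- Data access object: flat and database-friendly, no recursion.
data PackageRow = PackageRow
  { rowName :: Text
  , rowDeps :: [Text]         -- dependencies stored by name
  }

-- Data transfer object: only what the wire format (JSON, XML…) can carry.
data PackageDTO = PackageDTO
  { dtoName :: Text
  , dtoDeps :: [Text]
  }

-- Explicit conversion layers at the boundaries of the context.
toRow :: Package -> PackageRow
toRow p = PackageRow (packageName p) (map packageName (packageDeps p))

toDTO :: Package -> PackageDTO
toDTO p = PackageDTO (packageName p) (map packageName (packageDeps p))
```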
An example of how this bit me when I was young and hopeful [09:01.040 --> 09:07.200] obvious without much practice during meeting with other people we would try to define a json [09:07.200 --> 09:14.640] based format for data exchange between several systems and we had elixir systems php ruby python [09:14.640 --> 09:21.440] and these all you know give you several slight differences in how you can you know have data [09:21.440 --> 09:28.480] types encoded in these languages and for example if you are dealing with rubies or php users they [09:28.480 --> 09:34.320] will try and push heterogeneous lists in the data format so you can have a nint a string [09:35.040 --> 09:40.000] and natively in Haskell we don't do that so we would have to create some abstraction on top of it [09:40.800 --> 09:48.000] and I was realizing that I was constraining myself with the capacity of each language [09:48.000 --> 09:55.120] to create this data format based on json but you don't have to do it you can have your fully [09:55.120 --> 10:03.200] external way to talk to your mates your other systems and have a different representation [10:03.200 --> 10:11.280] inside your core components for example if we apply this to flora we can see that actually [10:11.280 --> 10:17.520] I have my business objects living inside my bounded context when I need to store them I serialize [10:17.520 --> 10:24.320] them I serialize them to a data access object that will be compliant with what my database expects [10:24.320 --> 10:29.520] so it means no fancy mutually recursive types for example or something like that and when I need [10:29.520 --> 10:36.080] to send that on the wire I will serialize that to a format that is easily representable by xml, [10:36.080 --> 10:41.040] json and other you know various cursed binary representations that we may find especially [10:41.040 --> 10:48.800] in the banking system so in the end if I was to summarize bounded context you know I showed you [10:50.160 --> 10:56.400] a very simple diagram earlier of flora but now how does it interface with each other so we have [10:56.400 --> 11:03.440] details between each component and especially between the clients and us daos for storage access [11:03.440 --> 11:11.360] and inside each component we operate on our business objects it so happens that the business [11:11.360 --> 11:16.800] object can be extremely similar between the web and the core components but sometimes they are not [11:16.800 --> 11:24.000] and I think it's very liberating to know that you don't have to keep to a single representation [11:24.000 --> 11:30.560] from a to z all the way you really can have conversion layers between your components between [11:30.560 --> 11:36.560] your interfaces and it's perfectly all right for example the retrieval but reading configuration [11:37.360 --> 11:42.720] the 12 factor application model tells us to read configuration from the environment from the shell [11:42.720 --> 11:50.480] so what we have on the left is the conflict type which models what I get from the environment [11:50.480 --> 11:56.640] with a twist because you know I can force some types it's not all text base I can force my [11:56.640 --> 12:04.560] parsing of HTTP ports to be a word 16 for example because I'm not so interested in you know having [12:04.560 --> 12:13.200] port number of one million and unless not without overflow so I've got my xml configuration that [12:13.200 --> 12:17.840] describes for example the first member is db config with a pool configuration so it's 
So I've got my external configuration type, which describes, for example, a dbConfig field with a pool configuration: all the information I need for the database pool. And then the internal configuration holds the pool itself. It's very useful, because then I have this very explicit conversion, and it's perfectly all right to change something inside or outside my core components, because I only have this one bottleneck that I can easily change. One more step towards fearless refactoring.

Separating commands and queries. This has practical effects in terms of operations and infrastructure, and also in terms of ergonomics for the people who read our code. In a practical way, if we know that we have a recurrent, fairly heavy processing query that runs and can take significant locks or CPU on our server, we have the option of having these queries run on a read-only replica of our database, and putting this replica on another machine. PostgreSQL, for example, a very specific example but one I can talk about, gives you read-only replicas which take read-only queries and will be very angry at you if you ever try to mutate the state of the replica. So you have your primary server, upon which you perform mutating commands, and it streams these changes to the read-only replica, and the replica provides you with a read-only interface that is enforced not only at the level of the types in your application, but fundamentally in the protocol itself: you will get a runtime error if you try to mutate that state. You can't unsafePerformIO or unsafeCoerce your way around that.

Something I learned at my current place of employment, Scrive, is to have a physical separation in the code between the commands and the queries. The dialect, the idiom that we have, is that we have these .Query and .Update modules, in which we put the read-only and the mutating queries respectively, and when we import them, we qualify them, for example "import qualified ... as Query". Then there is a visual indicator, very bare-bones but it does work, that this is a query that is going to be read-only: it's not going to increment a counter in a side table because you performed something that is seemingly read-only. A good example is LinkedIn: when you view someone's page on LinkedIn, they get a notification. You would think that viewing something is fundamentally read-only, even the terms "reading" and "viewing" suggest it, but perhaps there is a counter that is incremented, with user tracking, so that someone can later report who has viewed their page. If you can take one more step towards separating the queries and the commands, it's much easier to know which kind of operation you're performing at which place in the code.
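A sketch of that module idiom, with hypothetical module and function names; in a real codebase the two halves live in separate files, so only the call site below is shown as code:

```haskell
-- File Flora/Model/User/Query.hs: read-only operations only, e.g.
--   getUserByUsername :: Username -> DB (Maybe User)

-- File Flora/Model/User/Update.hs: mutating operations only, e.g.
--   insertUser :: User -> DB ()
--   deleteUser :: UserId -> DB ()

-- At the call site, qualified imports give the visual indicator:
import qualified Flora.Model.User.Query  as Query
import qualified Flora.Model.User.Update as Update

signup :: User -> DB ()
signup user = do
  existing <- Query.getUserByUsername (username user)  -- clearly read-only
  case existing of
    Just _  -> pure ()
    Nothing -> Update.insertUser user                  -- clearly mutating
```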
So we could go even further and declare queries and commands as effects, with their own connection pools. For example, I don't only have one DB effect in my stack: I'm declaring that I'm performing a read-only operation on the read-only replica of my PostgreSQL database. One more step. Of course it can be a technical detail, but I also think it's very important to be able to say to the readers of your code what you are performing and which side effect it has, especially in a system that you have ownership of. And now, that's my anarchist tendencies coming up: let's keep our distance from the state. The state is best contained.

The cache of our application is actually a bounded context in its own right: it has its own lifecycle, data storage and API. And by decoupling our application monolith from its state, we have walked a significant portion of the path to a setup where we can have multiple instances of our application serving data from the same database and cache. At this point, by ensuring that the database server keeps operations in sync, we've got higher consistency for the application. That's the CAP theorem for distributed systems: is your application Consistent, is it Available, is it tolerant to Partitions? In some industries, where you work with very sensitive data, if you have a production incident you can't risk having inconsistent data, or an inconsistent state where people can read someone else's private folder. It's better to shut things down for a bit: we keep calm, we take a deep, long breath, and then we restart the system. Availability has to take one for the team in order to keep things consistent, and tolerance to partitions can go out the window.

For flora, for example, it's very simple: we can have our clients talk to our nginx gateway, and then multiple instances of flora that still speak to the same database server for mutating operations and to the same replica for read-only queries. I'm not selling you microservice architectures and scale-to-the-moon type of stuff, but I think it's a very decent way to start a monolith. We all know that a good distributed system has to start as a monolith, and then you split it further and further. If you start with a microservice-based architecture, you might end up with a distributed monolith, and the whole point of a microservice-based application is to have independent contexts that can still run. Here we don't take the bet that every component is fully independent: we acknowledge that we have some boundaries between the web, the core and the jobs worker components, and then within themselves they have their own contexts.
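Coming back to the idea of declaring queries and commands as separate effects with their own connection pools, here is a minimal sketch in the same effectful style as before; the effect names are hypothetical, and an interpreter would back DBRead with a pool of connections to the replica and DBWrite with a pool to the primary:

```haskell
{-# LANGUAGE DataKinds, FlexibleContexts, GADTs, KindSignatures #-}
{-# LANGUAGE TypeFamilies, TypeOperators #-}

module ReadWrite where

import Data.Text (Text)
import Effectful
import Effectful.Dispatch.Dynamic (send)

-- Read-only operations, to be interpreted against the read-only replica.
data DBRead :: Effect where
  QueryName :: Int -> DBRead m (Maybe Text)
type instance DispatchOf DBRead = Dynamic

-- Mutating operations, to be interpreted against the primary server.
data DBWrite :: Effect where
  InsertName :: Int -> Text -> DBWrite m ()
type instance DispatchOf DBWrite = Dynamic

-- The signature alone tells the reader this only touches the replica…
lookupName :: DBRead :> es => Int -> Eff es (Maybe Text)
lookupName = send . QueryName

-- …and this one clearly needs the primary.
rename :: DBWrite :> es => Int -> Text -> Eff es ()
rename uid = send . InsertName uid
```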
It's also about realism: do you want to scale to the moon and raise hundreds of thousands of dollars from venture capitalists, or do you want to create a nice community website that indexes packages for the Haskell ecosystem?

I'm going to make a short detour here: directing our workflows with types. It's a technique that brings together type safety and ergonomics, which is one of my favorite subjects, to create type-directed state machines. Very fancy words; basically, it's the way that your operations are composed together, and you will be driven to compose these operations via their types. It can be a bit scary sometimes to think of your business operations as a state machine, but it gives us a terminology and a literature to draw from, and to think about how we organize and compose our operations. For example, here we have a WorkflowState, which can have three values, Arrival, Processed and Departure, and a Workflow that has this state as a type parameter. We have a newWorkflow value that creates a workflow, w1, and then the processWorkflow function takes a workflow, but not any kind of workflow: it has to be set to Arrival. It can only take newly arrived workflows, and then it sets them as Processed. Again, this could be, in quotes, "trivially" implemented at the value level with properties of the workflow objects, and we could very easily check these properties at the value level, in the code, but here I factor out all these checks and put them at a place where the compiler can guide my hand and tell me where I went wrong. Finally, sendBackWorkflow can only take processed workflows, by the laws of the types, and then sets the workflow as departed. If I compose the functions in the right order, so newWorkflow and then a pipeline of functions, piping it into processWorkflow and sendBackWorkflow, everything is good. If I try to skip a step, I will get a compiler error that says: you wanted me to pass an Arrival workflow, but actually I need a Processed workflow. And you're not sending that code to production, because this code does not compile.
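A sketch of that workflow state machine; the names follow the talk, the details are guessed:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}

module Workflow where

-- Promoted to the type level: the three stages of the workflow.
data WorkflowState = Arrival | Processed | Departure

newtype Workflow (s :: WorkflowState) = Workflow { payload :: String }

-- Creating a workflow always starts it in the Arrival state.
newWorkflow :: String -> Workflow 'Arrival
newWorkflow = Workflow

-- Only a newly arrived workflow can be processed…
processWorkflow :: Workflow 'Arrival -> Workflow 'Processed
processWorkflow (Workflow p) = Workflow p

-- …and only a processed workflow can be sent back.
sendBackWorkflow :: Workflow 'Processed -> Workflow 'Departure
sendBackWorkflow (Workflow p) = Workflow p

-- Composing in the right order compiles:
ok :: Workflow 'Departure
ok = sendBackWorkflow (processWorkflow (newWorkflow "w1"))

-- Skipping a step does not: the compiler complains that it expected
-- a Workflow 'Processed but got a Workflow 'Arrival.
-- bad = sendBackWorkflow (newWorkflow "w1")
```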
In terms of web application development, there are also some limits. For us Haskellers, we like to put everything at the level of types and think of our code as being formally proven or correct by construction, but sometimes we must not drink all the Kool-Aid. For example, database layers that promise type-safe SQL. If you ever find a database layer that promises type safety, either it's the kind of type safety that is trivial to implement and totally expected of the tool to have, or it has encoded the semantics of SQL at the type level, and we've either found the golden goose or someone who has clearly underestimated the difficulties of SQL semantics.

Also, SQLite for development and PostgreSQL for production: that's something the Python community popularized in the 2000s and 2010s. We can accomplish great things by lying to the universe, but we can hardly accomplish anything by lying to ourselves, and SQLite is its own system. Unless you somehow code perfectly in the common subset of SQL supported by both implementations, you will be maintaining two sets of database migrations, and sometimes of code. PostgreSQL has very good features; SQLite has different but also good features, not its type system of course. But if you get used to one locally and then discover the second one once you're deployed, you're going to have a bad time, and the muscle memory (because the brain is a muscle) that you accumulated with SQLite will be fairly useless with PostgreSQL.

So, where to go from here? Documentation. You produce documentation, and we have many ways of producing documentation. We also hold tremendous power in the types: coupled with introspection, it means that the algebraic data types, the sum types and the product types, so the enums and the records, can serve as the backbone for further documentation. The types themselves are not documentation, but they can be used to guide the reader. And remember how I told you to write tests? The best tests are those that describe real-world behaviors, and if you can even produce a summary web page that shows the behaviors and the high-level paths taken by your program according to some input, this is particularly helpful for less technical people, like product managers, who want to know the behavior of your program. If you can present a nice interface showing how the code is executed according to some high-level business operation, it's even better.

I have a couple of sources for what I'm saying, I'm not pulling that out of my arse. The first one is Domain Modeling Made Functional by Scott Wlaschin. It's an excellent book, written in F#, for functional and DDD practitioners; I encourage you to read it. As well as Living Documentation by Cyrille Martraire, and that one is also excellent: it treats documentation as its own living system, one for which you will have real clients, because PMs and other engineers in your organization are consumers of documentation. And of course, farcical amounts of caffeine, as Fraser said earlier. So that would be the end of my talk. Do we have a couple of minutes for questions, perhaps?
Yes, thank you Hecate, and we do have about 10 minutes for questions. Yes, the young man there.

This is more of a comment than a question. One little detail that I think you could have sold as well is the fact that when you do the data kind annotation on your workflow, instead of checking that at runtime, we do the type annotation, and that's actually more efficient, right? Because of type erasure, there's no runtime data or check that has to happen.

Yes. So what Björn says is that there is indeed a matter of efficiency, because the data kinds, when we encode the nature of parameters in our workflow, all go away at code generation. So if you are in a setup where you need very minimal generated code, if you are in a tight loop for example, this code is completely erased at code generation time, and indeed you spare some CPU cycles.

Any other question? You can also call me out on my bullshit, I won't be offended. Yes?

Do I need a mic, or am I...? I can repeat your question.

The libraries that offer type-safe database access, which are, I'm sure, hideously incomplete, also offer abstraction over different database backends, which is one of the problems you were talking about, like why you're using Postgres to develop locally. So my question is: are there really situations for flora.pm where those libraries didn't provide a feature that you needed?

Yes, so the question is: those libraries that encode all of the semantics of SQL at the type level, are there situations where they don't provide features that I would need for flora.pm? Yes. As I said earlier, I'm very preoccupied by ergonomics. Twenty minutes of compilation time and twenty gigabytes of interface files on disk, I would consider that a problem in terms of feedback loop for contributors. At my previous place of employment, we used the toolkit Squeal for type-level encoded SQL queries, because they were business-critical, so we wanted to invest in something very much type-safe, because of the critical aspect of these queries. It was hell. It was horrendous, not only to view and to review, but also because it took so much time to compile, like, unironically, twenty minutes, and we had some problems with Stack because the interface files on disk were taking way too much space. Type families in Haskell are best consumed responsibly, and I'm a Servant user, so I can't shit too much on type families, but in some cases, very specific cases, it's best to rely on the expertise of outside systems.
For example, my best friend, who is actually here at FOSDEM, is my database administrator at work, and I keep him close.

Do you have any experience onboarding newer developers to maintain the project, or even add new features to it? If so, what was the experience like, especially if they don't have any Haskell experience?

Yes, very good question: do we have any experience onboarding new developers on the project? Actually, this talk was supposed to be a continuation of the different onboarding sessions I run on flora.pm. Sometimes, if you find me on Discord or Matrix, I will share my screen and introduce people to the codebase, and I think that's one of the most important aspects of flora as a project: not only is it a community tool that aims to satisfy its users, it's also a vessel for teaching. Many of the techniques that I explained in this talk are implemented in flora, and flora is my de facto codebase for teaching these techniques. I have had very bad experiences with community tools that have aged badly, where the code is only known by the 10% of maintainers who stick around, even though the vast majority of contributors to a project are the 90% of people who make one pull request and then go away forever. It's very hard to retain institutional knowledge, and it's also very hard not to aim to please only the 10% of people who stick around and submit patches regularly. So yes, I would say, and that's the goal of flora: onboarding new contributors easily is actually a feature, and if it can't be done anymore, it's a bug.

Any other question? Nope? Oh, sure.

With such a representation, is it possible to write a function that, say, generates a diagram?

It technically is, and I have references for you. So the question is: can we generate diagrams from such representations? Because indeed we have the possible values at the type level, and we can do many things with our types, including inspecting them. So yes, I believe there are several libraries on Hackage that aim to do this. For example, provenance: it's a library that gives you the path that the data takes, the provenance of your data throughout the code. I would say it's one of the greatest things to be able to do: to represent your code, and to extract facts and movements from your code in a higher-level representation. So yeah, I believe we can do it today. I don't do it personally, but I think it's possible.

Is there time for more questions? Or you can, like, duel me if you want to challenge my beliefs. Okay, that seems to be it. So thank you again, Hecate. Thank you very much.