Great, yeah. Sorry for the usual show; I'm sure you're used to it by now. Great to see so many people here, amazing. Thanks for your interest in Co-op Cloud. I'm decentral1se, that's my internet name, and these are my enterprise slides for Co-op Cloud. I've been working on this project for maybe two to three years now, so there's a lot of knowledge of the project, how it happened and what came to pass, locked up in my head. But I'm only one person involved in what has now become quite a wonderful project, I'd say, involving a lot of different collectives and a lot of different groups. So this is just a moment to offload what we've been up to for the last two years, and all the hot takes are mine alone. Some of the images are taken from, shout out to fellow worker Trav, the internet gardening collection on Are.na. I totally recommend checking that out, it's great.

So, Co-op Cloud. This is the official, website-ready description of what Co-op Cloud is: a software stack that aims to make hosting libre software applications simple for small service providers, such as tech co-ops, who are looking to standardise around open, transparent and scalable infrastructure. Which is a lot. So I was thinking, how am I going to explain this? Because we come straight in with "a software stack", but actually it's much more than that. I think you could argue the project is more a social endeavour, social organising, than a technical thing, although of course the two overlap. So I thought, okay, we'll go back and do a history lesson on how the project started. I feel that's an important angle for introducing it: I want you to understand where the project is coming from, so that maybe you feel welcome to join it and shape its future. And I think this historical view of where things come from helps ground the project in an actual human need. There are a lot of software projects out there where you just wonder, why does this exist? We're trying to show that this is socially useful, and that the reason we came up with the project, initiated it and tried to make it work was based on what we needed at the time. But that may not necessarily be the case in the future.

So, putting that to one side, let me introduce you to the magic brain of the Autonomic co-op. These are some of the people in the co-op; that's me on the banjo. It's not all of them, there are 13 of us, as far as I remember. We have a new website that was just put up last night, so check it out, it's hilarious. I don't know if I can get it up here, but I'll try at some point.
So yeah, we're a technology cooperative, a worker-owned cooperative. That means the people who work in the business own the business; it's run and managed by ourselves. And that means that when we come in to work, we have the chance to make decisions about every aspect of how the workplace runs. What kind of work do we want to do? How do we want to make decisions? How do we deal with money? How do we find new work? Who do we want to work with? One member wants to work with a certain group, but someone else disagrees, so how do we work that out, how do we deal with conflict? It's an end-to-end workplace situation where you have the chance to be involved in every step of the process, and of course you don't have to do that alone, you can do it with your friends. So that's the model of the cooperative.

I'll come back to the website maybe in a bit. But one of the ideas behind what a cooperative is, is this list of cooperative principles, which you can see on the right-hand side there: for example autonomy and independence, economic participation of the members, open and voluntary membership. These are non-binding principles of what it means to be a co-op. Principle six, for example, is that you will work with other co-ops, so we want to expand cooperation in the ecosystem of groups trying to do this as well. And a lot of these hinge on actually having money to survive, in order to practise what it means to be a cooperative.

So we're a technology cooperative, and what do we do exactly? You could say we do basically anything and everything: we would happily develop a new piece of software, but we'd also come to your house and fix your toaster. We try to work with people that we want to support, whose work we like, or who connect with our values. And one of the things we've done quite successfully, in my opinion, is to run a free software stack internally and for the people we work with. All the things we use are free software, so we're pretty good at managing our own internal infrastructure. And so naturally we would offer that to groups: do you want a website, do you want your own chat system like Matrix, do you want a wiki, or a Nextcloud, or whatever, the classic hosting situation. And one of the problems that has come up, with Autonomic running for several years now, five years or so, is that hosting is difficult to make money out of. People are used to just getting this stuff for free, for very understandable reasons. Big tech.
And at some point we want to scale up the co-op: we want to involve more people, we want to be able to make a living from it, to survive, pay the rent and so on. And then we started realising that, actually, we make more money, or people are happier to pay us, when we're doing support. That means talking to people, getting on a call and asking what is this software, or what is a wiki — everyone says "wiki" and I don't know what a wiki is — and just having a chat about it. But it also means being around later: once they ask for the thing, like a Nextcloud, you set it up, and then you don't disappear, because you're a cooperative and you intend to continue. Sustainability is one of the core principles. So we stick with them, we chat, we stay in contact over email and so on. So this is the dilemma: we definitely want to do more hosting, because people need digital infrastructure to do their work, it's a base layer of how to organise things. But it's difficult to survive as a technology cooperative only doing hosting, so you need to expand what you do.

And one of the places we ended up was using Cloudron, which is a one-click-install open source system that I would still recommend. It's great, it got us really far. A lot of the next slides criticise it, so I'm just going to say it's great on this slide. You can see you get the classic selection of open source apps, if you're familiar with that stuff — Rocket.Chat, for example; we use Rocket.Chat to communicate in the co-op. What this enabled us to do was work with different groups that may not necessarily have the money to make working with them sustainable, but still support them to do their work. Cloudron was a real get-out-of-jail card in that sense: I can just hit the WordPress button five times and we've got five WordPress sites, and people are posting, organising, working, and we're supporting them. Brilliant. For those who don't know, with Cloudron you fire up a server, SSH into it, run a command, and it spins this thing up, and then you can install multiple apps on the same server. So it really reduces the costs of rolling out digital infrastructure.

Let me check the time. Sorry. Yes, right. So, remember I said Cloudron is cool — but at some point the core of the product, the web front end, became proprietary. They made a switch.
In some sense I can't blame them, because, as I said, if you want to survive you need to make a buck and pay the rent and so on. But it made us nervous. When we work with people as a cooperative, we want to say: we'll be around for you, we want to continue, we want to do this sustainably. And remember one of the principles is autonomy and independence: can we keep doing what we want to do without relying on one specific thing? There are interdependencies, of course, but still. So when Cloudron made the decision to make the front end proprietary — you can still use it, it's a great system — we thought, oh no, what's going to happen?

And then we started looking a bit deeper. We realised that with Cloudron, when you click and install an app, the app is an image; it uses a container system, a kind of light virtualisation layer. And they package all the apps themselves. When the people who work at Cloudron Inc., or whatever it is, want to provide a new app, they make a new Git repository and package the thing up. That's great, and it works really well. But then we realised — I can't remember the specific apps, but take Nextcloud, for example — some worker at Cloudron had packaged Nextcloud, yet when we went to the upstream Nextcloud repository we saw that they already provided an image. And we thought: why aren't we just working with them? That would connect with our values of expanding cooperation across the different layers of the software stack. And the more we looked, the more we realised that free software communities were really converging on the idea of having a packaged image inside the repository: you're hacking on your source code, and an image is built from it. We could use that; it's exactly what Cloudron needs. So we thought, okay, that's interesting, let's think about that.

And then the logical end of the paranoia, or whatever it was, was this: we're relying on this one company, this one group. Autonomic is a cooperative that supports maybe 20-plus groups, and a large proportion of them have very little money to pay for infrastructure. If you imagine Cloudron taking a bigger step into proprietary solutions, or increasing the cost, or who knows what, we'd be in a position where we couldn't support those groups. And that's not something we wanted to have to deal with. So we started to think: we're a co-op, how do we make this better?
And that is the [12:43.680 --> 12:50.560] start of call clouds. So now we're getting back into another take on that original really long [12:50.560 --> 12:57.280] sentence that had things like software sack in it. Yeah, we were like, let's just eliminate this [12:57.280 --> 13:03.280] issue of proprietary, you know, angles or whatever. Oh yeah, here's where I just couldn't get the [13:03.280 --> 13:09.760] emoji in the slide. So I just gave up. I, yeah, we're just like, we'll just copy left the entire [13:09.760 --> 13:17.040] thing. Let's just do that. That seems sensible. Cool. And then as I mentioned, yeah, you have [13:17.040 --> 13:24.560] a little favorite emoji. We want to work with the upstream developers of the software because [13:26.640 --> 13:31.600] a lot of precarity is in the open source ecosystem and that there's, you know, certain [13:31.600 --> 13:36.480] developers which are doing unpaid labor to develop the software that a lot of us rely on. [13:38.640 --> 13:42.320] And they're providing these, you know, a lot of them are providing these packages and we thought, [13:42.320 --> 13:49.200] well, okay, we can just meet them where they're at, engage with them on the issue tracker, [13:49.920 --> 13:56.560] you know, speak to them, make them aware of our hosting efforts. And, you know, we're, we're in [13:56.560 --> 14:04.640] a sense like closer to end users in that, you know, it's like developers, posters, users, [14:05.840 --> 14:10.800] you know, summarized and often developers, well, they've got their hands full trying to [14:10.800 --> 14:15.280] make the software work so we can like help that connection. So we're trying to bridge that in [14:15.280 --> 14:24.640] that sense. And also, which we'll see later on, they're also providing, so they're packaging up [14:24.640 --> 14:32.320] their apps in images, but they're also providing this kind of extra configuration around it, [14:32.320 --> 14:37.040] which kind of tells you how to deploy the thing, which is great for people who are doing hosting [14:37.040 --> 14:42.080] because, okay, we have an app, but does it need a database, for example? So what, database? [14:43.360 --> 14:47.440] But they're also doing that for us as well. So if you develop open source software, thank you. [14:47.440 --> 14:55.760] Yeah. And then the democratic governance. We were going to initiate this project, [14:55.760 --> 15:00.480] let's say. I wouldn't say we like invented it, we just like collude a bunch of stuff together. [15:03.200 --> 15:09.840] But we wanted to not be in control of that. So not become the new cloud run, which is like, [15:09.840 --> 15:15.440] okay, let's set up some clear rules for how do you interact with this project and also [15:15.440 --> 15:22.000] on what basis are you also a technology cooperative? Are you an open source developer? Are you a user [15:22.000 --> 15:26.240] of this software? Do you want to support the host or do you want to support the... We can [15:26.240 --> 15:33.040] start to engage with it where we're at, but meet in this kind of common project. And obviously, [15:33.040 --> 15:42.320] the goal then is to sustain open source digital infrastructure and expand cooperation. [15:42.320 --> 15:51.040] Yeah. So moving out of the kind of history phase and now into what it actually is. [15:54.400 --> 16:01.360] Yes. So this word kind of pops up if you've checked the website or the docs, [16:01.360 --> 16:13.520] democratic tech collectives. So what is that actually? Autonomic is a... 
We're [16:13.520 --> 16:17.760] probably registered in the UK. We're publicly regulated. We're a cooperative society. [16:19.280 --> 16:24.080] We have gone through the paperwork to make that happen because we wanted to do that. [16:24.080 --> 16:31.920] But we recognize that not everyone will want to do that or be able to do that. [16:33.200 --> 16:38.080] And this really depends on a per country basis. If you're in the Netherlands, [16:38.080 --> 16:43.520] it's a different model or if you're in the UK, the way you relate to the legal system and the [16:43.520 --> 16:48.000] state is like... In Germany, it's quite difficult. I've been told there's no way to [16:48.000 --> 16:55.920] kind of just slot yourself in. It's not very easy to find information. [16:57.360 --> 17:02.560] And we don't want that to be a limiting factor in working together. So we were trying to [17:02.560 --> 17:10.080] conceptualize this idea of what other groups would we be willing to work with but not close [17:10.080 --> 17:20.880] that definition down from the outside. So we were thinking about other groups who want to work [17:20.880 --> 17:25.040] with their friends or together with other groups and have set up decision making, for example, [17:25.040 --> 17:29.760] collective decision making. So they're able to navigate what they want to do together. [17:31.280 --> 17:35.280] That was kind of like a thing we wanted to do. We wanted to be able to interact with each other. [17:35.280 --> 17:41.040] We wanted to be able to disagree and receive and send constructive feedback, reach compromise, [17:41.040 --> 17:46.720] stuff like this. This is kind of easier to do when people are already organized in their own groups. [17:47.840 --> 17:53.600] But yeah, this didn't really rule out individuals who are active in the project at the moment [17:54.320 --> 18:00.160] and also other technology cooperatives. We're kind of just trying to saddle through it somehow [18:00.160 --> 18:05.840] and be like, just work with us. Let's figure it out together. And this is already in progress. [18:05.840 --> 18:11.680] This has been in progress for some time, the formalization of what this means. And that's [18:11.680 --> 18:19.360] an open process, which would love to invite you to come check out. The configuration commons, [18:20.800 --> 18:27.200] this is a, so as I said before, we have open source apps that we can package them in images. [18:27.200 --> 18:33.680] And then we can specify a configuration around those images to describe how that app should look [18:33.680 --> 18:40.880] like in a deployment on a server live and well, people using it. And one of the things we know [18:40.880 --> 18:47.120] is was that the, as I said before, the configuration that was being provided by the upstream repositories [18:48.480 --> 18:55.600] was useful, but it didn't specify the full end-to-end production kind of scenario. [18:55.600 --> 19:01.600] And that's kind of a big word, but for example, how to back up the app data. Like the thing is [19:01.600 --> 19:06.080] deployed, people are using it and we need to make sure their data is safe. So we need to back it up. [19:06.080 --> 19:11.840] How do we back the thing up? How do we restore it? This is kind of like a step from it works to [19:12.640 --> 19:18.240] it works and it's safe for these people to use it. And we wanted to encapsulate that into our [19:18.240 --> 19:26.240] configuration. So this is a big part of it. We'll go through each one of these deeper. 
I'm just going to give an overview of them now. Abra is a command line tool — our own digital tool. We wanted to be involved in how our tools are shaped and how we use them. It's a great situation when you're dogfooding your own system with your peers and figuring out how it best suits you, according to your own constraints: we can't spend all day learning some obscure system, we can't invest too much time in certain things, we need to cut corners, and it's best to be involved directly in that process. And if we think back to Cloudron again — sorry, I'm really not trying to bash them — how do we interact with the system? In Cloudron's case it's a web front end. But if we're talking about technology cooperatives and other tech collectives, there's a difference to begin with, because Cloudron is maybe trying to provide for the, let's say, non-technical user, someone who can get in and click a few buttons, whereas we're already dealing with groups that are deeply involved in Linux system administration. So do we even need a front end? Those are the kinds of questions we get to ask and answer ourselves.

And then the collective infrastructure. There's a lot that goes on between getting an app out and someone using it: people have to meet each other and talk — who are you, what's going on, what do you do — and then money: where does the money go, what do you charge, all the social processes in and around that. We wanted to build those up too, because that's a huge problem when you're starting off as a collective and you're going, oh no, where's my bank account, how do I even get a bank account? You can see Open Collective, for example, doing this: mutualising the financial infrastructure and just getting people going. So yeah: docs, git hosting, getting off GitHub, that's cool, stuff like that.

So I'm going to go through these a bit deeper. Our proposal at the moment, as Autonomic — and now I can say the sentence and hope it's a bit clearer — is this: we initiated the project and we're deeply embedded inside it, but we're attempting to step out of it and re-enter on an equal basis with other collectives. In order to do that, we're proposing a federation model, based on a great project called CoopCycle, if you've heard of it, where basically different groups can interact with each other.
There will be democratic decision making and, yeah, some laws; we'll come up with a constitution together, all that kind of stuff. I couldn't fit it on this slide, but it's not just for chaotic nerds: we can also imagine the people who are using the software grouping together and joining the federation, to say, hey, this button should be over there, could you fix that — but in a more collective sense. We could be gathering money together, figuring out how to improve things; you don't have to be able to write software to join in. We could be connecting the different struggles to build up a better open source ecosystem. This is a process that's going on right now: invites are going out, we have a new round of private funding — thank you, private funder — and there will be more news. Check the website if you're interested, and if you're part of a collective, we'd love to hear from you.

Then there's this massive problem that we of course experience as sysadmins, and we've seen a lot of people join the old Matrix channels with it: it'll be one person setting up a few apps for friends, family, maybe their local community or food co-op, what have you, and a few months down the line it's completely overwhelming — the email broke, or it doesn't do this, or who knows what. If you do any sysadmin work, you'll know the story. But when you step into the Co-op Cloud project, you have the chance to meet your peers who are also doing this work. They may not be involved in your project whatsoever, but that's not important. We have this idea of the config commons, which means we can share work on, say, the Nextcloud recipe — that's what we call them, we'll come to that in a bit. So we can work together on the same configuration, and that means we're sharing tips and tricks and talking together: this group of users wants this, do you want that, oh, I tried this, that kind of thing. And this has turned out to be a great feature, just opening the door for people to come in and work together. Because there are a lot of tools that promise reuse and mindshare and a collective point of reference, but don't really live up to that ideal of reuse, of not repeating yourself. We were conscious of that, and we're trying to work on it.
Focus on onboarding and collaboration: we wanted it to be easy to onboard. We want more democratic tech collectives to pop up, we want more technology cooperatives to be able to start, we want people to be able to decide: I want to own and run my own workplace, and I want to bootstrap some infrastructure for groups I find cool in my city. That's what Co-op Cloud is about; it's a tool set for those people. It has to be easy to use, of course, and we're very focused on that public: groups who have already decided that this is what they want to do. They're on the terminal — at the moment we only have the terminal client — and they're getting into that. We can't do everything for everyone, but with that public in mind we can be focused in how we try to make it accessible and usable. And yes, cooperating with other networks: within our network we of course want groups to join, but for groups that don't want to join, we still want to work with them if they have the same values and are doing similar things; we can interface with them. We've already seen that happening, and I'll come back to it in a bit. Once things are clear — and they are getting clearer: we are a group of cooperators, we have this configuration commons, we have these tools, we're specifying what we are — it's easier for other groups to say, okay, I understand what's happening here, and I'd like to do this with you. A concrete example: finding funding together.

The configuration commons: yes, the open source apps you love. It's this catalogue of the software that's out there, that people are developing — the Nextclouds, the MediaWikis, the Synapses, the Giteas, the whole thing. We have quite an expanding commons, so you can deploy a lot of apps. And that's the thing: people come to us and say, oh, I need a calendar, or I need a note-taking app, or whatever, and then we're out hunting in the open source ecosystem for apps to add to the catalogue. So that's that. Then, to come back to this idea: the image is the app, the packaged thing, and then there's a wrapper around it which specifies, end to end, how it should look in production.
And we were conscious not to reinvent a packaging format. We didn't really know what to do at that point, because we thought: we need to be able to say, in one plain-text config file, let's say — here's the app, and how do you back it up, restore it, take it down, bring it up, configure it, all that kind of stuff. And as it turns out, the Docker Compose ecosystem has been moving towards what they're calling the Compose standard, and if you've ever seen a Docker Compose file, it's a YAML text file, and it specifies pretty much what we were looking for. So we figured: great, upstream developers are already using this, it's a developing open standard, let's build on it. And as it turns out, I think that was a good choice and has had many benefits, which we'll come to on the next slide.

So we're working with upstream developers. In my background I've done a lot of configuration management work, for example with Ansible, where there was always this grand ideal of reusing roles, if anyone here has used Ansible. It's kind of a packaging format for: you have a server, you want to install, say, Apache on it, you tell Ansible to install Apache for you, then you want to install the next thing, and you package all of this into a role. At some point I realised that everybody is writing the same roles, and it's very difficult to share and reuse other people's stuff. It definitely happens, but not on the scale I was looking for, so I was looking for alternatives. In this project, so far, we've seen — take our Matrix servers, the Synapse installs — multiple collectives look at the config and say, this works for me, I'm going to use this, and then change the config in collaboration with other groups. So we're already seeing that people are able to make the changes that work for them without breaking other people's installs, and when things need to be worked out, they speak to each other and things move along. And again, the Compose standard really helps here: it's quite flexible and lets people move around each other inside the configuration, which will hopefully become clear on the next slide.

Yes, great. So we're calling the app configuration a recipe — cooking-inspired. It's a git repository, on our Gitea — Gitea, "git tea", I don't know how to pronounce it — and it's a bunch of config files.
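To make that concrete, here is a rough sketch of what a recipe checkout tends to contain. The clone URL and file names are assumptions for illustration only; the real recipes are listed at recipes.coopcloud.tech, so check there for the actual layout.

    # A rough look inside a recipe checkout (URL and names are illustrative).
    git clone https://git.coopcloud.tech/coop-cloud/nextcloud
    cd nextcloud
    ls
    # compose.yml        -> the base Compose config: which image, volumes, secrets
    # compose.smtp.yml   -> an optional add-on config, chained in only if wanted
    # .env.sample        -> per-deployment settings you copy and fill in (domain, etc.)
    # abra.sh            -> helper hooks, e.g. commands for backing up the app's data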
And as a collective, you come here, look at it and say: okay, this is the configuration that specifies what this thing should look like in a deployment. One of the magic sauces of this format is that, if you've ever used the Docker Compose command line client, you can chain compose files: you can say something like "docker compose -f compose.yaml -f compose.someotherthing.yaml up", and the system will internally merge all of those configurations together and then roll out the app. I never had occasion to use that until I realised that maybe somebody wants to use Postgres and somebody else wants to use MariaDB. These are simple choices for a deployment, and if it weren't possible to make them easy to handle between different groups, this would never work out. So: very, very thankful to the people who wrote this standard. You can see here — and it'll become clearer when we talk about the command line client — that you can specify: I want MariaDB and I want the SMTP config, bundle it all together, roll it out; and I don't want Postgres, or this, or that. It allows people to expand the config to suit their needs, document it, and let others know there's a new feature. It's quite nice: you can just be hosting some software and somebody says, oh great, OpenID Connect works with this now, just load this bit in. It's really helping us move forward and get things done.

At the moment we have a command line client called Abra, and this is the day-to-day interface for managing your Co-op Cloud install: you have a server, you deploy stuff to that server, and you're using Abra on the command line. That was a big decision for us at the time, because we didn't have much money and we didn't know if we'd be able to pull off a web front end. And again, as I mentioned, we were trying to target a specific public: are they people who know and are comfortable with the command line? Yes? Okay, let's go ahead with this and try to make it work. So we wrote the first version in Bash, and I still completely recommend doing that — just unleash your inner Unix. It was great; it got us from zero to hero in a relatively short amount of time, with a system that worked. One of the core ideas behind developing our own tools was that we were very conscious we might fail and not be able to get it done, so we wanted the config commons to live separately, not interdependent in a way where if one broke, the other would break.
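A minimal sketch of the compose-file chaining described above; the add-on file names are placeholders, but the multiple "-f" merging behaviour is standard Docker Compose.

    # Chaining compose files: later files are merged over earlier ones.
    docker compose -f compose.yml -f compose.mariadb.yml -f compose.smtp.yml up -d

    # Same recipe, different database choice for another group's deployment:
    docker compose -f compose.yml -f compose.postgres.yml up -d

    # You can inspect the merged result before rolling anything out:
    docker compose -f compose.yml -f compose.mariadb.yml config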
So even today, if you ignore that Abra exists, you can still drop into the command line, run a bunch of commands and roll out the app, which is great. That's how we originally conceived it with the Bash version: it's just running a bunch of commands, and we'd laid it all out in a way that said, okay, you can just push this out. Thanks to Kalex, a fellow worker who's great at Bash programming; we wouldn't have done it without them.

Then we rewrote it in Go. We actually managed to get some funding, which we'll come to after — I guess the funders aren't Bash programmers; if they'd read the source, they might have felt otherwise. By that point we were running into issues with the Bash implementation, which we actually felt quite proud of: we'd gone ahead and found the limits of the simplest possible thing we could do. The issues were: it was difficult to install on multiple systems, because it relied on a number of commands that weren't always available on, say, a Fedora or a Debian — so, portability. We were also struggling because we wanted to develop other aspects: we wanted the tools to be able to speak to the config commons without being directly coupled to it, and that ended up being a kind of JSON catalogue, but parsing data formats in Bash is difficult, so we were pulling our hair out on that one. And then concurrency: we were struggling with horizontal scaling. If you work with 18 groups and they all want their own VPSes, and you end up with 10 servers, the tool has to fire a request at each of the 10, and if it goes through them one at a time, you have to wait. As a result of scaling up and using this absolutely pre-alpha software inside Autonomic for production purposes, we reached the limits of the software.

So we ended up rewriting it in Go. Somebody in Autonomic knew Go at the time, that's the honest reason, but we also saw that we could get the concurrency issue sorted, and the portability: Go gives you language-level features that make it quite easy to say, fire across these 10 things at once, and you can build a single binary, so people just get a binary on their system with everything baked in. The new problem is that it actually works: it does what it says, and people are starting to use it. So we're now into a kind of maintenance cycle; we've done the public beta, and people are using it and starting to rely on it.
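To illustrate the fan-out problem: the Bash-era approach boils down to visiting servers one after another, and even a hand-rolled parallel version in shell gets messy once you want to collect output and failures, which is roughly the part that goroutines tidy up in the Go rewrite. The hostnames below are placeholders.

    # Sequential: each server is visited in turn, so you wait on the slowest one.
    for host in one.example.org two.example.org three.example.org; do
      ssh "$host" docker info > /dev/null
    done

    # Parallel-ish in shell: fire them all off and wait, but collecting output
    # and errors cleanly gets awkward fast.
    for host in one.example.org two.example.org three.example.org; do
      ssh "$host" docker info > /dev/null &
    done
    wait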
And we've seen people hacking on it, submitting pull requests, checking it out, and it's all good. So, in essence, Abra is a Docker Swarm client. No, we don't run Kubernetes; we run Docker Swarm, and no, it's not dead. Docker Swarm is a technology that is still alive. We have a slightly strange, parasitical relationship with some of the big installs out there — some banks have Swarm installs of, you know, 10,000 nodes or something — and the current owner, I believe it's Mirantis, there's been some changing of hands that I can't remember, is still maintaining it. We're happy to see that, because we identified that swarm mode in Docker is pretty much the feature set we need, without having to go off and learn how to roll out a large system built for massive scale. Again, what was Autonomic doing at the time? Rolling out single servers, deploying a few apps, no more than 10 to 30, 50, 100 users or whatever. Swarm mode gives you the ability to roll out an app and, if it fails, roll it back for you, and we can bake that into our config. So we were getting the stability guarantees we needed. Not many groups were demanding it — when they had a MediaWiki installed, they weren't writing 99% uptime into the contract — but we still wanted to push for a high-quality, stable service when rolling stuff out. And swarm mode just covers that for us: the runtime, the container runtime.

So this is the architecture, let's say. On the left, you install Abra, the command line tool, on your local workstation, and then via SSH you tell Abra to manage a server. It reads your SSH config, connects to the server and says: okay, I recognise this server, I see there's a Docker daemon running on it and it has swarm mode enabled, cool. Then you can do horizontal scaling: you can load multiple servers into Abra. And you can share the Abra state between multiple people, so in a co-op of 13 people, everyone who runs "abra app ls" or whatever, to list the apps and the servers, sees the same state come out. We can go into that a bit later. So it's built for collaboration within the organisation you're in.
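A sketch of the day-to-day flow being described, assuming current abra subcommand names — check "abra -h" for your version, since these may differ — and with placeholder domains.

    # Assumed abra subcommands; domains are placeholders.
    abra server add swarm.example.org      # teach abra about a server over SSH
    abra app new nextcloud                 # create a new app from the nextcloud recipe
    abra app deploy cloud.example.org      # roll it out to the server it lives on
    abra app backup cloud.example.org      # the "it works and it's safe" part
    abra app ls                            # everyone in the co-op sees the same list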
And then the other mode is that you install Abra directly on the server and it stores its state there, which can be useful in specific scenarios; it was requested, and some people run it that way, I don't know. And then the final points. Moving on: collective infrastructure. Docs, git hosting, recipes.coopcloud.tech where you can see the list of all the apps we have. We have an Open Collective, with Autonomic as the fiscal host — a bank account in the UK. That's been nice, because once we got funding we could tell people: if you work on this, we can pay you, immediately; anything you do, you'll get paid. And that's great, because we know there's a lot of unpaid labour going on in the open source world, and we didn't want to be a project that says: contribute to our commons and get nothing back. No, we could actually pay, so we set an hourly wage and said, this is how much money we have, away you go. That worked out great, I would say.

And the server farm: servers.coop. I want to plug this other project — you can go to the website, servers.coop. We've been working with a group who've been developing software called Capsul — Cyberia, great hackers — and they've developed a system which is basically server provider infrastructure, like Hetzner: VPSes, I need a server, roll one out, I need another server, roll one out. And a few of us, myself included, were thinking: this matters, because a lot of collectives are centralising on Hetzner VPSes, right? And if Hetzner raises the price — we've already seen this with the increase in the cost of IPv4 addresses and the 10% increase this year — it gets more costly to run on Hetzner. It's always been super cheap, and that's been super accessible, but it could change, so what do we do? We need to build up this aspect of our stack and extend the cooperative layer down to the servers, and that's the idea behind servers.coop. Abra already supports an integration, so you can do something like "abra server new" and, if you've got a Capsul running on your server, it'll spin up a VM. So we're already trying to take the turn into cooperatively managed infrastructure and build those integrations into the stack. Whether or not they work that well remains to be seen.

Yes, and the European Cultural Foundation gave us upwards of 30 grand in funding, last year and the year before, in the context of the Culture of Solidarity Fund. That was amazing, thanks to them.
They were really great, and I would recommend applying to them. They supported us the whole way through; it was just fantastic. We wrote our application and sent it in to them, so pretty happy with that one. Great, nearly done.

I want to jump to this because it isn't vapourware. I find it so important to explain that people are actually using this; it isn't just some idea in our heads that we think is cool. People actually rely on some of the things that have been deployed. So lumbung.space is a project of about 13 apps, I think, on a server somewhere, initiated by an artist collective called ruangrupa, an Indonesia-based collective, in the context of documenta. They wanted to approach a group that connected with their values. They'd been invited to documenta to do the work, and they thought: okay, we're all the way over here in Indonesia, and we need to work with people based in Europe, so we're going to need digital tools, but we don't want to immediately jump onto the Google Drives of the world. So how do we extend our ways of working and our values into the digital realm? So they came looking for collectives and co-ops, and a friend put us in touch with them. And here are just some photos of us engaging in this massively multiplayer shared infrastructure project. It was really great. From my perspective, people understood what Co-op Cloud was and the mission behind it, and felt invited to look at the technology and what we were doing, to comment on it and give critique. We're often in a space where people want to talk about digital tools, but the first thing they say is, oh, I don't know anything about technology — which is kind of a hallmark, you have to excuse yourself or something. But we got past that, working with this group of people who may not consider themselves technical. And as we moved on, Co-op Cloud allowed us to just deploy the tech, forget about it, and get on with the support work. The last great thing I saw from this project was that the people using the tools were publishing videos about how to use the tools in the stack: they were in the Matrix chat and on the PeerTube, publishing "here's how you use PeerTube". Amazing, great educational practice. Totally check it out: tv.lumbung.space.

This was another project we met, also in the context of documenta. So this is a comics group.
Aircraft is a comic illustrators' union, which is bootstrapping at the moment. They saw lumbung.space and thought: this looks cool, and I like the idea of what's going on here, I understand what's happening — there are two groups cooperating, this culture-based initiative and this technology collective — and I can see the Co-op Cloud website, and we want to check this out. So we got over that fear and anxiety, moved through the money exchange, and said, let's work together. It was quite smooth. So I'd say Co-op Cloud is helping us put these things to rest up front: this is the project we're based on, and when we deploy your infrastructure it will be contributing to the commons, copyleft, democratically managed — you just get over the hump.

And Kotec is a new co-op set up in Poland, and that was a major boost; they've deployed a bunch of services, and some of their members are in the room. Super nice. It's a software stack that was initiated and is developed by cooperatives, so it should suit other cooperatives. Because if you want to start a new technology cooperative, how do you start? It's overwhelming. But now you've got this off-the-shelf project: get going. And once you enter it, you see all the other collectives and you think, great, we can learn, we can share. And again, it expands beyond the technical, so there's the infrastructure for payments and bank accounts and all this stuff, and people can really get moving fast, which I think is good. And we want people to start technology cooperatives — start technology cooperatives!

Yes, enterprise metrics: 21 collectives. Somebody counted them; maybe it was me at some point. There are a lot of groups involved in this, and if you go on the website there's a list of them in the blog post. We've got 160-plus recipes, which is a lot of open source apps; you can probably find what you need in there. And we're running 146 apps — I think I ran "abra app ls" at some point and counted — so yeah, heavily invested in this. And there are other collectives running the stack too.

And maybe, coming to the end — it's been a lot of detail, I'm even overwhelmed myself — the last few slides are a kind of philosophy take. We wanted to be another project in the ecosystem, and not become the project in relation to decentralisation, let's say. We really position ourselves against this big tech discourse and what's happening there.
And we thought we could contribute to the decentralisation of the internet and digital infrastructure by proposing Co-op Cloud in this form. But we're only one project, so I thought it would be good to plug some other great projects that I find super inspiring. I would say YunoHost is maybe one of the gold standards of community-organised infrastructure hosting. It has a different set of priorities and goals — everyone can be a system administrator, that's kind of their goal: let's make the information available, anyone can do this, get going. Brilliant project. Whereas we're going for specific groups, co-ops, people who are already in the game: how can we make it easier for them to keep going? So it's a slightly different layer, but I absolutely recommend YunoHost. Nubo, based in Brussels, check them out. CHATONS, a great network. CoTech is a network of cooperatives based in the UK, I think around 35 of them; check them out if you're looking for a job, get stuck in. Social.coop, building cooperatively managed Mastodon instances. Local-IT, a great collective already involved in Co-op Cloud at the moment. The Small Technology Foundation, who I always find inspiring — check that out, small tech, just plugging them. Oh, five minutes, cool.

The roadmap: as I said, we're building the federation, so now is a great time to join. As an individual, a collective or a co-op, please get involved if you want to. We're trying to find more money, of course; one of the goals of the federation is to achieve financial sustainability, so the co-ops and groups that join the project will have to decide: how do we fund development of the tool, and can we pay for hours around finance, admin, cash, all this end-to-end stuff. Kadabra is a new effort to have a server-side component — this is the thing Cloudron did amazingly well, auto-updating the apps — so we're trying to replicate that with a server-side component that understands: oh, someone who takes care of that recipe has uploaded a new version, I'm going to roll it out. And a web interface, which I maybe forgot to add to the slide, because it's still under discussion whether we need that or not.

Yes, coming to the end: I could do a demo, a chaos demo. Okay, maybe I'll just do a chaos demo. So I wanted to run — yes, just show you the command line client. Again, to contextualise: this is the tool that people who maintain the service will be running on a daily basis.
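Roughly, the kind of commands the demo below runs through look like this — again assuming current abra subcommand names, so treat it as a sketch and check "abra recipe -h"; the domain is a placeholder.

    # Assumed recipe maintenance commands (names may differ between abra versions).
    abra recipe ls                        # list every recipe in the catalogue
    abra recipe upgrade nextcloud         # check upstream for newer image versions
    abra app upgrade cloud.example.org    # then roll the newer version out to a deployment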
So it's supposed to make the job easy, and I won't go through all of it right now. But we do try to take the effort to explain the concepts involved in the project and what you can do. You can list all the recipes that are available from the command line, for example, and you can also do operations on recipes here, so you can attempt to upgrade one — oh, I probably don't have internet. Yeah, right, okay, well, anyway. You can do the maintenance commands on the spot: you're on your local workstation, you realise there's a new version of Nextcloud coming out, okay, let's get that upgraded, and you run the commands here. It basically operates on this directory of recipes, where you can see a selection of the apps. And this is just what I showed earlier, the recipe repository; this is the configuration that specifies how to deploy the app. There are some other details I didn't really go into, but one of the nice things we've wired up is that if you deploy this app called Traefik, then when you roll out Nextcloud it automatically configures the Let's Encrypt stuff, so you just don't have to deal with the HTTPS issue. It's already in the config; you just say, give me the thing.

Yeah, I don't know, was there much more? Oh yeah, I guess. Then there's just this kind of command-and-control interface where you can see, okay, what apps have I got on which servers and what do I need to do, and you can filter it by server or whatever. I won't type it out now. But yeah, maybe I'll call it a day. Thank you for listening to me talk for so long.

Thank you.